AIDA: Associative In-Memory Deep Learning Accelerator

Esteban Garzon, Adam Teman, Marco Lanuzza, Leonid Yavits

Research output: Contribution to journal › Article › peer-review



This work presents an associative in-memory deep learning processor (AIDA) for edge devices. An associative processor is a massively parallel non-von Neumann accelerator that uses memory cells for computing; the bulk of data is never transferred outside the memory arrays for external processing. AIDA utilizes a dynamic content addressable memory for both data storage and processing, and benefits from sparsity and limited arithmetic precision, typical in modern deep neural networks. The novel in-data processing implementation designed for the AIDA accelerator achieves a speedup of 270× over an advanced central processing unit at more than three orders-of-magnitude better energy efficiency.
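The abstract describes the associative-processing compute model: instead of moving data to a CPU, the processor broadcasts a key pattern to a content-addressable memory, finds all matching rows in parallel, and writes a result pattern back to the matched rows. The sketch below is an illustrative software simulation of that compare-then-write cycle (bit-serial addition via a full-adder truth table, a classic associative-processing technique); it is not taken from the paper, and all names in it are hypothetical.

```python
# Illustrative simulation of associative (in-memory) processing.
# Each CAM row holds operand bits; computation proceeds by matching a
# key pattern against all rows in parallel, then writing a result
# pattern to every matched row. Adds a+b in every row, bit-serially,
# one pass per bit position per truth-table entry. Names are hypothetical.

TRUTH_TABLE = {  # (a_bit, b_bit, carry_in) -> (sum_bit, carry_out)
    (0, 0, 0): (0, 0), (0, 0, 1): (1, 0),
    (0, 1, 0): (1, 0), (0, 1, 1): (0, 1),
    (1, 0, 0): (1, 0), (1, 0, 1): (0, 1),
    (1, 1, 0): (0, 1), (1, 1, 1): (1, 0),
}

def associative_add(rows, width):
    """rows: list of dicts with 'a' and 'b' bit lists (LSB first).
    Computes a+b in every row 'simultaneously', one bit position at a time."""
    for row in rows:
        row["s"] = [0] * (width + 1)  # result, one extra bit for final carry
        row["c"] = 0                  # per-row carry bit
    for i in range(width):
        for key, (s_bit, c_out) in TRUTH_TABLE.items():
            # compare phase: all rows matching the key are found "in parallel"
            matched = [r for r in rows
                       if (r["a"][i], r["b"][i], r["c"]) == key]
            # write phase: the result pattern is written to every matched row
            for r in matched:
                r["s"][i], r["new_c"] = s_bit, c_out
        for r in rows:  # commit carries only after the full pass
            r["c"] = r.pop("new_c")
    for r in rows:
        r["s"][width] = r["c"]
    return rows
```

Note that the runtime of each pass depends only on the operand width, not on the number of rows, which is the source of the massive parallelism the abstract refers to.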

Original language: English
Pages (from-to): 67-75
Number of pages: 9
Journal: IEEE Micro
Issue number: 6
State: Published - 2022

Bibliographical note

Publisher Copyright:
© 1981-2012 IEEE.


