AIDA: Associative In-Memory Deep Learning Accelerator

Research output: Contribution to journal › Article › peer-review

21 Scopus citations

Abstract

This work presents an associative in-memory deep learning processor (AIDA) for edge devices. An associative processor is a massively parallel non-von Neumann accelerator that uses memory cells for computing; the bulk of the data is never transferred outside the memory arrays for external processing. AIDA uses a dynamic content addressable memory for both data storage and processing, and benefits from the sparsity and limited arithmetic precision typical of modern deep neural networks. The novel in-data processing implementation designed for the AIDA accelerator achieves a 270× speedup over an advanced central processing unit at more than three orders of magnitude better energy efficiency.
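The abstract's notion of computing with memory cells can be made concrete with a small software sketch. An associative (content addressable) processor typically performs arithmetic bit-serially: for each entry of an operation's truth table it issues a compare pass that tags every memory row matching a key pattern, then a write pass that updates all tagged rows in parallel. The sketch below simulates element-wise addition in this compare/write style; it is purely illustrative and does not reflect AIDA's actual circuit design, and all names are hypothetical.

```python
def ap_vector_add(a_col, b_col, nbits):
    """Add two columns of nbits-wide integers element-wise using only
    CAM-style compare (match) and parallel-write passes, bit-serially,
    mimicking how an associative processor computes in place."""
    n = len(a_col)
    sum_col = [0] * n
    carry = [0] * n
    for bit in range(nbits):
        next_carry = [0] * n
        # One compare/write pass per (a, b, carry) truth-table entry.
        for pattern in range(8):
            pa = (pattern >> 2) & 1
            pb = (pattern >> 1) & 1
            pc = pattern & 1
            s = pa ^ pb ^ pc                       # full-adder sum bit
            c = (pa & pb) | (pa & pc) | (pb & pc)  # full-adder carry bit
            for i in range(n):
                # "Compare" pass: does row i match the key pattern?
                if ((a_col[i] >> bit) & 1) == pa and \
                   ((b_col[i] >> bit) & 1) == pb and carry[i] == pc:
                    # "Write" pass: update every tagged row.
                    if s:
                        sum_col[i] |= 1 << bit
                    next_carry[i] = c
        carry = next_carry
    # The final carry becomes the most significant result bit.
    for i in range(n):
        if carry[i]:
            sum_col[i] |= 1 << nbits
    return sum_col
```

Note that the per-pattern cost is independent of the number of rows: a hardware AP applies each compare/write pass to all rows simultaneously, which is the source of the massive parallelism the abstract describes.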

Original language: English
Pages (from-to): 67-75
Number of pages: 9
Journal: IEEE Micro
Volume: 42
Issue number: 6
DOIs
State: Published - 2022

Bibliographical note

Publisher Copyright:
© 1981-2012 IEEE.

Funding

This work was supported by the Israel Science Foundation under Grant 996/18.

Funders: Israel Science Foundation, Grant 996/18
