Abstract
This work presents an associative in-memory deep learning processor (AIDA) for edge devices. An associative processor is a massively parallel non-von Neumann accelerator that uses memory cells for computing; the bulk of the data is never transferred outside the memory arrays for external processing. AIDA uses a dynamic content addressable memory for both data storage and processing, and benefits from the sparsity and limited arithmetic precision typical of modern deep neural networks. The novel in-data processing implementation designed for the AIDA accelerator achieves a 270× speedup over an advanced central processing unit with more than three orders of magnitude better energy efficiency.
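The abstract's core idea — computing inside a content addressable memory by issuing parallel compare-then-write passes over all rows at once — can be sketched in software. The toy NumPy model below implements bit-serial addition the way a generic associative processor would: for each bit position, each full-adder truth-table entry is broadcast as a search key, matching rows are tagged in parallel, and the sum/carry bits are written back to the tagged rows. This is only an illustrative sketch of the generic associative-processing model, not the actual AIDA microarchitecture; the function name, truth-table layout, and data widths are all assumptions.

```python
import numpy as np

# Full-adder truth table: (a, b, carry_in) -> (sum, carry_out).
# An associative processor would apply each entry as one compare/write pass.
TRUTH = [
    (0, 0, 0, 0, 0), (0, 0, 1, 1, 0), (0, 1, 0, 1, 0), (0, 1, 1, 0, 1),
    (1, 0, 0, 1, 0), (1, 0, 1, 0, 1), (1, 1, 0, 0, 1), (1, 1, 1, 1, 1),
]

def ap_add(A, B, nbits):
    """Element-wise A + B via bit-serial associative (CAM-style) passes.

    Every row of the 'memory' holds one (a, b) pair; each pass compares
    all rows against a key in parallel and writes results to matching rows.
    Illustrative sketch only -- not the AIDA design.
    """
    n = len(A)
    carry = np.zeros(n, dtype=np.uint8)
    S = np.zeros(n, dtype=np.uint64)
    for i in range(nbits):
        a = (A >> i) & 1          # current bit slice of every row, in parallel
        b = (B >> i) & 1
        new_s = np.zeros(n, dtype=np.uint8)
        new_c = np.zeros(n, dtype=np.uint8)
        for (ka, kb, kc, s_out, c_out) in TRUTH:
            tag = (a == ka) & (b == kb) & (carry == kc)  # parallel CAM compare
            new_s[tag] = s_out                           # parallel write to tagged rows
            new_c[tag] = c_out
        S |= new_s.astype(np.uint64) << i
        carry = new_c
    return S
```

Note that the cost of each pass is independent of the number of rows — the same eight compare/write passes per bit position process the entire array — which is the source of the massive parallelism the abstract refers to.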
| Original language | English |
|---|---|
| Pages (from-to) | 67-75 |
| Number of pages | 9 |
| Journal | IEEE Micro |
| Volume | 42 |
| Issue number | 6 |
| DOIs | |
| State | Published - 2022 |
Bibliographical note
Publisher Copyright: © 1981-2012 IEEE.
Funding
This work was supported by the Israel Science Foundation under Grant 996/18.
| Funders | Funder number |
|---|---|
| Israel Science Foundation | 996/18 |