Resisting the bottlenecks in neural networks

Resistive Random Access Memory, or ReRAM, is a promising technology that exhibits low energy consumption and high speeds and can be built sustainably with fewer toxic materials than most semiconductor technologies. AI-generated image using Midjourney.

An alternative way to build neural accelerators could enable more efficient computer memory systems and lead to improvements in the rapidly expanding field of machine learning.

Deep neural networks (DNNs) are powerful tools for solving complex problems in fields including biometric recognition, robotics and even estimating the spread of disease. However, in order to learn from large datasets and make decisions in real time, DNNs require huge amounts of power and memory. Most modern computers face an architectural bottleneck: their processing and memory units are separate, so data must be shuttled back and forth between them.

“A new paradigm of computing is needed where the operands do not shuttle needlessly, and all necessary computing is performed in the memory,” says postdoc Kamilya Smagulova, who works at KAUST with Ahmed Eltawil. “This kind of platform, referred to as compute-in-memory, is suitable for efficient DNN acceleration. Just like biological neurons, each cell serves both as a memory and processing unit.”

Smagulova and co-workers have reviewed the current state of the art of neural accelerators built with Resistive Random Access Memory, or ReRAM, a promising technology for building compute-in-memory architectures[1]. Unlike conventional RAM, ReRAM stores information as changes in resistance rather than as electrical charge. It exhibits low energy consumption and high speeds and can be built sustainably with fewer toxic materials than most semiconductor technologies.
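
The basic idea can be illustrated with a short numerical sketch (not taken from the paper): in a ReRAM crossbar, each cell stores a trained weight as a conductance, input activations are applied as row voltages, and the currents flowing into each column sum those contributions automatically, so a matrix-vector multiplication happens inside the memory array itself. The array size, voltage scale and conductance range below are purely illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of analog in-memory matrix-vector multiplication in a
# ReRAM crossbar (values and dimensions are assumptions, not from the paper).
# Each cell holds a weight as a conductance G = 1/R; Ohm's law gives the
# per-cell current V_i * G_ij, and Kirchhoff's current law sums the currents
# along each column, yielding I_j = sum_i V_i * G_ij in a single read step.

rng = np.random.default_rng(0)

# Trained weights mapped to cell conductances (siemens), here 1 uS to 100 uS.
G = rng.uniform(1e-6, 100e-6, size=(4, 3))   # 4 input rows x 3 output columns

# Input activations encoded as read voltages applied to the rows (volts).
V = np.array([0.2, 0.0, 0.1, 0.3])

# Column output currents: the matrix-vector product computed in the array.
I = V @ G

print("Column output currents (A):", I)
```

Because every multiply-accumulate takes place where the weight is stored, no weight data has to cross a memory bus during the computation, which is the source of the efficiency gains the researchers describe.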

“ReRAM-based neural accelerators could reshape the future of computing, enabling real-time processing of data,” says co-author Mohammed Fouda at Rain Neuromorphics, USA. “There are now many commercially available ReRAM-based devices; however, no standards for their fabrication have been widely adopted.”

Smagulova, Fouda and co-workers discussed the advantages of ReRAM accelerators over traditional accelerators and identified several key priorities that ReRAM developers must address to ensure the widespread adoption of the technology. These include setting new benchmarks for assessing the performance of ReRAM-based DNN accelerators, fixing reliability issues such as thermal sensitivity and effectively combining multiple ReRAM accelerators so that they can tackle large DNN models.

With the help of KAUST’s computing facilities, the team are testing DNNs on ReRAM accelerators.

“We have chosen autonomous driving systems as an exemplar application to showcase the possibilities of a ReRAM system,” says Smagulova. “To do so, we will be using Ibex — a high-performance computing system offered at KAUST.” 

“The underlying Achilles heel of machine learning is the availability of computing platforms that can support the staggering growth the field is experiencing,” says Eltawil. “A radical departure from traditional computing architectures is needed, and ReRAM will play an important role.”

 
REFERENCE

1. Smagulova, K., Fouda, M.E., Kurdahi, F., Salama, K.N. & Eltawil, A. Resistive neural hardware accelerators. Proceedings of the IEEE 111, 500-527 (2023).