Even though CNNs achieve great success in performing complex intelligence tasks, this achievement comes at an overwhelming cost. Modern CNN architectures comprise hundreds of stacked convolution layers, performing billions of operations for a single input. To meet the unique needs of AI-based systems, various architectures have been proposed as possible contenders, such as GP-GPUs or application-specific processors like the Google TPU and Intel Movidius. These architectures exploit massive parallelism to address the computational needs; however, the issue of memory access remains largely unaddressed. As CNN complexity increases, the architecture becomes more memory dominant, since the total number of weights, activations, and operations grows proportionally, leading to an increase in data accesses and an associated increase in energy consumption. To address these challenges, we propose an In-Memory CNN Processor (IMCP) for CNN-based AI applications. The IMCP combines processing and storage in the same location, following the principles of associative computing, in which vectorial operations are performed as bitwise operations inside the memory. The architecture inherently lends itself to variable bit-width computing that exploits bit-level sparsity, yielding a level of control over resources not available to traditional processors.
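The idea of exploiting bit-level sparsity can be illustrated with a minimal sketch (not the IMCP design itself): a bit-serial shift-and-add multiplication in which only the set bits of a weight cost an operation pass, so sparser bit patterns require fewer in-memory cycles. The function name and pass-counting are illustrative assumptions, not part of the proposed architecture.

```python
def bit_serial_mul(weight: int, activation: int) -> tuple[int, int]:
    """Multiply via shift-and-add over the set bits of `weight`.

    Returns (product, passes), where `passes` counts the add passes
    actually performed -- one per set bit of the weight. A bit-serial
    in-memory engine would similarly skip zero bits, which is the
    source of the savings from bit-level sparsity.
    """
    product, passes, shift = 0, 0, 0
    while weight:
        if weight & 1:              # only set bits cost an add pass
            product += activation << shift
            passes += 1
        weight >>= 1
        shift += 1
    return product, passes

# A weight of 0b10001000 has only two set bits, so it needs
# 2 add passes rather than one pass per bit of an 8-bit word.
product, passes = bit_serial_mul(0b10001000, 3)
```

In this toy model, a fixed-width datapath would spend the same number of cycles regardless of the operand's bit pattern, whereas a bit-serial scheme scales its work with the number of set bits.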
Professor Khaled Nabil Salama is one of the founders of the Electrical and Computer Engineering (ECE) Program and Principal Investigator of the Sensors Lab. Salama has contributed greatly to the understanding of intelligent sensors, biosensors, and VLSI architectures for bio-imaging and instrumentation development. He also holds a large number of patents on sensor technology and random number generators for cryptography.