Overview

Abstract

Neuromorphic computing has emerged as a promising computing paradigm that emulates how the human brain processes information. The underlying spiking neural networks (SNNs) are well known for offering higher energy efficiency than artificial neural networks (ANNs). Neuromorphic systems enable highly parallel computation and reduce memory bandwidth limitations, making hardware performance scalable and sustainable given the ever-increasing complexity of artificial intelligence (AI). Inefficiency in the design of a neuromorphic system generally originates from redundant parameters, unoptimized models, a lack of computing parallelism, and inefficient training algorithms. This dissertation addresses these problems and proposes effective solutions.

Over-parameterization and redundant computation are common problems in AI models, causing substantial energy waste. As the first step of my research, I introduce various strategies for pruning neurons and weights during training in an unsupervised SNN by exploiting neural dynamics and firing activity. Furthermore, an efficient computational model and hardware implementation strategy are essential for achieving high efficiency. In the second step, I adopt a software–hardware codesign approach that analyzes computational methods to guide hardware implementation for different applications. The network model is optimized at the software level through a biological hyperparameter optimization strategy, resulting in a hardware-friendly network configuration. Moreover, an efficient on-chip training algorithm is essential for low-energy processing. In the third step, I focus on the design of local-training-enabled neuromorphic systems, introducing a spatially local backpropagation algorithm. The proposed digital architecture exploits spike sparsity, computing parallelism, and parallel training. The spatially local training mechanism is then extended into the temporal dimension using a backpropagation-through-time-based training algorithm. Local training mechanisms in both dimensions work synergistically to improve algorithmic performance.
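To make the first step more concrete, below is a minimal Python sketch of firing-activity-based pruning in a toy leaky integrate-and-fire (LIF) layer. All names and numbers here (the leak factor, threshold, observation window, and pruning fractions) are illustrative assumptions, not the parameters or the exact algorithm from the dissertation.

```python
import numpy as np

# Illustrative sketch only: prune neurons with low firing activity and
# weights with negligible magnitude after an observation window.
rng = np.random.default_rng(0)

n_in, n_out = 784, 100
weights = rng.normal(0.0, 0.05, size=(n_in, n_out))
v = np.zeros(n_out)               # membrane potentials
v_thresh, v_decay = 1.0, 0.9      # assumed threshold and leak factor
firing_counts = np.zeros(n_out)   # spikes accumulated per neuron

def lif_step(in_spikes):
    """One step of a leaky integrate-and-fire layer with spike counting."""
    global v
    v = v_decay * v + in_spikes @ weights
    out_spikes = (v >= v_thresh).astype(float)
    v[out_spikes == 1] = 0.0      # reset neurons that fired
    firing_counts[:] += out_spikes
    return out_spikes

# Drive the layer with random input spikes for an observation window.
for _ in range(500):
    lif_step((rng.random(n_in) < 0.05).astype(float))

# Prune neurons whose activity stays below 5% of the most active neuron,
# and weights below 10% of the largest magnitude (both fractions assumed).
neuron_mask = firing_counts >= 0.05 * firing_counts.max()
weight_mask = np.abs(weights) >= 0.1 * np.abs(weights).max()
weights *= neuron_mask[None, :] * weight_mask

print(f"kept {neuron_mask.sum()}/{n_out} neurons, "
      f"{weight_mask.mean():.1%} of weights retained")
```

In the same spirit, the third step's spatially local training can be sketched as each layer updating its weights from a layer-local error signal rather than a global backward pass. The fixed random readout matrix and boxcar surrogate gradient below are assumptions in the style of local-loss and feedback-alignment rules, not the dissertation's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(0, 0.1, (784, 256))   # hidden layer weights
W2 = rng.normal(0, 0.1, (256, 10))    # output layer weights
B1 = rng.normal(0, 0.1, (256, 10))    # fixed random local readout (assumed)
lr = 1e-3

def spike(x, thresh=1.0):
    return (x >= thresh).astype(float)

def surrogate(x, thresh=1.0, width=0.5):
    # Boxcar surrogate derivative of the spike nonlinearity.
    return (np.abs(x - thresh) < width).astype(float)

def local_step(x, target_onehot):
    global W1, W2
    a1 = x @ W1
    s1 = spike(a1)
    a2 = s1 @ W2
    s2 = spike(a2)
    # Each layer compares its own local readout to the target, so no
    # error term crosses layer boundaries (spatial locality).
    e2 = (s2 - target_onehot) * surrogate(a2)
    e1 = ((s1 @ B1 - target_onehot) @ B1.T) * surrogate(a1)
    W2 -= lr * np.outer(s1, e2)
    W1 -= lr * np.outer(x, e1)
```

Because each layer's update depends only on locally available signals, the layer updates can run in parallel, which is what makes this family of rules attractive for on-chip training hardware.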

Brief Biography

Wenzhe Guo is currently a Ph.D. candidate in the Sensors Lab of the Electrical and Computer Engineering Department at KAUST, advised by Professor Khaled Salama and Professor Ahmed Eltawil. He received his Bachelor's degree from the University of Electronic Science and Technology of China (UESTC) in 2017 and his Master's degree from the Electrical Engineering Department at KAUST in 2018. His research interests lie in the design and implementation of brain-inspired computational algorithms and the exploration of neuromorphic computing systems for real-time applications.

Presenters

Wenzhe Guo, Ph.D. Candidate, Electrical and Computer Engineering, KAUST