Abundant emerging applications of artificial intelligence have fueled the microelectronics industry over the past decade. Deep neural networks, for example, have achieved astonishing state-of-the-art results and tremendous success in a wide range of applications, thanks in part to the availability of massive labeled datasets and powerful GPUs. Nevertheless, they fail to compete with biological systems in cognitive tasks, and their energy efficiency must improve by several orders of magnitude to match that of the human brain. At the same time, Moore's law scaling is coming to an end, and the predominant von Neumann architecture is ill-suited to the data-intensive nature of modern deep neural networks.
Spiking neural networks are the most realistic representation of the nervous system. Their inherently local training rules enable low-cost online learning, and their efficient information encoding facilitates low-power computing. Conventional CMOS technologies, however, cannot cheaply realize a key piece of synapse functionality, namely spike-timing-dependent plasticity (STDP), an indispensable feature of event-driven learning. Nano-scalable memristor technology is an excellent candidate for implementing artificial synapses due to its superb scalability and inherent STDP behavior. Notable previous experimental works focused on discrete 1T1R (RRAM) implementations or CMOS-integrated 2T1R (PCM) devices, and neither is fully closed-loop, i.e., post-synaptic spikes are not applied directly to the synaptic array.
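For readers unfamiliar with STDP, the commonly used pair-based form of the rule can be sketched in a few lines of Python. The learning rates and time constant below are hypothetical illustrations; in the hardware described here, the rule emerges from the overlap of engineered pre- and post-synaptic pulses across a memristor rather than from an explicit formula:

```python
import math

# Illustrative pair-based STDP rule (not the exact hardware scheme):
# a synapse potentiates when the pre-spike precedes the post-spike
# (dt = t_post - t_pre > 0) and depresses otherwise.
# A_PLUS, A_MINUS, and TAU are hypothetical example values.
A_PLUS, A_MINUS, TAU = 0.05, 0.05, 20.0  # amplitudes and time constant (ms)

def stdp_dw(dt_ms):
    """Weight change for a pre/post spike pair separated by dt = t_post - t_pre."""
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU)    # causal pair: potentiation
    else:
        return -A_MINUS * math.exp(dt_ms / TAU)   # anti-causal pair: depression
```

With equal amplitudes on both sides, as here, the potentiation and depression windows are balanced, which is the property the pulse shapes in this work were designed to achieve.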
In our paper, published in Nature Communications, we experimentally demonstrate, for the first time, a coincidence detection task performed by a spiking neural network implemented with passively integrated metal-oxide memristors, arguably the most promising technology for this purpose. The network consists of 20 memristive synapses sharing a bottom electrode of a 20x20 crossbar array, connected to an analog leaky-integrate-and-fire neuron. This completely unsupervised network successfully detects coincidences among 5 of its 20 pre-synaptic inputs by increasing the corresponding synaptic efficacies via the STDP rule.
The STDP learning mechanism is used to train a single neuron to spike when it receives correlated, i.e., simultaneous, pre-synaptic spikes. The pre-synaptic and post-synaptic pulse shapes were carefully designed to ensure balanced STDP windows. The results of all our experiments clearly show progressive cumulative potentiation of the synapses corresponding to the synchronized inputs. We demonstrated that a neuron can learn a new pattern while forgetting a previously learnt one. More importantly, this happens even though the noise inputs spike much more frequently than the synchronous ones, which makes the considered task more challenging than those of previous studies. Furthermore, we observed that the network is robust against emulated temporal jitter in the timing of the synchronous spikes. We successfully repeated the experiment multiple times using the same column, and using different columns in different crossbars.
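The learning dynamics described above can be illustrated with a small software sketch, assuming a discrete-time leaky-integrate-and-fire neuron and a simplified window-based STDP rule. All parameter values (threshold, leak, learning rates, input spike rates) are hypothetical and chosen for illustration, not those of the hardware experiment:

```python
import random

random.seed(1)

N, N_SYNC = 20, 5            # 20 synapses; the first 5 receive synchronized spikes
T_STEPS = 5000               # simulation steps (1 step = 1 ms, an assumption)
V_TH, LEAK = 1.2, 0.5        # hypothetical firing threshold and leak factor
A_PLUS, A_MINUS = 0.01, 0.012   # slight depression bias for uncorrelated inputs
WIN = 5                      # STDP coincidence window (steps)
W_MIN, W_MAX = 0.05, 1.0     # bounds on synaptic efficacy

w = [0.3] * N                # initial synaptic efficacies
last_pre = [-10**9] * N      # time of the last pre-spike on each input
v = 0.0                      # membrane potential
post_spikes = 0

for t in range(T_STEPS):
    # The 5 correlated inputs fire together every 50 steps; the other 15 are
    # independent noise firing at a higher overall rate.
    pre = [(t % 50 == 0) if i < N_SYNC else (random.random() < 0.03)
           for i in range(N)]
    for i in range(N):
        if pre[i]:
            last_pre[i] = t

    # Leaky integrate-and-fire dynamics: leak, then integrate weighted inputs.
    v = v * LEAK + sum(w[i] for i in range(N) if pre[i])
    if v >= V_TH:
        v = 0.0
        post_spikes += 1
        # Simplified STDP: potentiate synapses whose pre-spike fell inside the
        # window before this post-spike, depress all others.
        for i in range(N):
            if t - last_pre[i] <= WIN:
                w[i] = min(W_MAX, w[i] + A_PLUS)
            else:
                w[i] = max(W_MIN, w[i] - A_MINUS)

mean_sync = sum(w[:N_SYNC]) / N_SYNC
mean_noise = sum(w[N_SYNC:]) / (N - N_SYNC)
```

Because the synchronized inputs reliably precede the post-spikes they trigger, their weights are consistently potentiated, while the uncorrelated noise synapses drift down, which is the essence of unsupervised coincidence detection via STDP.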
Our findings suggest that, not surprisingly, spatial device-to-device variations, attributed to the filamentary switching mechanism, are the most critical challenge towards building reliable spiking neural networks. The problem could be naturally alleviated by using devices with more uniform switching characteristics (e.g., at smaller technology nodes) and by holistically optimizing the network parameters with respect to the variations. For example, we optimized the pulse shapes to counteract variations and asymmetric switching voltage thresholds. We believe that our experimental results pave the way for realizing nano-scale spiking neural networks with ultra-high connection density.