Neuromorphic Engineering is the discipline that takes inspiration from neurobiology to build novel computing systems. Neuromorphic systems help us better understand the fundamentals of biological computation and offer a new paradigm for computation that goes beyond what the current generation of computers can achieve. Despite the advancements of Very-Large-Scale Integration (VLSI) digital electronic systems, which have grown increasingly powerful with each new technology node, biological systems remain orders of magnitude ahead in computational efficiency. While conventional digital computers target ever higher throughput, achieved by increasing clock speed, biology has evolved to optimize efficiency. This involves both the sensory and the computing systems, which are co-designed to work together as a whole. We strongly believe this approach is key to building systems that improve the effectiveness of information sensing and processing, and we envision it as the path to the next generation of Edge-AI systems.
In this work, we focus on the Object Localization task. Neurobiology offers multiple examples of efficient object localization: perhaps the most impressive is the case of the barn owl. The barn owl senses the Interaural Time Difference (ITD) of sound reaching its two ears to precisely locate prey, even in low light conditions. To perform this task, the owl has evolved to convert acoustic information into trains of spikes tied to the exact timing at which a waveform stimulates each ear. Spikes from the left and right pathways travel from the Nucleus Magnocellularis to the Nucleus Laminaris, where they are processed in a computational map – called the Jeffress Model – formed by geometrically arranged neurons. These neurons are specialized to capture the temporal correlation between spikes and are thus named Coincidence Detectors (CD). Axonal Delay Lines (DLs), propagating input spikes with given delays, connect the different CD neurons forming the Jeffress Model. The result is that the angular position of the prey is first translated into the Interaural Time Difference, and then into a particular location in the computational map (the Jeffress Model), which becomes maximally active.
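To make the Jeffress scheme concrete, here is a minimal sketch of a coincidence-detector array fed by two anti-parallel delay lines. All names and values (number of detectors, unit delay, example spike times) are illustrative assumptions, not parameters taken from the biology or from our paper.

```python
import numpy as np

def jeffress_map(spike_left, spike_right, n_cd=9, unit_delay=50e-6):
    """Toy Jeffress model: two anti-parallel delay lines feeding
    n_cd coincidence-detector (CD) neurons.

    spike_left, spike_right : spike times (s) from the two ears/receivers
    unit_delay              : axonal delay added per CD stage (s)
    Returns the index of the maximally active CD, i.e. the one whose
    delays best compensate the interaural time difference (ITD).
    """
    idx = np.arange(n_cd)
    # The left spike is delayed more toward one end of the array,
    # the right spike toward the other (anti-parallel delay lines).
    arrival_left = spike_left + idx * unit_delay
    arrival_right = spike_right + (n_cd - 1 - idx) * unit_delay
    # A CD neuron responds most strongly when its two inputs coincide.
    mismatch = np.abs(arrival_left - arrival_right)
    return int(np.argmin(mismatch))

# Example: a 100 us ITD (sound reaches the right ear first)
winner = jeffress_map(spike_left=1.0e-3, spike_right=0.9e-3)
print("most active coincidence detector:", winner)
```

Each ITD thus selects a different detector along the array, which is exactly how the owl turns a time difference into a place code.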
We propose a system inspired by the barn owl's approach to object localization, coupling a sensory system to a computational map. In particular, this means avoiding some general concepts of conventional signal processing, such as sampling and frame-based computation: bio-inspired computation is spike-based, meaning that relevant information is expressed by events (spikes) and the irrelevant features of the input are discarded. For such an optimized spike-based system, co-designing the sensory part and the computational part is necessary. Technology plays a fundamental role too. This is why we combined two interesting and novel technologies developed at CEA Leti: piezoelectric Micromachined Ultrasound Transducers (pMUT) at the heart of the sensory system, and Resistive Random Access Memories (RRAM) utilized in analog neuromorphic circuits.

pMUTs are scalable ultrasound sensors with low actuation voltage and power consumption and a membrane diameter of 880 µm, making them ideal for embedded applications. We assembled a 12 cm-wide PCB hosting three pMUTs: two receivers at the sides, 10 cm apart, and an emitter in the middle. The emitter produces an ultrasound wave-front which bounces off a target object and is sensed by the receivers. Sound arrives at the two receivers with a small time difference, the ITD, which depends on the angular position of the object. The goal of the sensory system is then to detect this ITD. Sound sensed by the receivers is first band-pass filtered, then half-wave rectified, and finally passed to a Leaky Integrate-and-Fire (LIF) neuron. The LIF neuron is calibrated to emit a single output spike whose timing represents the time of arrival of the reflected wave-front. Such a simple pipeline is in contrast to that of a conventional signal processing system, which would sample the state of the pMUT at a fixed rate - dictated by a clock - and process all the produced frames to extract information.
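A minimal sketch of this receive chain is shown below. It emulates the analog filtering, rectification, and LIF dynamics on a sampled waveform purely for illustration; the pass band, time constant, threshold, and function names are assumptions, not the calibrated values of our circuits. The final helper converts the resulting time difference into an angle using the 10 cm receiver spacing under a far-field approximation.

```python
import numpy as np
from scipy.signal import butter, lfilter

C_SOUND = 343.0   # speed of sound in air (m/s)
D_RX    = 0.10    # receiver spacing on the PCB described above (m)

def time_of_arrival(signal, fs, band=(80e3, 120e3), tau=50e-6, v_th=0.5):
    """Band-pass filter -> half-wave rectification -> LIF neuron.
    Returns the time (s) of the single output spike, i.e. the detected
    time of arrival of the reflected wave-front. All parameters are
    illustrative placeholders."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    filtered = lfilter(b, a, signal)
    rectified = np.maximum(filtered, 0.0)      # half-wave rectification
    v, dt = 0.0, 1.0 / fs
    for i, x in enumerate(rectified):          # leaky integrate-and-fire
        v += dt * (-v / tau + x)
        if v >= v_th:                          # single spike at threshold
            return i * dt
    return None                                # no echo detected

def angle_from_itd(t_left, t_right, d=D_RX, c=C_SOUND):
    """Convert the inter-receiver time difference into an angle (degrees)."""
    itd = t_left - t_right
    return np.degrees(np.arcsin(np.clip(c * itd / d, -1.0, 1.0)))
```

In the actual system these operations are performed by analog circuits on continuous signals, so no clocked sampling or frame processing is involved; the sampled version above only serves to make the signal flow explicit.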
RRAM devices are non-volatile resistive memories whose internal state can be modified at the atomic level to control their conductance. In this work, we used hafnium dioxide-based devices integrated in 130 nm CMOS technology to fabricate analog neuromorphic circuits. RRAM devices are employed as synaptic weights, as their conductance can be programmed within a certain range, modulating the coupling between input spikes and the response of LIF output neurons. Exploiting this technology, we designed, fabricated and tested a neuromorphic circuit platform featuring 8 RRAM devices, 2 input channels and 2 output neurons. This circuit can be configured to perform different functions, depending on the state of the RRAMs: in particular, it can work as either a Delay Line or a Coincidence Detector. To optimize the behavior of the circuit and to cope with the inherent variability of RRAM devices and analog circuits, we developed a calibration procedure: RRAM devices are repeatedly programmed, updating their conductances, until the target criteria for the circuit are met. Once the Delay Line and Coincidence Detector circuits are calibrated, they are arranged in a grid that forms the computational map, which computes the angular position of the target object.
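The calibration is essentially a program-and-verify loop. The sketch below is a generic, hardware-agnostic version of the idea: read_conductance and send_program_pulse are hypothetical hooks into a test setup, and the tolerance is illustrative. In our actual flow the verify step checks circuit-level target criteria (for example the delay introduced by a Delay Line), not just the raw conductance; conductance is used here only to keep the sketch self-contained.

```python
def calibrate_rram(read_conductance, send_program_pulse,
                   g_target, tolerance=2e-6, max_iters=100):
    """Generic program-and-verify loop for one RRAM synapse.

    read_conductance()            -> measured conductance (S)      [hypothetical hook]
    send_program_pulse(increase)  -> applies a SET (increase=True)
                                     or RESET (increase=False) pulse [hypothetical hook]
    The device is re-programmed until its conductance falls within
    `tolerance` of `g_target`, mirroring the iterative tuning of the
    Delay Line and Coincidence Detector circuits.
    """
    for _ in range(max_iters):
        g = read_conductance()
        error = g_target - g
        if abs(error) <= tolerance:
            return True                      # target criterion met
        send_program_pulse(increase=(error > 0))
    return False                             # device did not converge
```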
Overall, the system cuts computation to the bone: it extracts the time of arrival from the signal sensed by the receiving pMUTs and emits a single spike at the output of the computational map to encode the object's position. The simplicity of the processing pipeline translates into an unprecedented energy efficiency per localization, with the proposed system consuming three orders of magnitude less power than a conventional microprocessor.
For more information, the full paper can be accessed here: