Computational event-driven vision sensors that convert motion into spiking signals

Event-driven in-sensor spiking neural network designed by the researchers. A) Comparison between frame- and event-based vision sensors. B) The in-sensor spiking neural network, designed by combining the advantages of event-driven operation and in-sensor computing. C) The pixel circuit designed to realize programmable event-driven characteristics. D) The relationship between input light intensity and output spikes in the designed pixel. Credit: Zhou et al.

Neuromorphic vision sensors are sensing devices that automatically respond to changes in their environment, such as variations in brightness. These sensors mimic the functioning of the human nervous system, artificially replicating the ability of sensory neurons to respond preferentially to changes in the sensed environment.

Typically, these sensors capture only the dynamic motion in a scene, which is then fed to an external computational unit that analyzes it and tries to recognize it. Such designs, in which the sensors and the computational units that process their data are physically separated, introduce latency into the processing of sensor data and consume more power.

Researchers at Hong Kong Polytechnic University, Huazhong University of Science and Technology and Hong Kong University of Science and Technology recently developed new event-driven vision sensors that capture dynamic motion and convert it into programmable spiking signals. These sensors, introduced in a paper published in Nature Electronics, eliminate the need to transfer data from sensors to computational units, enabling greater energy efficiency and faster analysis of captured dynamic motion.

“Near-sensor and in-sensor computing architectures efficiently decrease data transfer latency and power consumption by directly performing computation tasks near or inside the sensory terminals,” Yang Chai, co-author of the paper, told Tech Xplore. “Our research group is dedicated to the study of emerging customized devices for near-sensor and in-sensor computing. However, we found that existing works focus on conventional frame-based sensors, which generate a lot of redundant data.”

Recent advances in artificial neural networks (ANNs) have opened new opportunities for neuromorphic sensing devices and image recognition systems. As part of their recent study, Chai and his colleagues set out to explore the potential of combining event-based sensors with spiking neural networks (SNNs), ANNs that mimic the firing patterns of biological neurons.
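As a rough illustration of what an SNN computes, the sketch below implements a leaky integrate-and-fire (LIF) neuron, the basic building block of most SNNs. It is a minimal toy model; the threshold and leak values are illustrative and are not taken from the paper.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.95):
    """Minimal leaky integrate-and-fire (LIF) neuron: the membrane
    potential leaks toward zero, integrates incoming signals, and
    emits a spike (then resets) whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # leaky integration
        if v >= threshold:
            spikes.append(1)      # fire
            v = 0.0               # reset
        else:
            spikes.append(0)
    return spikes

# Weak input leaves the neuron silent; a burst of strong input makes it
# fire repeatedly, encoding the signal as a sparse spike train.
signal = [0.1] * 5 + [0.6] * 5 + [0.1] * 5
print(lif_neuron(signal))  # [0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0]
```

Because information is carried only by these sparse spikes, an SNN naturally pairs with an event-based sensor that is itself silent when nothing changes.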

“The combination of event-based sensors and a spiking neural network (SNN) for motion analysis can effectively reduce redundant data and efficiently recognize motion,” Chai said. “Thus, we propose a hardware architecture with two-photodiode pixels that function as both event-based sensors and synapses, which can achieve an in-sensor SNN.”
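To make the idea concrete, here is a minimal sketch of the in-sensor multiply-accumulate that such an architecture performs. The 4×4 array size and the responsivity values are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 4x4 subpixel array: each photoresponsivity acts as a
# programmable synaptic weight stored inside the sensor itself.
responsivity = rng.uniform(-1.0, 1.0, size=(4, 4))

def i_total(light_pattern):
    """Summed photocurrent of the array: the weighted sum (multiply-
    accumulate) of an SNN layer, computed by the photodiodes rather
    than by a separate processor."""
    return float(np.sum(responsivity * light_pattern))

# A sequence of frames (e.g. a hand sweeping across the array) yields a
# current sequence that could drive an output neuron such as the LIF
# neuron sketched above.
frames = [rng.uniform(0.0, 1.0, size=(4, 4)) for _ in range(5)]
print([round(i_total(f), 3) for f in frames])
```

In the device itself this summation happens physically, with the subpixel photocurrents adding up to the output photocurrent I_total, so no raw image data has to leave the sensor.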

In-sensor spiking neural network (SNN) simulation for motion recognition presented in the team’s paper. A) Illustration of the event-driven SNN pixel array and the corresponding circuit diagram of a single pixel and an output neuron. B) The output photocurrent I_total after the pixel sensing process. C) The photoresponsivity distribution of each subpixel array after training. D) The output spikes generated by the output neurons when left-hand waving, right-hand waving and arm rotation are performed sequentially. Credit: Zhou et al.

The new computational event-driven vision sensors developed by Chai and his colleagues are capable of both event-based sensing and computation. The sensors generate programmable spikes in response to local, pixel-level changes in light intensity.

“The event-driven characteristic is achieved by using two branches with opposite photoresponses and different photoresponse times, which generate the event-driven spiking signals,” Chai explained. “The synaptic characteristic is realized by photodiodes with different photoresponsivities, which allow precise modulation of the amplitude of the spiking signals, emulating different synaptic weights in an SNN.”
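The following toy model illustrates why this two-branch arrangement acts as an event detector; the responsivities and response-time constants are invented for the sketch and do not come from the paper. Under steady illumination the opposite branch currents cancel, while a change in brightness momentarily unbalances the fast and slow branches and produces a transient spike.

```python
import numpy as np

# Hypothetical parameters: equal and opposite responsivities, with the
# negative branch responding ten times more slowly than the positive one.
R_FAST, TAU_FAST = 1.0, 0.005    # fast branch, positive photoresponse
R_SLOW, TAU_SLOW = -1.0, 0.050   # slow branch, negative photoresponse

def pixel_output(light, dt=0.001):
    """Sum of two first-order photodiode branches driven by light(t).
    Because R_FAST + R_SLOW = 0, the branch currents cancel under steady
    illumination; only a change in intensity produces a transient output."""
    i_fast = i_slow = 0.0
    out = []
    for p in light:
        i_fast += (R_FAST * p - i_fast) * dt / TAU_FAST  # fast response
        i_slow += (R_SLOW * p - i_slow) * dt / TAU_SLOW  # slow response
        out.append(i_fast + i_slow)
    return np.array(out)

# Brightness steps up at t = 0.1 s and then holds: the output is near
# zero before and long after the step, with a spike at the change.
t = np.arange(0.0, 0.3, 0.001)
light = np.where(t < 0.1, 0.2, 1.0)
out = pixel_output(light)
print(f"peak transient: {out.max():.3f}, final value: {out[-1]:.3f}")
```

Reprogramming the responsivities then scales the amplitude of these spikes, which is how the same pixel doubles as a synapse.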

The researchers evaluated their sensors in a series of initial tests and found that they effectively emulate the process through which neurons in the brain adapt to changes in visual scenes. Notably, the sensors reduce the amount of data collected, while also eliminating the need to transfer that data to an external computational unit.

“Our work proposes a method to sense and process a scene by capturing local pixel-level light intensity changes, thus realizing an in-sensor SNN instead of a conventional ANN,” Chai said. “Such a design combines the advantages of event-based sensors and in-sensor computing, which makes it suitable for real-time dynamic information processing, such as autonomous driving and intelligent robots.”

In the future, the computational event-driven vision sensors developed by Chai and his colleagues could be refined and tested in additional experiments to assess their value for real-world applications. This work could also serve as an inspiration for other research groups, potentially paving the way for new sensing technologies that combine event-based sensors and SNNs.

“In the future, our group will focus on array-level realization and the technology for integrating computational sensor arrays with CMOS circuits to demonstrate a complete in-sensor computing system,” Chai added. “In addition, we will try to develop benchmarks that define the device metric requirements for different applications and evaluate the performance of in-sensor computing systems in a quantitative way.”

More information:
Yue Zhou et al, Computational event-driven vision sensors for in-sensor spiking neural networks, Nature Electronics (2023). DOI: 10.1038/s41928-023-01055-2

© 2023 Science X Network
