An artificial sensory neuron with visual-haptic fusion

Engineering and Technology


C. Wan, P. Cai, et al.

Explore the work of Changjin Wan, Pingqiang Cai, Xintong Guo, Ming Wang, Naoji Matsuhisa, Le Yang, Zhisheng Lv, Yifei Luo, Xian Jun Loh, and Xiaodong Chen as they unveil a bimodal artificial sensory neuron (BASE) that achieves visual-haptic fusion. This innovative design integrates optic and pressure data to control myotubes and a robotic hand, enhancing pattern recognition in cyborg and neuromorphic systems.

Introduction
Biological systems significantly outperform electronic counterparts in real-world interactions due to advanced sensorimotor skills. While bio-inspired systems using silicon-based circuits and software have achieved complex sensorimotor functions, their efficiency decreases with larger datasets because of centralized processing. Biological systems instead use a distributed computing paradigm inherent in the adaptive, plastic, and event-driven network of sensory neurons, offering superior fault tolerance and power efficiency. Emulating biological sensory neuron processes is therefore crucial for achieving biological perceptual capabilities.

A key advantage of biological sensory systems lies in their analysis of multiple cues for reliable reactions. Visual and haptic cues are integrated in the inferior parietal cortex, providing supramodal spatial ability, and combining these cues enhances the accuracy of object perception. While artificial sensory neurons with either haptic or visual modalities exist, supramodal perceptual capabilities remain absent. Synaptic transistors, which enable parallel gating of active channels, offer a platform for mediating multimodal sensory data.

This research introduces a bimodal artificial sensory neuron (BASE) based on ionic/electronic hybrid neuromorphic electronics to achieve visual-haptic fusion. The BASE integrates visual (photodetector) and haptic (pressure sensor) inputs via a hydrogel-based ionic cable and a synaptic transistor, producing an output analogous to biological excitatory postsynaptic currents (EPSCs). The BASE's ability to control a biohybrid neuromuscular junction and a robotic hand, mimicking 'perception for action', is demonstrated, and a BASE matrix's enhanced multi-transparency pattern recognition capability further validates the advantages of multimodal sensory fusion.
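As a rough intuition for this signal path, the sketch below models the two sensor readings as currents that are summed at the synaptic transistor and then decay like an EPSC. The function name, the additive fusion rule, and the decay constant are all illustrative assumptions, not values or mechanisms taken from the paper.

```python
# Toy model of the BASE signal path: a visual reading and a haptic
# reading are fused additively at the synaptic transistor, and the
# resulting channel current decays over time like an EPSC.
# All names and constants here are illustrative assumptions.

def epsc_trace(visual, haptic, tau=5.0, steps=20):
    """Return a decaying EPSC-like trace driven by the fused input."""
    current = visual + haptic          # additive fusion at the gate
    trace = []
    for _ in range(steps):
        trace.append(current)
        current *= 1.0 - 1.0 / tau     # geometric (exponential-style) decay
    return trace

trace = epsc_trace(visual=1.0, haptic=0.8)
```

The bimodal trace starts at the sum of both inputs, so either modality alone would produce a weaker transient.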
Literature Review
Existing research has demonstrated artificial sensory neurons or synapses with either haptic or visual modalities, applied to pattern recognition and muscular contraction control. However, the integration of multiple sensory modalities to achieve enhanced reliability and accuracy in artificial systems has remained a challenge. Previous work on artificial sensory systems often focuses on single modalities and lacks a robust framework for integrating multimodal sensory inputs at a neuronal level. This study builds upon previous work on artificial sensory neurons and synaptic transistors, leveraging their capabilities for advanced sensory processing. The authors cite previous work on artificial afferent nerves, NeuTap systems for tactile sensing, and optoelectronic sensorimotor artificial synapses, highlighting their individual contributions while emphasizing the novelty of integrating visual and haptic information in a single, biomimetic device.
Methodology
The BASE unit comprises four key components: a resistive pressure sensor, a perovskite-based photodetector, a hydrogel-based ionic cable, and a synaptic transistor. The photodetector and pressure sensor mimic the retina and skin, converting stimuli into electrical signals. These signals are transmitted through the ionic cable to the synaptic transistor, where they are integrated into a transient channel current (EPSC).

Fabrication involved several steps:
- Photodetector: ITO/PET substrates, a Zn2SnO4/PEA2MA2Pb3I10 perovskite layer, and PTAA/Au layers.
- Pyramidal-structured PDMS films: created by photolithography and wet etching, with carbon nanotubes (CNTs) deposited via an ultrasonic vibrating method.
- Synaptic transistor: patterned ITO electrodes, a PVA gate dielectric, and a PVA hydrogel; CNT patterning used a printing-filtration-transferring process. The PVA hydrogel was made by dissolving PVA powder in LiCl solution followed by freeze-thaw cycles.
- Biohybrid neuromuscular junction (BNJ): interdigital Au electrodes with a polypyrrole (PPy) coating and cultured C2C12 myotubes.

For motion-control experiments, visual and haptic stimuli were applied with varying time intervals (ΔT), and the resulting EPSC amplitudes were measured. For pattern recognition, a BASE matrix served as the feature-extraction layer of a perceptron neural network. Multi-transparency alphabetic patterns were used for the recognition task, with variations in the patterns and added random noise pixels to test robustness. Recognition rates were evaluated using data from unimodal (visual or haptic) and bimodal (visual-haptic) matrices with varying kernel sizes.
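The recognition pipeline above can be sketched in a few lines: a stand-in for the BASE matrix sums the visual and haptic maps and pools them over kernel-sized blocks (reducing data size through spatial integration), and the pooled features feed a one-layer perceptron. The 4x4 patterns, the pooling rule, and the training parameters are illustrative assumptions, not the paper's dataset or circuit model.

```python
import numpy as np

def base_fusion(visual, haptic, kernel=2):
    """Stand-in for the BASE matrix: sum the visual and haptic maps,
    then average over kernel x kernel blocks (spatial integration)."""
    fused = visual + haptic
    h, w = fused.shape
    pooled = fused.reshape(h // kernel, kernel, w // kernel, kernel).mean(axis=(1, 3))
    return pooled.ravel()

def perceptron_train(X, y, epochs=20, lr=0.1):
    """Classic perceptron learning rule on the fused features."""
    w = np.zeros(X.shape[1] + 1)                  # bias + weights
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 if w[0] + xi @ w[1:] > 0 else 0.0
            w[0] += lr * (yi - pred)
            w[1:] += lr * (yi - pred) * xi
    return w

def predict(w, x):
    return 1 if w[0] + x @ w[1:] > 0 else 0

# Toy 4x4 binary "letters" standing in for the alphabetic patterns.
L = np.array([[1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1]], float)
T = np.array([[1, 1, 1, 1], [0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 0, 0]], float)

# Haptic maps: the same shapes at half intensity (a transparency proxy).
X = np.array([base_fusion(L, 0.5 * L), base_fusion(T, 0.5 * T)])
y = np.array([0, 1])

w = perceptron_train(X, y)
preds = [predict(w, x) for x in X]
```

Pooling halves each spatial dimension, so the perceptron sees 4 features instead of 16, mirroring the point that fusion tolerates a reduced data size.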
Key Findings
The BASE demonstrated successful visual-haptic fusion. The sheet resistance of the CNT electrodes was optimized, and their low interfacial impedance with the hydrogel ensured efficient electronic-ionic current transduction. Both the visual and haptic channels responded to varying stimulus intensity and duration. The biohybrid neuromuscular junction (BNJ) successfully transmitted signals from the BASE, causing skeletal-myotube contraction. Motion-control experiments showed that synchronized visual and haptic stimuli (ΔT ≤ 1 s) generated stronger EPSCs, enabling myotube activation. In a robotic-hand experiment, the BASE's multi-dimensional spatial information allowed it to outperform unimodal systems in catching a tennis ball. In pattern-recognition simulations using multi-transparency alphabetic patterns, the BASE matrix achieved higher recognition rates than unimodal matrices, even with the reduced data size resulting from spatial integration. Finally, the synaptic transistor was shown to integrate stimuli from multiple sources with distance-dependent weighting, a convolution-like operation: its response to multiple inputs equals the integral of the product of the input intensities and their distance-dependent weights.
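Two of these findings, stronger EPSCs for synchronized stimuli and distance-dependent weighting, can be illustrated with a small numeric sketch. The exponential forms chosen for temporal decay and distance attenuation, and all constants, are illustrative assumptions rather than the measured device behavior.

```python
import math

def epsc_peak(stimuli, tau=1.0, decay=1.0, t_max=5.0, dt=0.01):
    """Peak of the summed EPSC from stimuli given as
    (intensity, distance, onset_time) triples. Each stimulus contributes
    intensity * exp(-decay * distance) * exp(-(t - onset) / tau) once it
    has started; forms and constants are illustrative assumptions."""
    peak, t = 0.0, 0.0
    while t <= t_max:
        total = sum(a * math.exp(-decay * d) * math.exp(-(t - t0) / tau)
                    for a, d, t0 in stimuli if t >= t0)
        peak = max(peak, total)
        t += dt
    return peak

# Coincidence detection: synchronized vs. delayed bimodal stimuli.
sync_peak = epsc_peak([(1.0, 0.0, 0.0), (1.0, 0.0, 0.1)])   # small ΔT
late_peak = epsc_peak([(1.0, 0.0, 0.0), (1.0, 0.0, 3.0)])   # large ΔT

# Distance-dependent weighting: a nearer input contributes more.
near_peak = epsc_peak([(1.0, 0.5, 0.0)])
far_peak  = epsc_peak([(1.0, 2.0, 0.0)])
```

With these assumed constants, the synchronized pair peaks well above either single input, while a long delay yields little temporal summation, the same qualitative trend as the ΔT ≤ 1 s threshold reported above.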
Discussion
The results demonstrate the successful development of an artificial sensory neuron capable of fusing visual and haptic information, with enhanced performance in motion-control and pattern-recognition tasks that surpasses previous unimodal artificial sensory neurons. The study showcases a significant advancement in neuromorphic computing and biohybrid systems. The integration of visual and haptic cues at the neuronal level mimics biological sensory processing more closely than previous approaches, opening possibilities for more robust and efficient sensorimotor systems in robotics and other applications. The superior performance of the BASE in pattern recognition suggests potential applications in image processing and other areas requiring robust feature extraction. The bottom-up approach used to construct the biohybrid system makes it particularly well suited for integration with biological systems.
Conclusion
This work successfully demonstrated a bimodal artificial sensory neuron (BASE) that achieves visual-haptic fusion, enabling control of both myotubes and a robotic hand. The BASE demonstrated superior pattern recognition capabilities compared to unimodal systems. This biomimetic approach offers significant advancements in neuromorphic and cyborg technologies, paving the way for more sophisticated and biologically relevant sensorimotor systems. Future research could explore the integration of additional sensory modalities and the development of more complex neural networks based on the BASE.
Limitations
The current study primarily focuses on visual and haptic modalities; the generalizability of the findings to other sensory modalities requires further investigation. The pattern-recognition task uses simplified alphabetic patterns, so performance on more complex patterns and real-world scenarios remains to be evaluated. While the myotube control demonstrates a proof of concept, more extensive studies are needed to fully characterize the BASE's ability to regulate muscle contractions. Finally, the simulation-based pattern recognition uses a relatively simple perceptron; performance with more advanced neural network architectures should be explored.