Data-intensive computing, such as the training of neural networks, is energy intensive. This paper explores memcapacitive devices that exploit charge shielding as a highly energy-efficient approach to parallel multiply-accumulate operations. A crossbar array of 156 microscale memcapacitor devices was fabricated and used to train a neural network to distinguish the letters 'M', 'P', and 'I'. Modeling suggests an energy efficiency of 29,600 tera-operations per second per watt at high precision (6-8 bits), with potential downscaling to 45 nm.
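The parallel multiply-accumulate idea behind a capacitive crossbar can be sketched as follows. This is a minimal NumPy illustration under assumed ideal conditions, not the authors' device model: each crosspoint stores a capacitance (the weight), and driving the rows with input voltages accumulates charge on each column line, computing a full matrix-vector product in one step. The array shape and values are hypothetical.

```python
import numpy as np

# Hypothetical sketch of a memcapacitive crossbar MAC operation.
# Device at row i, column j stores capacitance c[i, j] (the weight).
# Applying voltages v to the rows deposits charge
#   q[j] = sum_i c[i, j] * v[i]
# on column j, i.e. one multiply-accumulate per device, all in parallel.
rng = np.random.default_rng(0)
capacitance = rng.uniform(0.0, 1.0, size=(156, 3))  # 156 devices feeding 3 outputs
voltages = rng.uniform(-1.0, 1.0, size=156)         # input vector
charges = voltages @ capacitance                    # Q = V^T C, the parallel MAC
```

Each of the three column charges corresponds to one pre-activation of the small letter-classification network; in hardware the summation is performed by the physics of charge accumulation rather than by sequential arithmetic.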
Publisher
Nature Electronics
Published On
Oct 11, 2021
Authors
Kai-Uwe Demasius, Aron Kirschen, Stuart Parkin
Tags
memcapacitive devices
energy efficiency
neural networks
parallel computing
multiply-accumulate operations
charge shielding
microscale fabrication