Abstract
Data-intensive computing operations, such as training neural networks, are energy intensive. This paper explores memcapacitive devices exploiting charge shielding as a highly energy-efficient approach for parallel multiply-accumulate operations. A crossbar array of 156 microscale memcapacitor devices was fabricated and used to train a neural network distinguishing the letters 'M', 'P', and 'I'. Modeling suggests an energy efficiency of 29,600 tera-operations per second per watt with high precision (6-8 bits) and potential downscaling to 45 nm.
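The parallel multiply-accumulate operation that a crossbar array performs can be illustrated with a short, idealized sketch. This is not the authors' implementation and does not model the charge-shielding device physics: the 12 x 13 layout (one way to arrange 156 devices), the normalized weights, and the input voltages are illustrative assumptions. The energy-per-operation figure is simply the reciprocal of the reported 29,600 tera-operations per second per watt.

```python
# Idealized crossbar MAC sketch: weights stored as per-device capacitances,
# a vector of input voltages applied to all rows at once, so each output
# column accumulates a charge proportional to the weighted sum of inputs.
import numpy as np

rng = np.random.default_rng(0)

n_rows, n_cols = 12, 13                      # 12 * 13 = 156 devices (assumed layout)
C = rng.uniform(0.0, 1.0, (n_rows, n_cols))  # normalized memcapacitive weights (assumed)
v_in = rng.uniform(0.0, 1.0, n_rows)         # input voltages on the rows (assumed)

# One parallel MAC step: accumulated charge per column, Q_j = sum_i C[i, j] * V[i]
q_out = v_in @ C
print("column outputs:", q_out)

# Implied energy per operation from the reported efficiency figure:
# 1 W / (29,600e12 op/s) ~ 3.4e-17 J, i.e. roughly 34 attojoules per operation
energy_per_op = 1.0 / 29_600e12
print(f"energy per op: {energy_per_op:.2e} J")
```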
Publisher
Nature Electronics
Published On
Oct 11, 2021
Authors
Kai-Uwe Demasius, Aron Kirschen, Stuart Parkin
Tags
memcapacitive devices
energy efficiency
neural networks
parallel computing
multiply-accumulate operations
charge shielding
microscale fabrication