Highly sensitive 2D X-ray absorption spectroscopy via physics informed machine learning

Physics

Z. Li, T. Flynn, et al.

This research by Zeyuan Li, Thomas Flynn, Tongchao Liu, Sizhan Liu, Wah-Keat Lee, Ming Tang, and Mingyuan Ge presents a deep neural network approach to X-ray near-edge absorption structure (XANES) imaging that improves the signal-to-noise ratio of the measured images and resolves the valence states of nickel and cobalt in complex materials.

Abstract
Improving the spatial and spectral resolution of 2D X-ray near-edge absorption structure (XANES) has been a decade-long pursuit to probe local chemical reactions at the nanoscale. However, the poor signal-to-noise ratio in the measured images poses significant challenges for quantitative analysis, especially when the element of interest is present at low concentration. In this work, we developed a post-imaging processing method using a deep neural network to reliably improve the signal-to-noise ratio in XANES images. The proposed model can be trained to adapt to new datasets by incorporating the physical features inherent in the latent space of the XANES images, and is self-supervised to detect new features in the images and achieve self-consistency. Two examples are presented to illustrate the model's robustness in determining the valence states of Ni and Co in LiNixMnyCo1−x−yO2 systems with high confidence.
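The core idea the abstract describes is exploiting the known physical structure of a XANES signal to suppress pixel-wise noise, so that the chemically meaningful edge position (which tracks valence state) can be extracted reliably. The paper does this with a physics-informed neural network; as a minimal illustrative sketch of the same principle, the hypothetical example below instead fits a single noisy per-pixel spectrum to a smooth arctangent edge model and recovers the edge energy. All numerical values (energy range, noise level, edge position) are assumed for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def edge_model(E, amp, E0, width, bkg):
    """Arctangent step commonly used to model an X-ray absorption edge."""
    return bkg + amp * (0.5 + np.arctan((E - E0) / width) / np.pi)

rng = np.random.default_rng(0)
E = np.linspace(8330.0, 8360.0, 120)   # hypothetical energies near the Ni K-edge (eV)
true = edge_model(E, amp=1.0, E0=8345.0, width=1.5, bkg=0.1)
noisy = true + rng.normal(0.0, 0.08, E.size)  # poor SNR, as at low-concentration pixels

# Fitting to the physical edge model acts as the denoiser here;
# the fitted E0 is the quantity that maps to valence state.
popt, _ = curve_fit(edge_model, E, noisy, p0=[1.0, 8344.0, 1.0, 0.0])
denoised = edge_model(E, *popt)

print(f"fitted edge position: {popt[1]:.2f} eV")
print(f"rms error before/after: {np.std(noisy - true):.3f} / {np.std(denoised - true):.3f}")
```

The design trade-off mirrors the one in the paper: a hand-specified edge model is robust but cannot discover unexpected spectral features, which is what motivates the self-supervised neural network that adapts its latent representation to each new dataset.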
Publisher
npj Computational Materials
Published On
Jun 18, 2024
Authors
Zeyuan Li, Thomas Flynn, Tongchao Liu, Sizhan Liu, Wah-Keat Lee, Ming Tang, Mingyuan Ge
Tags
XANES
deep neural network
signal-to-noise ratio
valence states
materials science
self-supervised learning