Deep learning-based virtual staining, segmentation, and classification in label-free photoacoustic histology of human specimens

Medicine and Health

C. Yoon, E. Park, et al.

Discover a deep learning framework for automated virtual staining, segmentation, and classification in label-free photoacoustic histology of human specimens. Researchers Chiho Yoon, Eunwoo Park, Sampa Misra, Jin Young Kim, Jin Woo Baik, Kwang Gi Kim, Chan Kwon Jung, and Chulhong Kim achieved 98.00% accuracy in classifying human liver cancers, establishing new potential for digital pathology.

Abstract
In pathological diagnostics, histological images highlight the oncological features of excised specimens, but they require laborious and costly staining procedures. Despite recent innovations in label-free microscopy that simplify complex staining procedures, technical limitations and inadequate histological visualization remain problems in clinical settings. Here, we demonstrate an interconnected deep learning (DL)-based framework for performing automated virtual staining, segmentation, and classification in label-free photoacoustic histology (PAH) of human specimens. The framework comprises three components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&E (VHE) staining, (2) a U-Net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. The framework demonstrates promising performance at each step of its application to human liver cancers. In virtual staining, E-CUT preserves the morphological aspects of the cell nucleus and cytoplasm, making VHE images highly similar to real H&E ones. In segmentation, various features (e.g., the cell area, number of cells, and the distance between cell nuclei) are successfully segmented in VHE images. Finally, by using deep feature vectors from PAH, VHE, and segmented images, StepFF achieves a 98.00% classification accuracy, compared to the 94.80% accuracy of conventional PAH classification. In particular, StepFF's classification reached a sensitivity of 100% based on the evaluation of three pathologists, demonstrating its applicability in real clinical settings. This series of DL methods for label-free PAH has great potential as a practical clinical strategy for digital pathology.
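
To make the final classification step more concrete, the Python (PyTorch) sketch below illustrates the general idea of fusing deep feature vectors extracted from the PAH, virtual H&E, and segmentation images into a single classifier. This is not the authors' StepFF implementation: the ResNet-18 backbones, feature dimensions, layer sizes, and all names are illustrative assumptions.

# Hypothetical sketch of multimodal feature fusion for classification.
# Each modality (PAH, VHE, segmentation map) gets its own feature extractor;
# the feature vectors are concatenated and passed to a small classification head.
import torch
import torch.nn as nn
from torchvision import models

def make_backbone():
    """ResNet-18 feature extractor (assumed backbone); outputs a 512-d vector."""
    net = models.resnet18(weights=None)
    net.fc = nn.Identity()  # drop the ImageNet classification head
    return net

class FusionClassifier(nn.Module):
    """Concatenate per-modality deep features, then classify (illustrative only)."""
    def __init__(self, num_classes=2, feat_dim=512):
        super().__init__()
        self.pah_net = make_backbone()  # features from the label-free PAH image
        self.vhe_net = make_backbone()  # features from the virtually stained image
        self.seg_net = make_backbone()  # features from the segmentation map
        self.head = nn.Sequential(
            nn.Linear(3 * feat_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, pah, vhe, seg):
        fused = torch.cat(
            [self.pah_net(pah), self.vhe_net(vhe), self.seg_net(seg)], dim=1
        )
        return self.head(fused)

if __name__ == "__main__":
    model = FusionClassifier(num_classes=2)
    x = torch.randn(1, 3, 224, 224)  # dummy image patch for each modality
    logits = model(x, x, x)
    print(logits.shape)  # torch.Size([1, 2])

The design choice shown here, one backbone per modality with late fusion of feature vectors, mirrors the abstract's description of combining deep features from PAH, VHE, and segmented images; how the actual StepFF method trains and fuses its steps is detailed in the full paper.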
Publisher
Light: Science & Applications
Published On
Authors
Chiho Yoon, Eunwoo Park, Sampa Misra, Jin Young Kim, Jin Woo Baik, Kwang Gi Kim, Chan Kwon Jung, Chulhong Kim
Tags
deep learning
virtual staining
segmentation
photoacoustic histology
classification
digital pathology
human specimens