Abstract
This paper presents a deep learning (DL)-based framework for automated virtual staining, segmentation, and classification in label-free photoacoustic histology (PAH) of human specimens. The framework consists of three components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&E (VHE) staining, (2) a U-Net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. Applied to human liver cancers, the framework achieved promising results: E-CUT produced VHE images highly similar to real H&E images while preserving morphology, and segmentation successfully extracted features such as cell area, cell number, and nuclear distance. StepFF achieved 98.00% classification accuracy, exceeding conventional PAH classification (94.80%), and reached 100% sensitivity in pathologist evaluation. This framework shows potential as a clinical tool for digital pathology.
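The stepwise fusion idea described in the abstract can be illustrated with a minimal sketch: segmentation-derived morphological features (cell area, cell number, nuclear distance) are concatenated with learned deep features before a final classifier scores the fused vector. All function names, feature values, and weights below are hypothetical stand-ins, not the paper's actual StepFF implementation.

```python
import numpy as np

def fuse_features(deep_features, morph_features):
    """Stepwise fusion sketch: append hand-crafted morphological
    features (e.g., cell area, cell count, nuclear distance) to the
    deep feature vector. Names here are illustrative assumptions."""
    return np.concatenate([deep_features, morph_features])

def classify(fused, weights, bias=0.0):
    """Toy linear scorer standing in for the DL classifier:
    returns 1 (e.g., cancerous) if the score is positive, else 0."""
    score = float(fused @ weights) + bias
    return int(score > 0)

# Hypothetical example: 4 deep features + 3 morphological features
deep = np.array([0.2, -0.1, 0.5, 0.3])
morph = np.array([120.0, 35.0, 8.5])  # area, cell count, nuclear distance
fused = fuse_features(deep, morph)
print(fused.shape)  # (7,)
```

In the paper, the fused representation feeds a trained DL classifier rather than fixed weights; the sketch only shows the fusion step's data flow.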
Publisher
Light: Science & Applications
Authors
Chiho Yoon, Eunwoo Park, Sampa Misra, Jin Young Kim, Jin Woo Baik, Kwang Gi Kim, Chan Kwon Jung, Chulhong Kim
Tags
deep learning
virtual staining
segmentation
photoacoustic histology
classification
digital pathology
human specimens