This paper presents a deep-learning method for deconvolution in light-sheet microscopy using propagation-invariant beams. Instead of end-to-end training on paired ground-truth data, the method leverages the known physics of the imaging system: a generative adversarial network (GAN) is trained on images synthesized with the known point-spread function (PSF) together with unpaired experimental data. The method achieves a two-fold improvement in image contrast over conventional deconvolution while requiring only a few hundred regions of interest for training. Its effectiveness is demonstrated on various samples, including oocytes, preimplantation embryos, and brain tissue, using both Airy and Bessel beams.
Publisher
Light: Science & Applications
Published On
Jul 26, 2022
Authors
Philip Wijesinghe, Stella Corsetti, Darren J. X. Chow, Shuzo Sakata, Kylie R. Dunning, Kishan Dholakia
Tags
deep learning
deconvolution
light-sheet microscopy
generative adversarial network
image contrast
point-spread function
biological samples