An interactive ImageJ plugin for semi-automated image denoising in electron microscopy

Computer Science


J. Roels, F. Vernaillen, et al.

DenoisEM, an ImageJ plugin for GPU-accelerated denoising developed by Joris Roels and colleagues, substantially improves the speed and quality of 3D electron microscopy workflows. It enables up to a fourfold increase in acquisition speed while preserving visualization and segmentation quality.
Introduction

High-resolution 3D electron microscopy (EM), including SBF-SEM and FIB-SEM, enables nanoscale visualization but generates very large datasets and often requires long acquisition times. Reducing dwell time speeds up acquisition but increases noise, hampering visualization and downstream analysis (e.g., segmentation). Although many advanced denoising methods exist, they are often inaccessible in practice due to complex parameter tuning, low-level implementations, and high computational cost. The study addresses the need for an interactive, expert-in-the-loop denoising workflow that is fast, user-friendly, and scalable to large 3D EM datasets. The authors propose DenoisEM, an ImageJ plugin with a GPU-accelerated backend enabling low-latency parameter tuning and batch processing, aiming to improve EM data quality, accelerate imaging, and enhance subsequent analysis.

Literature Review

State-of-the-art denoising methods in computer vision include multiresolution shrinkage, nonlocal means, Bayesian estimation, and convolutional neural networks. Many have been applied to EM (e.g., wavelet-based, bilateral, anisotropic diffusion, nonlocal means, BLS-GSM), but interactive frameworks often suffer from difficult parameter interpretation and high computational cost, limiting practical adoption for large 3D datasets. Existing tools (e.g., DeconvolutionJ, PureDenoise, DeconvolutionLab2) provide components of restoration but lack a unified, extensible, and low-latency interface that supports expert-guided parameter selection and rapid scaling. This work builds upon and integrates these advances into an interactive, GPU-accelerated platform.

Methodology

Tool and workflow: DenoisEM is an ImageJ plugin providing an interactive, human-in-the-loop denoising workflow: (1) data loading and backend initialization; (2) ROI selection; (3) automatic noise estimation; (4) algorithm selection with interactive parameter tuning; and (5) batch processing and export, with metadata (algorithm and parameters) stored in the TIFF file.

User interface: The original and denoised ROI are displayed side by side with near real-time updates. Parameter sliders include tooltips, and parameter values are cached to facilitate switching between algorithms. An optional panel provides sharpness estimation.

Computational backend: Implemented in Quasar, a high-level framework for GPU/CPU parallel computing. A Java–Quasar bridge via JNI connects the ImageJ UI to the Quasar algorithms, enabling low-latency processing. Source code for the plugin, bridge, and algorithms is open source.

Algorithms implemented: Eight denoising/deconvolution methods: Gaussian filtering; wavelet thresholding; anisotropic diffusion (Perona–Malik with c(s) nonlinearity); bilateral filtering; Tikhonov denoising/deconvolution; total variation denoising; Bayesian least-squares Gaussian scale mixtures (BLS-GSM); and nonlocal means (NLM) denoising/deconvolution (with an optional PSF H for deconvolution). Algorithms operate slice by slice (2D) on volumes for practical memory/latency reasons; 3D versions of selected methods (Gaussian, bilateral, Tikhonov) were tested but offered modest gains at high compute/memory overhead.

Statistical modeling and estimators:

  • Observation model: y = Hx + n with additive, zero-mean, homoscedastic noise.
  • Noise estimation: Median absolute deviation (MAD): MAD = median(|y − median(y)|); for Gaussian noise, this is conventionally scaled by 1/0.6745 to obtain a consistent estimate of σ.
  • Blur estimation: A normalized blur metric is computed from differences between the horizontal/vertical derivative responses of the original image and a smoothed version of it.
  • Parameter estimation: For each algorithm, optimal parameters are modeled as polynomial functions of the estimated noise level σ. Noise-free EM benchmarks (K = 100) are synthetically corrupted at multiple noise levels; for each σ, the optimal parameters θ minimize the reconstruction error, and θ(σ) is then fit via linear/quadratic least squares to provide initial parameter suggestions.

Data and experiments: Sample preparation and acquisition protocols are provided for SBF-SEM (Arabidopsis root tips, mouse heart tissue) and FIB-SEM (murine heart). SBF-SEM examples: Arabidopsis denoised with Tikhonov deconvolution (λ = 1.5, σ = 0.31, N = 86); heart tissue with anisotropic diffusion (η = 0.07, N = 6, diffusion factor = 0.18). FIB-SEM example: murine heart denoised with NLM (h = 0.23, B = 6, W = 9). The CREMI dataset was used for automated membrane segmentation with an ilastik random forest classifier; noise levels σ = 0.01–0.2 were simulated, and NLM parameters were tuned per noise level to preserve membranes.

Performance benchmarking: Quasar implementations were compared with MATLAB/ImageJ equivalents for bilateral filtering, anisotropic diffusion, BLS-GSM, and NLM over input sizes from 256^2 to 4096^2 pixels, targeting a latency of ~100 ms for interactive tuning.

Throughput analysis: In an SBF-SEM dwell-time experiment, images were acquired at 1, 2, and 4 µs dwell times; 1 µs acquisition plus denoising was compared qualitatively and quantitatively to longer dwell times. A time budget was computed for a 100 × 100 × 500 µm^3 block at 10-nm pixels and 100-nm slicing (500 images, 499 slices), including slice time (18 s) and per-image acquisition times at different dwell settings. Processing costs were measured on a laptop (Tikhonov deconvolution on a Quadro P2000 took slightly over 5 min for the dataset).

Reproducibility: Denoising/segmentation was repeated ≥10 times on different ROIs with similar results; counting was performed three times by each of two experts; timing was repeated 20 times. Data, code, and parameter-estimation scripts are publicly available.
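The noise estimation and parameter suggestion steps described above can be sketched in a few lines of Python. This is an illustrative sketch, not the plugin's actual implementation: the 0.6745 consistency constant is the standard Gaussian-noise convention, and the (σ, θ) calibration pairs and quadratic fit below are hypothetical stand-ins for the benchmark-derived fits.

```python
import numpy as np

def estimate_noise_mad(y):
    """Estimate the noise standard deviation of image y via the median
    absolute deviation (MAD); the 0.6745 factor makes the estimate
    consistent for Gaussian noise."""
    mad = np.median(np.abs(y - np.median(y)))
    return mad / 0.6745

def suggest_parameter(sigma, coeffs):
    """Evaluate a fitted polynomial theta(sigma) = c0 + c1*s + c2*s^2 to
    obtain an initial parameter suggestion for a denoising algorithm."""
    c0, c1, c2 = coeffs
    return np.polyval([c2, c1, c0], sigma)  # polyval wants highest degree first

# Hypothetical calibration: per-sigma optimal parameters found on
# synthetically noised, noise-free benchmarks, then fit by least squares.
sigmas = np.array([0.01, 0.05, 0.10, 0.15, 0.20])
thetas = np.array([0.02, 0.11, 0.24, 0.39, 0.55])   # illustrative optima
c2, c1, c0 = np.polyfit(sigmas, thetas, deg=2)       # quadratic fit of theta(sigma)

# Apply to a new image: estimate sigma, then look up a starting parameter.
rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((256, 256))  # flat image + Gaussian noise
sigma_hat = estimate_noise_mad(noisy)                # close to the true 0.1
theta0 = suggest_parameter(sigma_hat, (c0, c1, c2))
```

In the plugin these suggestions are only a starting point; the expert refines them interactively with the parameter sliders.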
Key Findings
  • Speed and interactivity: Quasar-based implementations are typically 10–100× faster than MATLAB/ImageJ CPU versions; real-time parameter tuning (≤100 ms latency) feasible for megapixel images with bilateral filtering, anisotropic diffusion, and NLM; BLS-GSM reduced from seconds–minutes to ~1 s for 16 MP images.
  • Imaging throughput: Using 1 µs dwell plus denoising produces image quality comparable to 4 µs dwell. For a dataset of 500 images and 499 slices: 4 µs dwell yields ~58 h (6'40" per image + 18 s per slice); 1 µs dwell yields ~16 h (1'40" per image + 18 s per slice). GPU-accelerated Tikhonov deconvolution required slightly over 5 min on a laptop (Quadro P2000), not a bottleneck. Overall, throughput improved by ~3.5× in this scenario; the abstract reports up to 4× without significant quality loss.
  • Noise reduction and visualization: In Arabidopsis SBF-SEM, MAD-estimated noise standard deviation decreased by nearly two orders of magnitude after Tikhonov deconvolution, with improved recognition of nuclear membrane and ER; axial view quality also improved despite slice-wise processing. In mouse heart SBF-SEM, anisotropic diffusion improved separation of A- and I-bands, confirmed by histogram shifts and thresholded masks.
  • Segmentation and analysis: For FIB-SEM murine heart filaments, denoising (NLM) enabled accurate threshold-based 3D rendering and counting. Automated counting with Analyze Particles (plus watershed) on denoised images matched manual counts, while raw images undercounted by a factor of more than 1.7. The automated pipeline took ~20 s/image versus ≥2.5 min for manual counting.
  • Robust automated segmentation: On CREMI TEM data with simulated noise (σ = 0.01–0.2), ilastik random forest segmentation quality (Dice/Jaccard) degrades with noise but is stabilized and often improved by denoising. With denoising, equal segmentation performance to low-noise data is achievable even with up to 20× acquisition acceleration (σ = 0.01 vs 0.2); performance can improve at 10× acceleration (σ = 0.01 vs 0.1). NLM preprocessing of a 1250 × 1250 × 125 volume took <1 min on a modern GPU.
  • Resolution versus speed: Increasing pixel size (e.g., to 20 nm for 4× speedup) reduces high-frequency content and sharpness versus 1 µs + denoising; Fourier analysis and edge maps show better structural detail retention with denoising-based acceleration.
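The dwell-time budget quoted above can be reproduced with a short back-of-the-envelope script using only figures from the text: 500 images of 10,000 × 10,000 pixels (100 × 100 µm at 10-nm pixels) and 499 slices at 18 s each.

```python
# Acquisition budget for a 100 x 100 x 500 um^3 block imaged at
# 10 nm pixels (10,000 x 10,000 px per image) with 100 nm slicing.
PIXELS_PER_IMAGE = 10_000 * 10_000
N_IMAGES, N_SLICES = 500, 499
SLICE_TIME_S = 18.0

def total_hours(dwell_us):
    """Total acquisition time in hours for a given dwell time (microseconds)."""
    image_time_s = PIXELS_PER_IMAGE * dwell_us * 1e-6  # seconds per image
    total_s = N_IMAGES * image_time_s + N_SLICES * SLICE_TIME_S
    return total_s / 3600.0

h4, h1 = total_hours(4.0), total_hours(1.0)
print(f"4 us dwell: {h4:.1f} h; 1 us dwell: {h1:.1f} h; gain: {h4 / h1:.2f}x")
# ~58 h at 4 us versus ~16.4 h at 1 us: a ~3.5x throughput gain, against
# which the ~5 min of GPU denoising reported in the text is negligible.
```

Note that slice time is fixed regardless of dwell, which is why the overall gain (~3.5×) is slightly below the 4× reduction in per-image time.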
Discussion

The study addresses the challenge of balancing acquisition speed and image quality in 3D EM. By integrating advanced denoising into an interactive, GPU-accelerated workflow, DenoisEM enables experts to quickly optimize parameters tailored to specific samples and tasks (visualization, segmentation, counting). The results demonstrate that denoising can compensate for the increased noise from reduced dwell times, maintaining or improving downstream segmentation quality. This directly supports higher-throughput EM acquisition without sacrificing essential structural information. The performance gains from the Quasar backend ensure low-latency feedback for expert-in-the-loop tuning and scale to large datasets. Moreover, denoising facilitates automation: threshold-based object detection and classical pixel classifiers become more reliable and time-efficient after preprocessing. The framework’s extensibility further ensures compatibility with evolving restoration methods, making it relevant across EM modalities and other grayscale imaging fields.

Conclusion

DenoisEM delivers an interactive, GPU-accelerated ImageJ plugin that consolidates state-of-the-art denoising and deconvolution methods with expert-guided parameter tuning and batch processing. It enhances visualization and enables more accurate and faster downstream analyses. Empirically, it (a) improves segmentation robustness under higher noise, (b) allows substantial imaging throughput gains (≈3.5–4× in tested scenarios) without significant quality loss, and (c) offers 10–100× computational speedups over common CPU-based tools, enabling real-time parameter exploration. Future work includes predictive parameter optimization via regression on image/noise/metadata, leveraging acquisition metadata for automated recommendations, extending to multichannel data to serve light microscopy, and incorporating modern deep learning-based restoration within the Quasar framework.

Limitations
  • Denoising trade-off: Most algorithms require balancing noise reduction against edge blurring, which can degrade boundary resolution if overprocessed; expert tuning remains necessary.
  • Dimensionality: Current implementations operate slice-by-slice (2D) for volumes due to memory and computational constraints; while 3D versions can improve restoration slightly, they incur ~50% higher compute and significant memory overhead on typical GPUs.
  • Generalizability: Optimal parameters are data-dependent (sample prep, modality, structures of interest), necessitating expert-in-the-loop adjustments despite initial parameter estimation.
  • Hardware dependence: Achieving low-latency interactivity and fast batch processing benefits from access to a capable GPU; performance on CPU-only systems may limit responsiveness.