
Engineering and Technology
Widely accessible method for 3D microflow mapping at high spatial and temporal resolutions
E. Lammertse, N. Koditala, et al.
Discover a widely accessible method for mapping microflows in 3D at high resolution, developed by Evan Lammertse and colleagues. The approach uses brightfield microscopy and open-source software to visualize and quantify fluid motion at the microscale, informing the design of microfluidic structures.
Introduction
The study addresses the need for high-resolution, experimentally validated 3D flow mapping in microfluidic systems, where numerical simulations can be inadequate due to complex geometries, driving forces, and fluid–surface interactions. Conventional micro-PIV (µPIV) is widely used but is fundamentally 2D with limited Z-resolution and often requires complex, expensive setups or specialized illumination to improve depth resolution. Defocusing micro-PTV (µPTV) extends single-particle tracking into 3D by comparing defocused particle images to reference libraries, but prior implementations largely relied on fluorescence microscopy with additional optics (e.g., pinhole apertures, cylindrical lenses), reducing signal and temporal resolution and increasing complexity. The authors propose a widely accessible, high-resolution µPTV method using brightfield microscopy and open-source tools (Fiji/ImageJ with TrackMate) to track particles in 2D, followed by Z-classification against a defocus reference library. They compare a deep learning classifier with a normalized cross-correlation approach, and validate across representative single-phase and two-phase microfluidic examples, aiming to demonstrate accurate, fast, and practical 3D flow mapping at high spatiotemporal resolution.
Literature Review
Prior work establishes µPIV as the gold standard for single- and multiphase microflow mapping, but its depth resolution is limited to several micrometers and 3D reconstruction between slices is computationally costly. Enhancements using multiple cameras, confocal disks, or specialized illumination improve depth resolution at significant cost and complexity. Defocusing µPTV enables 3D single-particle tracking by matching defocused images to reference libraries; prior implementations often used fluorescence microscopy coupled with asymmetric defocusing (e.g., 3-pinhole apertures or cylindrical lenses), which reduce signal and necessitate low-NA objectives to increase depth of field, limiting temporal resolution and accessibility. Earlier reports suggest cross-correlation as a strong baseline for Z-classification, with some indicating deep learning may not always surpass cross-correlation in accuracy. The present work builds on these methods, demonstrating that with brightfield imaging, optimized optical aberrations, and careful dataset labeling, a deep learning model can outperform cross-correlation in Z-accuracy, while providing large computational speed gains, and validates the method across canonical microfluidic flows including channel expansions, displacement structures, and droplet internal flows.
Methodology
Imaging and optical setup: A simple brightfield microscope is used with a 20×/0.45 NA objective. The objective’s correction ring is set to 0.2 mm to introduce spherical aberration, producing an asymmetric defocus pattern that resolves axial position unambiguously. Polystyrene microbeads of 3 µm diameter are used as tracer particles to generate robust defocus patterns with sufficient pixel support.
Data acquisition and 2D tracking: High-speed brightfield videos are acquired (e.g., 300 fps, 1920 × 1080). Particle density is tuned to remain below 0.01 (pixel/pixel) to balance data density with manageable tracking errors. TrackMate (Fiji/ImageJ) detects and links particle positions over time to produce 2D (XY) trajectories; manual curation removes erroneous spots or fixes linking errors as needed, particularly at higher seed densities.
Z-position classification: A reference library of defocused particle images is assembled at known Z-levels (e.g., n = 110 levels covering a working Z-range h ≈ 54 µm). Each particle image along a track is classified against this library using either: (a) normalized cross-correlation with reference motifs; or (b) a supervised deep learning model trained on labeled defocus images. The DL model outputs continuous Z estimates based on discrete reference classes. Attention to labeling accuracy is emphasized; dataset tilt or device misalignment can bias labels, necessitating re-labeling and retraining to restore accuracy and precision.
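The cross-correlation branch of this classification step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the zero-lag, zero-mean normalized cross-correlation score are assumptions; the actual pipeline may correlate over spatial lags and interpolate between reference levels.

```python
import numpy as np

def classify_z_ncc(particle_img, reference_stack, z_levels):
    """Assign a Z position by normalized cross-correlation against a
    defocus reference library (illustrative sketch).

    particle_img    : 2D array, cropped defocused particle image
    reference_stack : 3D array (n_levels, H, W) of reference motifs
    z_levels        : 1D array of known Z positions (µm), one per level
    """
    # Zero-mean, unit-norm version of the query image
    p = particle_img - particle_img.mean()
    p = p / (np.linalg.norm(p) + 1e-12)
    scores = np.empty(len(reference_stack))
    for i, ref in enumerate(reference_stack):
        r = ref - ref.mean()
        r = r / (np.linalg.norm(r) + 1e-12)
        scores[i] = np.sum(p * r)  # zero-lag normalized cross-correlation
    # Best-matching reference level gives the Z estimate
    return z_levels[np.argmax(scores)], scores
```

With n = 110 reference levels over h ≈ 54 µm, the level spacing is roughly 0.5 µm, consistent with the sub-micrometer axial accuracy reported later.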
Accuracy and precision evaluation: Accuracy (σ) is assessed using synthetic images with known ground-truth positions, reporting RMSE between predicted and actual coordinates in X, Y, Z across the working Z-range. Precision (E_prec) is evaluated on experimental Poiseuille flow in a straight rectangular channel (90 µm × 35 µm cross-section) using index-matched fluids (0.25 mg/ml beads in 45/55% v/v water/glycerol). For each track, Z vs (X,Y) is fit with a linear model to remove system tilt; E_prec is the median RMSE around the fit.
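The per-track precision metric described above (linear fit of Z against X and Y to remove system tilt, then RMSE of the residuals, with the median taken over tracks) can be expressed compactly. The function name and the exact linear model are illustrative assumptions.

```python
import numpy as np

def track_z_precision(tracks):
    """Median per-track RMSE of Z about a linear fit in (X, Y).

    tracks : list of (N, 3) arrays of X, Y, Z positions (µm) per track.
    The linear fit removes system tilt; the residual RMSE estimates
    the Z precision E_prec (sketch of the described procedure).
    """
    rmses = []
    for t in tracks:
        X, Y, Z = t[:, 0], t[:, 1], t[:, 2]
        # Least-squares plane Z = a*X + b*Y + c
        A = np.column_stack([X, Y, np.ones_like(X)])
        coef, *_ = np.linalg.lstsq(A, Z, rcond=None)
        resid = Z - A @ coef
        rmses.append(np.sqrt(np.mean(resid ** 2)))
    return float(np.median(rmses))
```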
Flow field reconstruction: Instantaneous velocities are computed from frame-to-frame displacements. Vectors with unphysical magnitudes are filtered (e.g., w_max = 1300 µm/s for out-of-plane speed near the step). Velocities are lattice-averaged over the measurement volume (e.g., 416 × 93 × 43 µm in X, Y, Z) using specified element sizes (e.g., 4 × 2 × 1 µm) with 50% overlap; median counts of vectors per non-empty element are recorded (e.g., 10).
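Lattice averaging with 50% overlap means the lattice pitch is half the element size, so each velocity vector contributes to several neighboring elements. A minimal sketch (function name, argument layout, and the brute-force loop are assumptions; a production version would bin points rather than scan every element):

```python
import numpy as np

def lattice_average(points, vels, origin, extent, elem, overlap=0.5):
    """Average velocity vectors on an overlapping lattice (sketch).

    points : (N, 3) positions; vels : (N, 3) instantaneous velocities
    origin : lattice corner; extent : volume size; elem : element size
    With overlap = 0.5 the lattice pitch is half the element size.
    """
    origin = np.asarray(origin, float)
    extent = np.asarray(extent, float)
    elem = np.asarray(elem, float)
    pitch = elem * (1.0 - overlap)
    n = np.floor((extent - elem) / pitch).astype(int) + 1
    mean_v = np.full((*n, 3), np.nan)   # NaN marks empty elements
    counts = np.zeros(tuple(n), int)
    for idx in np.ndindex(*n):
        lo = origin + np.array(idx) * pitch
        hi = lo + elem
        mask = np.all((points >= lo) & (points < hi), axis=1)
        if mask.any():
            mean_v[idx] = vels[mask].mean(axis=0)
            counts[idx] = mask.sum()
    return mean_v, counts
```

The `counts` array supports the reported statistic of median vectors per non-empty element.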
Continuity error analysis: To validate physical plausibility in the absence of ground truth, the divergence-free condition for incompressible flow is assessed. The scalar continuity error parameter η is computed over a subvolume (e.g., 178 µm < x < 255 µm surrounding the expansion region) across various lattice element sizes (1–6 µm in each dimension). The evolution of median η with element size diagnoses noise and discretization trade-offs and compares cross-correlation vs. deep learning outputs.
Computational cost benchmarking: Classification speed is benchmarked on 2681 particle image time-stacks (median track length 53 frames). Cross-correlation is executed on a CPU (Intel i5-4570S, 2.9 GHz, 16 GB RAM), while the DL model runs inference on a GPU (NVIDIA GeForce RTX 2070 SUPER, 7.79 GB). DL training time is measured and compared to cumulative cross-correlation classification time.
Experimental validations:
- Channel step expansion: A 45 µm-wide, 13 µm-deep channel expands to 90 µm-wide, 35 µm-deep. A 0.52 mg/ml bead suspension is injected at 5 µL/hr. A single 8300-frame video at 300 fps yields 701 pathlines (median length 121), totaling 79,814 spots. Flow mapping and velocity fields are reconstructed and analyzed (including near-wall considerations).
- Displacement structures: Overhangs 19 µm thick within a 30 µm-deep channel generate displacement; 0.42 mg/ml seed solution at 5 µL/hr near overhangs with a 20 µL/hr co-flow. Trajectories are categorized by initial wall proximity and interaction with structures.
- Droplet microfluidics: Internal flow within droplets is mapped in straight and curved channels to reveal recirculation and folding patterns (details summarized from abstract).
Key Findings
- Z-accuracy: On synthetic datasets spanning n = 110 Z-levels (working range h ≈ 54 µm), the deep learning model achieved lower Z RMSE (σ = 0.41 µm; σ/h = 0.0075) than cross-correlation (σ = 0.63 µm; σ/h = 0.012). XY localization RMSE varied with Z (median σX = 0.12 µm, σY = 0.21 µm), increasing toward the positive Z extreme.
- Z-precision: On experimental Poiseuille flow, median Z precision E_prec was comparable between methods (cross-correlation 0.14 µm; deep learning 0.17 µm; IQR ≈ 0.06 µm). Y-direction E_prec was ≈ 0.12 µm, indicating full-3D pathline precision near lateral precision limits.
- Importance of re-labeling: With a dataset exhibiting a subtle tilt (0 µm reference level spanning −2 to +2 µm in Z across the field), the DL model performed poorly without re-labeling (E_prec = 0.26 µm; pathline slope −0.0012 µm/µm vs. −0.0036 µm/µm for cross-correlation). After re-labeling and retraining, DL E_prec improved to 0.17 µm and the median slope matched cross-correlation (−0.0036 µm/µm), highlighting sensitivity of DL to labeling bias.
- Channel step flow mapping: From 79,814 spots (701 pathlines), both classifiers produced similar 3D trajectories with very few unphysical vectors (20 for cross-correlation; 4 for DL out of 79,112 instantaneous vectors). Lattice-averaging (416 × 93 × 43 µm volume; 4 × 2 × 1 µm elements; 50% overlap) revealed: upstream Poiseuille parabolic profile; pre-expansion divergence (upward in Z and outward in Y) starting ≈10 µm before the step; maximum out-of-plane velocity just downstream, then decaying to near-zero; continued gradual upward expansion ≈40 µm downstream.
- Continuity validation: Median continuity error η was below 0.5 for all but the smallest lattice (1 × 1 × 1 µm: η ≈ 0.88–0.90). η decreased with element size to a minimum at 4 × 4 × 4 µm (η = 0.14 for both methods), then slightly increased at 6 × 6 × 6 µm (η ≈ 0.16–0.17). A rectangular lattice emphasizing Z and Y resolution (4 × 2 × 1 µm) maintained acceptable continuity (η ≈ 0.45–0.50), enabling higher axial resolution.
- Computational efficiency: DL inference was ≈2 orders of magnitude faster than cross-correlation (median 0.049 s vs. 20.6 s per time-stack). DL training took 4572 s (~76 min), still an order of magnitude less than total cross-correlation classification time for the dataset. Speed advantages are expected to grow with larger datasets and benefit from GPU acceleration.
- Displacement structures: In a 30 µm-deep channel with 19 µm overhangs, four trajectory types were observed depending on initial wall proximity. Particles near the wall either remained under overhangs (Type 1) or were displaced upward and outward after interacting with the structures (Type 2), elucidating how such geometries shift particle centers of mass across streamlines for efficient capture.
- Droplet microfluidics: High-resolution internal flow maps revealed recirculation structures and folding patterns within droplets in straight and curved channels, consistent with mechanisms that can enhance internal mixing (as summarized in the abstract).
Discussion
The results demonstrate that brightfield defocusing µPTV, combined with open-source particle tracking and modular Z-classification, can produce accurate and precise 3D microflow maps at high temporal and spatial resolution without specialized optics. The deep learning classifier not only matches but surpasses cross-correlation in Z-accuracy while maintaining comparable precision and drastically reducing computation time, making it particularly suitable for iterative experiments or design optimization workflows. The channel step case verified the method’s capability to resolve complex 3D flow features and to generate physically consistent velocity fields, as indicated by low continuity error across appropriate lattice sizes. The analysis of displacement structures provides mechanistic insight into how overhang geometries induce 3D particle displacements, supporting their use for efficient particle or cell trapping. The droplet internal flow mapping reveals recirculation and folding features pertinent to mixing and transport processes inside droplets. Overall, the approach addresses the need for accessible, high-fidelity 3D flow mapping, enabling broader adoption in microfluidic research and aiding the design and validation of microfluidic devices.
Conclusion
This work introduces a widely accessible, high-resolution 3D microflow mapping technique based on brightfield defocusing and open-source software. By tracking particles in 2D with TrackMate and classifying Z-positions via a reference library using either cross-correlation or a deep learning model, the method achieves sub-micrometer axial accuracy and precision, with the deep learning approach providing superior accuracy and markedly faster inference. Validation on canonical microfluidic systems—including a channel step expansion, displacement structures, and droplet flows—demonstrates the capability to resolve complex 3D trajectories and physically consistent velocity fields. The technique’s simplicity and performance make it well suited for widespread use and for accelerating microfluidic device design.
Future directions include: further automating and standardizing training-data labeling (e.g., tilt correction) to make the deep learning classifier more robust; improving near-wall measurements and handling of particle–wall exclusion zones; expanding the axial working range and optimizing element sizing for application-specific resolution; extending to more complex multiphase flows; and integrating streamlined GPU-enabled pipelines for real-time or high-throughput flow mapping.
Limitations
- Near-wall measurements are inherently limited: flow cannot be resolved within approximately one particle radius (1.5 µm) of walls, and near-wall seeding is sparse due to low seeding densities required for reliable defocus-based tracking.
- Seeding density trade-offs: higher densities yield more data but increase spot overlap and tracking/linking errors, necessitating manual curation.
- Sensitivity of deep learning to labeling bias: small tilts or misalignments can bias Z labels; re-labeling and retraining are required to achieve optimal performance.
- Z-range limits: accuracy deteriorates near the negative extreme of the working Z-range, indicating optical limits; reducing the reference step size may not improve normalized accuracy once the step becomes comparable to σ.
- Computational comparisons reflect hardware differences (GPU-accelerated DL vs CPU-limited cross-correlation), which may influence relative speeds across setups.
- As with all particle-based velocimetry, measurements are sparse where particles are absent, and interpolation/lattice-averaging choices impact resolution and continuity metrics.