Engineering and Technology
Real-time multiaxial strain mapping using computer vision integrated optical sensors
S. Hong, V. P. Rachim, et al.
The study addresses key limitations of flexible strain sensors—poor reproducibility, vulnerability to environmental noise (temperature and curvature), and degradation over time—common in nanomaterial-based piezoresistive and piezocapacitive devices. Although optical strain sensors based on piezotransmittance improve uniformity and repeatability, they are affected by external light conditions and surface curvature. The authors hypothesize that integrating AI-enabled computer vision with an optically read soft sensor can enhance accuracy, enable automatic correction for error sources (focus distance, hysteresis, bending), and support multiaxial strain mapping. The purpose is to realize a highly reproducible, sensitive, durable, and scalable strain sensing platform suitable for real-world applications, including complex body motion monitoring.
The paper surveys flexible strain sensors that rely on resistance or capacitance changes using diverse conductive media: CNTs, nanoparticles, nanowires, thin metal films, conductive polymers, and laser-induced graphene. These offer higher sensitivity and large strain ranges but suffer from randomness in nanomaterial distribution, complex fabrication processes, environmental susceptibility (temperature, curvature), and durability issues (e.g., conductive layer damage, hydrogel dehydration). Optical strain sensors based on transmittance changes have shown high uniformity and repeatability but are sensitive to external illumination and surface state. Advances in computer vision and AI motivate using vision-based analysis to overcome these drawbacks and to enable robust sensing with simpler fabrication.
System design: The CVOS sensor comprises (1) a polymer sensing part, a white Ecoflex 0030 film (500 µm thick) patterned with circular micro-markers by CO2 laser marking, and (2) a compact optical system (a tiny camera, ArduCAM IMX219 AF; a compact microscope lens, iMicro C; and LED illumination) packaged in a 3D-printed housing that maintains a 5 mm working distance. The white pigment improves contrast by reducing reflection and light penetration. The symmetric square sensor (typically 14×14 mm) carries micro-markers of mean diameter 528.11 ± 23.58 µm at an inter-marker pitch of 670.23 ± 18.2 µm.

Camera calibration: Lens-induced pincushion distortion is corrected via OpenCV calibration using multi-view images of a circular grid, followed by border removal and resizing.

Image processing and micro-marker detection: A lightweight pipeline suitable for embedded processors applies CLAHE for contrast enhancement, unsharp masking (with a provided 3×3 sharpening filter) and Gaussian blur for edge sharpening and denoising, border removal, Otsu binarization, and contour-based bounding-box extraction, filtering out implausibly sized candidates.

MOI initialization and tracking: The nine micro-markers (MOIs) nearest the image center in the no-strain frame are selected by Euclidean distance. Tracking between frames uses nearest-neighbor association in Euclidean space. To handle markers exiting the field of view, virtual MOIs are generated: when outer MOIs cross threshold corner areas, their current coordinates are predicted from inner MOIs using a ratio update rule based on prior-frame positions.

State detection and response correction: The loading/unloading state is determined by the sign of the inter-frame response change. Curvature (out-of-plane bending) is detected using a Voronoi tessellation built from the MOI centers (outer cells excluded); the average and standard deviation of the Voronoi cell areas classify linear vs. bending states (a standard deviation above 300 indicates bending).
The curvature radius is estimated from the average cell area via an empirical log relation: R = -62.21 ln(Avg) + 565.65. A lightweight fully connected neural network (two hidden layers × 10 nodes), implemented in TensorFlow Lite (25.7 KB), corrects responses using as features the curvature state, estimated radius, loading/unloading state, and MOI positions; training used combined test and simulation datasets reflecting hysteresis.

Multiaxial strain mapping: Correspondences between initial and current micro-marker positions are established with FLANN. A homography is estimated via RANSAC, and a 7×9 grid (63 points) is warped to compute per-point strain magnitude (Euclidean displacement) and direction (arctangent of the displacement components). For quadrant-wise direction analysis, points are partitioned into four quadrants and dominant directions are computed as averages weighted by strain magnitude.

Numerical simulations: ANSYS simulations of tensile deformation treat Ecoflex 0030 as linear elastic (E = 126.23 kPa; ν = 0.49), with one edge fixed and the opposite edge displaced to ε = 0–80%. Simulated micro-marker trajectories agree with measurements (mean absolute error 4.8 pixels) and are used to model the bending response from linear-state simulations.

Performance testing: Samples (14, 17, and 20 mm squares) were mounted on a universal testing machine with the optical module attached. Embedded boards (Raspberry Pi 4 / Zero 2) executed real-time processing and powered the LED.

Sampling rate characterization: ~83 FPS on a PC and ~15 FPS on a Raspberry Pi 4 at 640×480; 30 FPS at 320×240 (with a reduced gauge factor (GF) and a doubled detection limit). Human-subject demonstrations were IRB-approved (PIRB-2022-E042) using body braces with integrated sensors; an IMU (WT901BLECL) provided comparison data.
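The Voronoi-based state classification and the empirical radius relation above can be sketched as follows. The threshold (300) and the log relation R = -62.21 ln(Avg) + 565.65 come from the paper; the use of `scipy.spatial.Voronoi`, the shoelace area computation, and the exclusion of unbounded cells are assumed implementation details.

```python
import numpy as np
from scipy.spatial import Voronoi

def classify_curvature(marker_pts, std_threshold=300.0):
    """Classify linear vs. bending state from the Voronoi cell areas of
    marker centers; for a bending state, estimate the curvature radius
    from the average cell area via the paper's empirical log relation."""
    vor = Voronoi(np.asarray(marker_pts, dtype=float))
    areas = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue  # outer (unbounded) cells are excluded, as in the paper
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]
        # Shoelace formula for the polygon area of a bounded cell
        areas.append(0.5 * abs(np.dot(x, np.roll(y, 1))
                               - np.dot(y, np.roll(x, 1))))
    avg_area = float(np.mean(areas))
    std_area = float(np.std(areas))
    bending = std_area > std_threshold
    # Empirical relation from the paper: R = -62.21 ln(Avg) + 565.65
    radius = -62.21 * np.log(avg_area) + 565.65 if bending else None
    return bending, avg_area, std_area, radius
```

The intuition is that in-plane stretching scales all cell areas roughly together (low spread), whereas out-of-plane bending compresses cells unevenly across the field of view, inflating the standard deviation.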
- High sensitivity and linearity over a large range: Gauge factor (GF) = 503.4; hysteresis ≈ 0.9%; R^2 > 0.99 over ε = 0–81%.
- Low detection limit: resolved small strain steps of 0.19%, 0.25%, and 0.31%, corresponding to a limit of detection of ~0.19%.
- Predictive accuracy: numerical simulations predicted micro-marker positions with mean absolute error ≈ 4.8 pixels.
- Uniformity and reproducibility: four sensors exhibited similar performance with mean absolute percentage error (MAPE) = 3.1% across samples.
- Durability: after 1,000, 5,000, and 10,000 load–unload cycles (0–60% strain), response curves remained nearly identical, indicating high reliability and long-term operability.
- Extended working range via virtual MOIs: the virtual-MOI tracking scheme estimates the positions of outer MOIs after they leave the field of view, reducing corner detection errors and extending the working range.
- Curvature compensation: bending vs linear state responses differed by up to ~3% at 80% strain; Voronoi-based curvature detection and correction accurately estimate bending-state responses from linear-state models.
- Multiaxial strain mapping: quadrant-based direction analysis accurately captured strain directions; average measured directions (63 samples) matched intended directions (e.g., Q1 = 223.52 ± 26.55°, Q2 = 168.47 ± 20.23°, Q3 = 130.44 ± 35.86°, Q4 = 226.41 ± 25.21°).
- Real-time performance: sampling rates of ~83 FPS (PC) and ~15 FPS (Raspberry Pi 4) at 640×480; 30 FPS at 320×240 (with GF ~251.7 and doubled detection limit ~0.38%).
- Application demonstrations: accurately tracked elbow, wrist, and knee bending; captured forearm rotation trends comparable to IMU without drift; classified six shoulder movements (flexion, extension, abduction, adduction, internal/external rotation) using multiaxial mapping, surpassing magnitude-only sensors.
Integrating computer vision with a simple, conductive-layer-free soft substrate addresses core shortcomings of conventional strain sensors. The controlled optical environment and regular micro-marker pattern enable lightweight, interpretable processing, robust tracking (including virtual MOIs), and automated correction for hysteresis and curvature, leading to high sensitivity, linearity, and low hysteresis across large strains. Voronoi-based curvature state detection supports reliable operation on curved surfaces, and simulation-informed correction reduces bending-induced errors. Multiaxial strain mapping yields direction-sensitive measurements that facilitate classification of complex motions—an ability typically requiring multi-sensor arrays or IMUs. Demonstrations show the CVOS sensor can match IMU trends for rotational motions without drift, indicating potential as a long-term, wearable alternative. The high reproducibility (laser patterning), durability (>10,000 cycles), and scalability (design parameters readily tunable) highlight relevance for human-machine interfaces and industrial sensing in real-world conditions.
The study introduces a computer-vision-integrated optical strain (CVOS) sensor that combines a laser-patterned Ecoflex substrate, compact optics, and a lightweight yet adaptive analysis algorithm to deliver real-time, multiaxial strain mapping with high sensitivity (GF = 503.4), low hysteresis (0.9%), broad range (ε = 0–81%), a low detection limit (~0.19%), and long-term durability (≥10,000 cycles). Automated correction for focus distance, hysteresis, and curvature, together with virtual MOIs, extends robustness and measurement range. Multiaxial mapping enables precise direction detection and classification of complex motions, positioning the CVOS sensor as a versatile platform for wearable healthcare, rehabilitation, and broader industrial applications. Future work will develop softer device platforms with fewer rigid components, extend curvature detection to in-plane bending, and explore more advanced AI models (e.g., deep learning) on higher-performance embedded hardware to improve speed and functionality.
- Processing speed on low-power embedded boards is limited (≈15 FPS at 640×480 on Raspberry Pi 4), potentially constraining high-speed motion capture; increasing FPS via lower resolution reduces sensitivity (GF ~251.7) and raises detection limit (~0.38%).
- Current curvature state detection focuses on out-of-plane bending; in-plane and more complex curvature states are not yet fully addressed.
- Performance assumes a controlled optical environment (stable illumination, fixed focal distance); significant deviations could affect detection without recalibration or algorithm adjustments.
- Strap-type mounting can introduce strain components not solely due to body motion (e.g., additional y-axis strain), requiring careful system integration.
- The method relies on visible micro-markers; contamination or severe surface damage may impair detection and require maintenance (e.g., cleaning post-laser impurities).