Introduction
Near-eye displays are crucial for augmented reality (AR) and virtual reality (VR), but challenges remain in creating immersive and comfortable experiences. These challenges include achieving a compact form factor, resolving vergence-accommodation conflict, and attaining high resolution with a large eyebox. Waveguide image combiners are a leading technology for AR glasses due to their compact size. However, they typically only display images at infinity conjugate, leading to focus spread effects and ghost noise when finite-conjugate images are projected. Holographic displays, on the other hand, offer potential solutions, including aberration-free images, per-pixel depth control, and large color gamuts. However, designing compact near-eye holographic displays remains difficult due to limited étendue. This paper proposes a novel architecture combining the strengths of waveguide and holographic displays to overcome these limitations, paving the way for 3D holographic AR glasses.
Literature Review
Existing near-eye display architectures include birdbath, curved mirror, retinal projection, and pin mirror designs. Waveguide image combiners stand out for their compact form factor and étendue expansion capability, which together yield a sufficient eyebox size and a large field of view. Waveguide designs employ different light coupling elements: geometric waveguides use partially reflective surfaces, while diffractive waveguides use surface relief gratings, volume Bragg gratings, or metasurfaces. Despite these advantages, waveguide displays suffer from fixed depth rendering, focus spread effects when pupil replication is applied to finite-conjugate images, and difficulty achieving sufficient brightness. Holographic displays, which modulate light wavefronts using spatial light modulators (SLMs), offer aberration-free, high-resolution images and per-pixel depth control, but have historically struggled with compact form factors and limited étendue. Previous attempts to combine waveguides and holograms have been limited: they either displayed only static images or, because of the substrate thicknesses required, failed to address the focus spread effect in exit-pupil expanding waveguides.
Methodology
This research introduces a waveguide holography system that addresses the focus spread effect of exit-pupil expanding waveguides. The core concept is to model the coherent light interaction within the waveguide as propagation through multi-channel kernels, with a complex wavefront capture system and a phase-shifting digital holography algorithm enabling precise model calibration.

The hardware consists of a collimated laser, an SLM, an exit-pupil expanding waveguide with surface relief gratings, and linear polarizers. The SLM, used without a projection lens, modulates the input light, which then propagates through the waveguide. The pupil replication process creates multiple shifted copies of the wavefront that interfere with one another; this interference is typically considered an artifact, but here it is leveraged to precisely shape the output wavefront with the SLM. A region of interest (ROI) is computationally steered to match the user's eye pupil, exploiting the expanded étendue and enabling a software-steered eyebox.

The coherent light interaction inside the waveguide is modeled as a linear shift-invariant (LSI) system, which reduces the modeling to a convolution operation. A multi-channel convolution model with complex apertures accounts for the spatially variant properties of the waveguide, and the model also includes parameters for the SLM response and physical wavefront propagation. For calibration, a Mach-Zehnder interferometer captures the complex wavefront exiting the waveguide, and the model parameters are trained against these measurements with an L1-norm loss. After calibration, computer-generated holograms (CGHs) are rendered by appending numerical propagation to the model pipeline and iteratively updating the input phase until the target image is achieved. Output wavefronts are evaluated either with an imaging camera behind a pupil mask or with a wavefront camera followed by numerical propagation to the image plane.
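The phase-shifting digital holography step recovers a complex wavefront from intensity-only camera captures. A minimal sketch of the standard four-step variant (not necessarily the authors' exact implementation; the unit-amplitude plane-wave reference and the four phase shifts of 0, π/2, π, and 3π/2 are assumptions):

```python
import numpy as np

def interferogram(obj, delta):
    """Intensity of the object field interfered with a unit-amplitude
    plane-wave reference carrying phase shift delta."""
    return np.abs(obj + np.exp(1j * delta)) ** 2

def recover_field(I0, I1, I2, I3):
    """Four-step phase-shifting recovery of the complex object field from
    interferograms captured at reference shifts 0, pi/2, pi, 3pi/2.
    With I(d) = |O|^2 + 1 + 2|O|cos(phi - d):
      I0 - I2 = 4|O|cos(phi),  I1 - I3 = 4|O|sin(phi)."""
    return ((I0 - I2) + 1j * (I1 - I3)) / 4.0
```

For a noiseless capture this recovery is exact; in practice the reference amplitude and shift values would come from the interferometer calibration.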
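The multi-channel convolution model and iterative CGH rendering can likewise be sketched. The paper optimizes the input phase with a calibrated model and an L1 loss; the sketch below substitutes a simpler adjoint-based error-reduction (Gerchberg-Saxton-style) loop, and the kernel values, propagation parameters, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def multi_channel_forward(u_in, kernels):
    """Waveguide model: a sum of convolutions, each expressed as a
    complex aperture multiplication in the Fourier domain."""
    U = np.fft.fft2(u_in)
    return sum(np.fft.ifft2(U * A) for A in kernels)

def multi_channel_adjoint(u_out, kernels):
    """Adjoint of the forward operator (conjugated apertures)."""
    U = np.fft.fft2(u_out)
    return sum(np.fft.ifft2(U * np.conj(A)) for A in kernels)

def angular_spectrum(u, wavelength, dx, z):
    """Free-space propagation over distance z (angular spectrum method,
    square field assumed)."""
    n = u.shape[-1]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u) * H)

def render_cgh(target_amp, kernels, wavelength, dx, z, iters=30, rng=None):
    """Phase-only CGH via error reduction through the waveguide model
    plus propagation to the image plane at depth z."""
    rng = np.random.default_rng(rng)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iters):
        u_slm = np.exp(1j * phase)  # phase-only SLM constraint
        u_img = angular_spectrum(multi_channel_forward(u_slm, kernels),
                                 wavelength, dx, z)
        u_img = target_amp * np.exp(1j * np.angle(u_img))  # target amplitude
        u_back = multi_channel_adjoint(
            angular_spectrum(u_img, wavelength, dx, -z), kernels)
        phase = np.angle(u_back)
    return phase
```

With a single all-ones aperture the forward model reduces to the identity, and the adjoint satisfies the inner-product identity ⟨y, Fx⟩ = ⟨F*y, x⟩, which is a useful sanity check on any such linear model.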
Key Findings
Experimental results from both a compact prototype and a benchtop prototype demonstrate the system's capabilities. The architecture displays full 3D images and achieves a large software-steered eyebox. Ablation analysis shows the importance of multi-channel modeling and complex apertures for accurately capturing waveguide behavior. The model significantly improves image quality, even at infinity depth, and effectively eliminates the ghost noise and aberrations associated with finite-depth holograms in conventional waveguide displays. Captures from both an imaging camera and a wavefront camera confirm the system's ability to reconstruct holograms at desired depths, across the full depth range, with arbitrarily selectable eyebox positions. Temporally multiplexed 3D results show accurate rendering of blur and occlusion against a focal stack target. Finally, by stitching the phase discontinuities caused by beam clipping, the system effectively increases the numerical aperture, improves the Strehl ratio more than threefold, and achieves sub-arc-minute resolution, surpassing the resolution limits of conventional waveguide displays.
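The Strehl ratio quoted above is the peak intensity of a system's point spread function relative to the diffraction-limited peak, and it can be computed directly from the pupil field. The sketch below only illustrates the metric itself (it does not reproduce the paper's stitching procedure), and the synthetic π phase step is an assumed stand-in for a phase discontinuity:

```python
import numpy as np

def strehl_ratio(pupil_amp, pupil_phase):
    """Peak PSF intensity of the aberrated pupil relative to the same
    pupil with a flat (diffraction-limited) phase. PSFs are computed as
    the squared magnitude of the pupil field's Fourier transform."""
    psf = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * pupil_phase))) ** 2
    ideal = np.abs(np.fft.fft2(pupil_amp)) ** 2
    return psf.max() / ideal.max()
```

For a uniform pupil with a π phase step across its middle, the ratio drops to roughly (2/π)² ≈ 0.41; removing the step restores it to 1, about a 2.5× improvement in this toy case.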
Discussion
The findings demonstrate the feasibility of creating compact 3D holographic AR glasses by combining the strengths of waveguide and holographic display technologies. The proposed system effectively addresses the limitations of conventional waveguide displays, achieving high resolution, a large eyebox, and true 3D capabilities. The multi-channel model accurately captures the complex light interactions within the waveguide, allowing for precise control of the output wavefront and overcoming the focus spread effect. The software-steerable eyebox eliminates the need for mechanical pupil steering, further contributing to the system's compact form factor. The achievement of sub-arc-minute resolution showcases the potential for significantly improved image quality in waveguide-based AR displays. The ability to render holograms with accurate depth cues, blur, and occlusion further enhances the realism and immersion of the 3D experience.
Conclusion
This paper presents a significant advancement towards creating ultra-compact, true 3D holographic AR glasses. The waveguide holography system combines the benefits of waveguide combiners and holographic displays, overcoming key limitations of both technologies. Future research directions include improving SLM technology to address artifacts like phase flickering and increase the field of view, enhancing the robustness of the calibration process, refining the waveguide model for greater accuracy and efficiency, and exploring applications of the multi-channel model to other holographic display systems. The use of laser light sources could also improve brightness and efficiency.
Limitations
Current limitations include SLM-induced artifacts (DC noise and phase flickering) and a field of view limited by SLM pixel pitch. The calibration process is also sensitive to mechanical perturbation. While the model accurately captures many waveguide properties, further improvements in fidelity could enhance performance. Computational efficiency of the model could also be improved by addressing redundancies in parameters.