


Over the past century, as displays have evolved, people have demanded ever more realistic and immersive experiences in theaters. Here, we present a tomographic projector for a volumetric display system that accommodates large audiences while providing a uniform viewing experience.

We also present a near-eye augmented reality display with resolution and focal depth dynamically driven by gaze tracking. The display combines a traveling microdisplay, relayed off a concave half-mirror magnifier, for the high-resolution foveal region with a wide field-of-view peripheral display that uses a projector-based Maxwellian-view arrangement whose nodal point is translated to follow the viewer's pupil during eye movements via a traveling holographic optical element. The same optics relay an image of the eye to an infrared camera used for gaze tracking, which in turn drives the foveal display location and the peripheral nodal point. Our display supports accommodation cues by varying the focal depth of the microdisplay in the foveal region, and by rendering simulated defocus on the "always in focus" scanning laser projector used for the peripheral display. The resulting family of displays significantly improves on the field-of-view, resolution, and form-factor tradeoffs of previous augmented reality designs. We show prototypes supporting 30, 40, and 60 cycles per degree (cpd) of foveal resolution at a net 85° × 78° field of view per eye.
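
To make the gaze-driven behavior concrete, here is a minimal sketch of the two per-frame decisions such a display has to make: set the varifocal foveal path to the fixated depth, and compute simulated retinal defocus for the always-in-focus peripheral projector. The function names, the 4 mm pupil, the toy depth map, and the small-angle blur approximation are illustrative assumptions, not details taken from the prototype.

```python
import numpy as np

PUPIL_DIAMETER_M = 4e-3   # assumed 4 mm pupil; not specified in the text

def depth_at_gaze(depth_map, gaze_uv):
    """Look up scene depth (meters) at a normalized gaze position (u, v in [0, 1])."""
    h, w = depth_map.shape
    x = int(gaze_uv[0] * (w - 1))
    y = int(gaze_uv[1] * (h - 1))
    return depth_map[y, x]

def retinal_blur_rad(depth_m, focus_m, pupil_m=PUPIL_DIAMETER_M):
    """Approximate retinal blur circle in radians of visual angle.

    Small-angle approximation: blur ~= pupil diameter x dioptric defocus.
    """
    return pupil_m * np.abs(1.0 / depth_m - 1.0 / focus_m)

# Toy frame: a 2 m background with a 0.5 m object near the center.
depth_map = np.full((480, 640), 2.0)
depth_map[200:300, 300:400] = 0.5

gaze_uv = (0.55, 0.52)                      # viewer looks at the near object
focus_m = depth_at_gaze(depth_map, gaze_uv)

# 1) Foveal path: drive the varifocal microdisplay to the fixated depth.
print(f"microdisplay focal depth: {focus_m:.2f} m")

# 2) Peripheral path: render simulated defocus for the scanning laser projector.
blur_map = retinal_blur_rad(depth_map, focus_m)
print(f"peripheral blur range: {blur_map.min():.4f} to {blur_map.max():.4f} rad")
```

In the real system, the same gaze estimate would also steer the mechanical position of the foveal inset and the holographic element that translates the peripheral nodal point.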

Rendering sharp imagery in the foveal region while reproducing the retinal defocus blur that correctly drives accommodation is a substantial computational challenge, and it is the challenge tackled in this paper. We designed a novel end-to-end convolutional neural network that leverages properties of human vision to perform both foveated reconstruction and view synthesis using only 1.2% of the total light field data. The proposed architecture comprises a log-polar sampling scheme followed by an interpolation stage and a convolutional neural network. To the best of our knowledge, this is the first attempt to synthesize the entire light field from sparse RGB-D inputs while simultaneously addressing foveated rendering for computational displays. Our algorithm achieves high fidelity in the fovea without any perceptible artifacts in the peripheral regions, and its foveal performance is comparable to state-of-the-art view synthesis methods despite using around 10x less light field data.
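
As a rough illustration of the first stage of such a pipeline, the sketch below resamples an image onto a log-polar grid centered at the gaze point, so pixel density is high in the fovea and falls off toward the periphery. It is a generic nearest-neighbor log-polar transform with assumed grid sizes, not the paper's actual sampling scheme; the interpolation and CNN stages are omitted.

```python
import numpy as np

def logpolar_sample(image, gaze_xy, out_shape=(64, 128), r_min=1.0):
    """Resample an image onto a log-polar grid centered at the gaze point.

    Rows index log-radius (fine near the gaze, coarse in the periphery),
    columns index angle. Nearest-neighbor lookup keeps the sketch short.
    """
    h, w = image.shape[:2]
    gx, gy = gaze_xy
    r_max = np.hypot(max(gx, w - gx), max(gy, h - gy))

    n_r, n_theta = out_shape
    radii = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_r))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)

    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip(np.round(gx + rr * np.cos(tt)).astype(int), 0, w - 1)
    ys = np.clip(np.round(gy + rr * np.sin(tt)).astype(int), 0, h - 1)
    return image[ys, xs]

# Toy usage: sample a synthetic frame around a gaze point.
rgb = np.random.rand(480, 640, 3).astype(np.float32)
foveated = logpolar_sample(rgb, gaze_xy=(400, 240))
print(foveated.shape)   # (64, 128, 3): dense near the gaze, sparse far away
```

Sampling in a space like this is one way to cover the full field of view with a small fraction of the original pixels before a network reconstructs the periphery.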

Addressing vergence-accommodation conflict in head-mounted displays (HMDs) requires resolving two interrelated problems. First, the hardware must support viewing sharp imagery over the full accommodation range of the user. Second, HMDs should accurately reproduce retinal defocus blur to correctly drive accommodation. A multitude of accommodation-supporting HMDs have been proposed, with three architectures receiving particular attention: varifocal, multifocal, and light field displays. Near-eye light field displays, for instance, address this visual discomfort by presenting accurate depth and focal cues, but they require rendering the scene from a large number of viewpoints. These designs all extend depth of focus, yet they rely on computationally expensive rendering and optimization algorithms to reproduce accurate defocus blur, often limiting content complexity and interactive applications. To date, no unified framework has been proposed to support driving these emerging HMDs using commodity content.
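
For intuition about why reproducing retinal defocus blur is expensive, here is a naive reference approach: compute a per-pixel circle of confusion from an RGB-D frame with the thin-lens model and apply a spatially varying gather blur. The focal length, aperture, pixel scale, and brute-force gather below are illustrative assumptions, and the approach ignores occlusion effects that accurate blur must handle; it only sketches the kind of cost a learned, unified approach tries to avoid.

```python
import numpy as np

def coc_radius_px(depth_m, focus_m, f_m=0.05, aperture_m=0.025, px_per_m=1.0e4):
    """Thin-lens circle-of-confusion radius in pixels for each depth sample."""
    coc_diam_m = aperture_m * np.abs(depth_m - focus_m) / depth_m * f_m / (focus_m - f_m)
    return 0.5 * coc_diam_m * px_per_m

def naive_defocus(rgb, depth, focus_m):
    """Spatially varying gather blur: average pixels whose CoC covers the output pixel.

    Deliberately brute force (O(HW * k^2)) to show why accurate blur is costly.
    """
    h, w, _ = rgb.shape
    radius = coc_radius_px(depth, focus_m)
    k = int(np.ceil(radius.max()))
    out = np.zeros_like(rgb)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - k), min(h, y + k + 1)
            x0, x1 = max(0, x - k), min(w, x + k + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            dist = np.hypot(yy - y, xx - x)
            weights = (dist <= np.maximum(radius[yy, xx], 0.5)).astype(np.float32)
            weights /= weights.sum()
            out[y, x] = (rgb[yy, xx] * weights[..., None]).sum(axis=(0, 1))
    return out

# Tiny toy frame so the brute-force loop finishes quickly.
rgb = np.random.rand(64, 64, 3).astype(np.float32)
depth = np.full((64, 64), 2.0)
depth[:, 32:] = 0.5                       # near half of the scene at 0.5 m
print(naive_defocus(rgb, depth, focus_m=0.5).shape)
```
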
In this paper, we introduce DeepFocus, a generic, end-to-end convolutional neural network designed to efficiently solve the full range of computational tasks for accommodation-supporting HMDs. This network is demonstrated to accurately synthesize defocus blur, focal stacks, multilayer decompositions, and multiview imagery using only commonly available RGB-D images, enabling real-time, near-correct depictions of retinal blur with a broad set of accommodation-supporting HMDs.
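
To ground what an end-to-end convolutional network driven by RGB-D means in practice, here is a deliberately small PyTorch stand-in: a fully convolutional model that takes an RGB-D frame plus a per-pixel plane encoding the requested focus distance and predicts a refocused image. The class name, layer sizes, focus-plane conditioning, and training target are all illustrative assumptions; this is not the published DeepFocus architecture.

```python
import torch
import torch.nn as nn

class TinyDefocusNet(nn.Module):
    """Toy fully convolutional network: (RGB + depth + focus plane) -> refocused RGB."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, rgb, depth, focus_m):
        # Broadcast the scalar focus distance to a constant per-pixel plane,
        # so a single forward pass renders one slice of a focal stack.
        b, _, h, w = rgb.shape
        focus_plane = torch.full((b, 1, h, w), focus_m, device=rgb.device)
        x = torch.cat([rgb, depth, focus_plane], dim=1)
        return self.net(x)

model = TinyDefocusNet()
rgb = torch.rand(1, 3, 128, 128)      # commodity RGB input
depth = torch.rand(1, 1, 128, 128)    # matching depth map
focal_stack = [model(rgb, depth, f) for f in (0.3, 0.5, 1.0, 2.0)]  # meters
print(focal_stack[0].shape)           # torch.Size([1, 3, 128, 128])
```

Trained against reference renders with accurate defocus, a network of this general shape could amortize the expensive per-pixel blur sketched earlier into a fixed per-frame cost; DeepFocus extends the same idea to focal stacks, multilayer decompositions, and multiview imagery.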
