Experimental camera can focus on multiple planes simultaneously

Foreground, middle ground, and background appear sharp in the same photo

In focus: Photographers have always had to consider angle and focus, as traditional lenses can capture only one plane sharply at a time. Adjusting the aperture can mitigate this limitation, but only up to a point. Researchers believe they can address this long-standing shortcoming through computational lensing, a technique that could bring entire scenes into sharp focus.

Researchers from Carnegie Mellon University's College of Engineering have proposed a new photography method that eliminates the need to choose between focusing on the foreground, midground, or background. By combining previously explored technologies, the team developed a "computational lens" that can selectively focus on objects at different distances within the same image.

Photographs appear blurry outside the central focus area because traditional lenses can sharply capture only one focal plane at a time. This limitation is why distance and camera angle are critical considerations in photography. While narrowing the aperture expands the depth of field, it also reduces the light reaching the sensor and, at small enough apertures, introduces diffraction that softens the image.
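To see why stopping down only goes so far, the standard thin-lens depth-of-field formulas make the trade-off concrete. The sketch below is textbook optics rather than anything from the CMU paper, and the lens and scene numbers in it are illustrative assumptions:

```python
# Standard thin-lens depth-of-field approximation. All numbers below
# (50 mm lens, 3 m subject, 0.03 mm circle of confusion) are
# illustrative assumptions, not values from the CMU work.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return (near, far) limits of acceptable sharpness in mm."""
    # Hyperfocal distance: H = f^2 / (N * c) + f
    H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    far = (float("inf") if subject_mm >= H
           else subject_mm * (H - focal_mm) / (H - subject_mm))
    return near, far

for N in (2, 8, 16):
    near, far = depth_of_field(50, N, 3000)
    far_txt = "infinity" if far == float("inf") else f"{far / 1000:.2f} m"
    print(f"f/{N}: sharp from {near / 1000:.2f} m to {far_txt}")
```

Even at f/16, where diffraction is already eating into fine detail, the sharp zone around a 3-meter subject spans only a few meters: no single focal plane covers an entire scene.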

The computational lens addresses this limitation by building on a concept known as the Lohmann lens, which adjusts focus by shifting two curved, cubic lenses relative to each other. The researchers combined the Lohmann lens with a phase-only spatial light modulator that bends light differently at each pixel, allowing different regions of a scene to be focused at varying depths. The result – called the Split-Lohmann lens – was inspired by earlier research into virtual reality headset displays.
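The Lohmann idea itself is compact enough to verify numerically: two complementary cubic phase plates, slid laterally against each other, sum to a quadratic phase, which is exactly the profile of a lens, and the lens power grows linearly with the slide. The sketch below is a minimal one-dimensional check of that identity, with an arbitrary cubic strength as its only assumption; it is not the team's optical model:

```python
import numpy as np

# Two opposite cubic phase plates, shifted by +/- d, combine into a
# quadratic (lens-like) phase whose curvature grows with the shift d.
a = 1e-3                       # cubic strength (arbitrary, an assumption)
x = np.linspace(-5, 5, 1001)   # aperture coordinate

def combined_phase(d):
    """Phase of plate1 shifted by +d plus plate2 shifted by -d."""
    # Algebraically: a(x+d)^3 - a(x-d)^3 = 6*a*d*x^2 + 2*a*d^3
    return a * (x + d) ** 3 - a * (x - d) ** 3

for d in (0.5, 1.0, 2.0):
    coeffs = np.polyfit(x, combined_phase(d), 2)
    print(f"shift d={d}: quadratic coefficient {coeffs[0]:.4f} "
          f"(expected {6 * a * d:.4f})")
```

Loosely speaking, the spatial light modulator generalizes this one global shift into many local ones, so that each region of the scene can be assigned its own effective focal power.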

To achieve this, the system first divides an image into regions known as superpixels and runs contrast-detection autofocus within each one, so every region independently settles on the focus depth that produces maximum sharpness. It then applies phase-detection autofocus (PDAF), using a dual-pixel sensor to identify what is currently in focus and the direction in which focus should be adjusted. One of the researchers likened the approach to giving each pixel its own lens.
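As a rough sketch of the contrast-detection half of that pipeline (my illustration, not the authors' code), square tiles stand in for superpixels, a focal stack stands in for the focus sweep, and each tile simply keeps the depth at which its gradient energy peaks:

```python
import numpy as np

def sharpness(tile: np.ndarray) -> float:
    """Gradient-energy sharpness metric for one image tile."""
    gy, gx = np.gradient(tile.astype(float))
    return float(np.var(gx) + np.var(gy))

def focus_map(stack: np.ndarray, depths: np.ndarray, tile: int = 32):
    """stack: (n_depths, H, W) focal stack; returns per-tile best depth."""
    n, H, W = stack.shape
    out = np.zeros((H // tile, W // tile))
    for i in range(H // tile):
        for j in range(W // tile):
            ys = slice(i * tile, (i + 1) * tile)
            xs = slice(j * tile, (j + 1) * tile)
            scores = [sharpness(stack[k, ys, xs]) for k in range(n)]
            out[i, j] = depths[int(np.argmax(scores))]
    return out

# Toy usage: random images standing in for a real focal stack.
rng = np.random.default_rng(0)
stack = rng.random((5, 128, 128))
print(focus_map(stack, np.linspace(0.5, 3.0, 5)))
```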

The university reports that PDAF also makes computational lensing viable for capturing moving subjects. Using the technique, the researchers were able to record perfectly focused images at up to 21 frames per second.
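Phase detection is what removes the need for a slow focus sweep: the dual-pixel sensor produces left and right sub-images, and their relative shift reveals both how far a region is out of focus and in which direction to correct. A toy version of that disparity estimate, using synthetic signals in place of real dual-pixel data, might look like this:

```python
import numpy as np

# Estimate the shift between left and right dual-pixel sub-images by
# cross-correlation. The sign of the shift says which way to refocus;
# the magnitude says how far. Signals here are synthetic assumptions.
rng = np.random.default_rng(1)
scene = rng.random(256)

true_shift = 4                      # defocus-induced disparity in pixels
left = scene
right = np.roll(scene, true_shift)  # stand-in for the other sub-image

lags = np.arange(-16, 17)
scores = [np.dot(left, np.roll(right, -lag)) for lag in lags]
estimated = lags[int(np.argmax(scores))]
print(f"estimated disparity: {estimated} px (true: {true_shift})")
```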

Freeform depth-of-field photography is the most obvious application of computational lensing. The technology can not only render an entire scene in sharp focus, but also selectively blur specific regions to obscure objects or simulate a tilt-shift effect without physically tilting a lens's optics. Beyond photography, microscopes could use the approach to focus on multiple layers of a sample simultaneously, while automated camera systems could benefit from improved overall image quality.
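The selective-blur idea can be mimicked in post-processing to build intuition. The sketch below is only an image-space stand-in for what the optics do directly: given a depth map, each pixel is drawn from a copy of the image blurred by an amount chosen from its depth, so a near band stays sharp while farther regions defocus, much like a tilt-shift look. All names and values here are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def freeform_blur(image, depth, sigma_for_depth):
    """image: (H, W) grayscale; depth: (H, W); sigma_for_depth: depth -> sigma."""
    targets = np.round(sigma_for_depth(depth), 1)
    out = np.zeros_like(image, dtype=float)
    for s in np.unique(targets):
        mask = np.isclose(targets, s)
        blurred = image if s == 0 else gaussian_filter(image.astype(float), s)
        out[mask] = blurred[mask]
    return out

# Toy scene: depth grows left to right; the near edge stays sharp
# while blur increases with depth, with no optics moved at all.
img = np.random.default_rng(2).random((64, 64))
depth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
result = freeform_blur(img, depth, lambda d: 4.0 * d)
print(result.shape)
```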