New lensless camera creates detailed 3-D images without scanning

The lensless DiffuserCam consists of a diffuser placed in front of a sensor (bumps on the diffuser are exaggerated for illustration). The system turns a 3-D scene into a 2-D image on the sensor. After a one-time calibration, an algorithm is used to reconstruct 3-D images computationally. The result is a 3-D image reconstructed from a single 2-D measurement. Credit: Laura Waller, University of California, Berkeley

Researchers have developed an easy-to-build camera that produces 3D images from a single 2D image without any lenses. In an initial application of the technology, the researchers plan to use the new camera, which they call DiffuserCam, to watch microscopic neuron activity in living mice without a microscope. Ultimately, it could prove useful for a wide range of applications involving 3D capture.

The camera is compact and inexpensive to construct because it consists of only a diffuser - essentially a bumpy piece of plastic - placed on top of an image sensor. Although the hardware is simple, the software it uses to reconstruct high-resolution 3D images is very complex.
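To give a sense of how a single 2-D measurement can encode a 3-D scene, here is a minimal, illustrative forward-model sketch in Python. It assumes, in line with the description above, that the diffuser maps a point of light at each depth to a known caustic pattern (a point spread function, or PSF) and that the sensor image is the sum of each depth plane blurred by its PSF; the function names and array shapes are ours for illustration, not the authors' code.

    # Illustrative forward model: a 3-D volume becomes one 2-D sensor image.
    # Assumption (ours): each depth plane is convolved with a calibrated,
    # depth-dependent diffuser PSF and the results sum on the sensor.
    import numpy as np
    from numpy.fft import fft2, ifft2

    def diffusercam_forward(volume, psfs):
        """volume: (depths, H, W) array of scene intensities.
        psfs:   (depths, H, W) array of calibrated diffuser PSFs.
        Returns a single (H, W) sensor measurement."""
        sensor = np.zeros(volume.shape[1:])
        for plane, psf in zip(volume, psfs):
            # FFT-based circular convolution stands in for the optical blur
            sensor += np.real(ifft2(fft2(plane) * fft2(psf)))
        return sensor

Reconstruction then amounts to inverting this many-to-one mapping, which is where the heavy computation lives.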

"The DiffuserCam can, in a single shot, capture 3D information in a large volume with high resolution," said the research team leader Laura Waller, University of California, Berkeley. "We think the camera could be useful for self-driving cars, where the 3D information can offer a sense of scale, or it could be used with machine learning algorithms to perform face detection, track people or automatically classify objects."

In Optica, The Optical Society's journal for high impact research, the researchers show that the DiffuserCam can be used to reconstruct 100 million voxels, or 3D pixels, from a 1.3-megapixel (1.3 million pixels) image without any scanning. For comparison, the iPhone X camera takes 12-megapixel photos. The researchers used the camera to capture the 3D structure of leaves from a small plant.

"Our new camera is a great example of what can be accomplished with computational imaging—an approach that examines how hardware and software can be used together to design imaging systems," said Waller. "We made a concerted effort to keep the hardware extremely simple and inexpensive. Although the software is very complicated, it can also be easily replicated or distributed, allowing others to create this type of camera at home."

A DiffuserCam can be created using any type of image sensor and can image objects that range from microscopic in scale all the way up to the size of a person. It offers a resolution in the tens of microns range when imaging objects close to the sensor. Although the resolution decreases when imaging a scene farther away from the sensor, it is still high enough to distinguish that one person is standing several feet closer to the camera than another person, for example.

The researchers used the DiffuserCam to reconstruct the 3-D structure of leaves from a small plant. The new camera can reconstruct 100 million voxels, or 3-D pixels, from a 1.3-megapixel image without any scanning. Credit: Nick Antipa and Grace Kuo, University of California, Berkeley

A simple approach to complex imaging

The DiffuserCam is a relative of the light field camera, which captures both how much light strikes each pixel on the sensor and the angle from which that light arrives. In a typical light field camera, an array of tiny lenses placed in front of the sensor is used to capture the direction of the incoming light, allowing computational approaches to refocus the image and create 3D images without the scanning steps typically required to obtain 3D information.

Until now, light field cameras have been limited in spatial resolution because some spatial information is lost while collecting the directional information. Another drawback of these cameras is that the microlens arrays are expensive and must be customized for a particular camera or optical components used for imaging.

"I wanted to see if we could achieve the same imaging capabilities using simple and cheap hardware," said Waller. "If we have better algorithms, could the carefully designed, expensive microlens arrays be replaced with a plastic surface with a random pattern such as a bumpy piece of plastic?"

After experimenting with various types of diffusers and developing the complex algorithms, Nick Antipa and Grace Kuo, students in Waller's lab, showed that Waller's idea for a simple light field camera was feasible. In fact, the random bumps in privacy glass stickers, Scotch tape or plastic conference badge holders allowed the researchers to improve on traditional light field camera capabilities, using compressed sensing to avoid the loss of resolution that typically comes with microlens arrays.
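As a rough illustration of the compressed-sensing idea, the sketch below runs a generic proximal-gradient (ISTA) loop that recovers a sparse, non-negative 3-D volume from a single 2-D measurement, given a forward model and its adjoint. This is our simplified stand-in, not the authors' solver, which is considerably more sophisticated.

    # Generic ISTA sketch (not the authors' algorithm): approximately solve
    #   minimize 0.5*||A(x) - y||^2 + lam*||x||_1   subject to x >= 0,
    # where A maps a 3-D volume to a 2-D sensor image and At is its adjoint.
    import numpy as np

    def ista_reconstruct(A, At, y, volume_shape, lam=0.01, step=1e-3, iters=200):
        x = np.zeros(volume_shape)
        for _ in range(iters):
            grad = At(A(x) - y)                # gradient of the data-fit term
            x = x - step * grad                # gradient descent step
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # L1 prox
            x = np.maximum(x, 0.0)             # keep intensities non-negative
        return x

Here A could be the forward model sketched earlier and At its adjoint (correlation with each PSF). The sparsity prior, the L1 term, is what lets far more voxels be estimated than there are sensor pixels, for example 100 million voxels from a 1.3-megapixel image.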

Although other cameras use lens arrays that are precisely designed and aligned, the exact size and shape of the bumps in the new camera's diffuser are unknown. This means that a few images of a moving point of light must be acquired to calibrate the software prior to imaging. The researchers are working on a way to eliminate this calibration step by using the raw data for calibration. They also want to improve the accuracy of the software and make the 3D reconstruction faster.
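The calibration step described above can be pictured as recording the diffuser's response to a point of light at each depth of interest. The sketch below is hypothetical: move_point_source and capture_frame are placeholders for whatever hardware control the setup provides, not a real API.

    # Hypothetical calibration sketch: record one PSF per depth by imaging a
    # moving point source. move_point_source() and capture_frame() are
    # placeholders for hardware control, not an actual library API.
    import numpy as np

    def calibrate_psfs(depths_mm, move_point_source, capture_frame):
        psfs = []
        for z in depths_mm:
            move_point_source(z)                  # position the point at depth z
            frame = capture_frame()               # record the caustic pattern
            psfs.append(frame / frame.sum())      # normalize each PSF's energy
        return np.stack(psfs)                     # shape: (depths, H, W)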

The researchers used the DiffuserCam to reconstruct the 3-D structure of leaves from a small plant. They plan to use the new camera to watch neurons fire in living mice without using a microscope. Credit: Laura Waller, University of California, Berkeley

No microscope required

The DiffuserCam will be used in a project at the University of California, Berkeley, that aims to watch a million individual neurons while stimulating 1,000 of them with single-cell accuracy. The project is funded by DARPA's Neural Engineering System Design program - part of the federal government's BRAIN Initiative - to develop implantable, biocompatible neural interfaces that could eventually compensate for visual or hearing deficits.

As a first step, the researchers want to create what they call a cortical modem that will "read" and "write" to the brains of animal models, much like the input-output activity of internet modems. The DiffuserCam will be the heart of the reading device for this project, which will also use special proteins that allow scientists to control neuronal activity with light.

"Using this to watch neurons fire in a mouse brain could in the future help us understand more about sensory perception and provide knowledge that could be used to cure diseases like Alzheimer's or mental disorders," said Waller.

Although newly developed imaging techniques can capture hundreds of neurons firing, how the brain works on larger scales is not fully understood. The DiffuserCam has the potential to provide that insight by imaging millions of neurons in a single shot. Because the camera is lightweight and requires no microscope or objective lens, it can be attached to a transparent window in a mouse's skull, allowing neuronal activity to be linked with behavior. Several sensor arrays with overlying diffusers could be tiled to image large areas.

A need for interdisciplinary designers

"Our work shows that computational imaging can be a creative process that examines all parts of the optical design and algorithm design to create optical systems that accomplish things that couldn't be done before or to use a simpler approach to something that could be done before," Waller said. "This is a very powerful direction for imaging, but requires designers with optical and physics expertise as well as computational knowledge."

The new Berkeley Center for Computational Imaging, headed by Waller, is working to train more scientists in this interdisciplinary field. Scientists from the center also meet weekly with bioengineers, physicists and electrical engineers as well as experts in signal processing and machine learning to exchange ideas and to better understand the imaging needs of other fields.

More information: N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, L. Waller, "DiffuserCam: Lensless Single-Exposure 3D Imaging," Optica, Volume 5, Issue 1, 1-9 (2017).
DOI: 10.1364/OPTICA.5.000001

Journal information: Optica

