29th November 2021
Ultracompact camera is the size of a salt grain
U.S. researchers have developed a new imaging device just 500 μm (0.5 mm) in diameter. The system can produce crisp, full-colour images on a par with conventional compound camera lenses 500,000 times larger in volume.
Micro-sized cameras have great potential to spot problems in the human body and enable sensing for super-small robots, but past approaches captured only fuzzy, distorted images with limited fields of view.
Now, researchers at Princeton University and the University of Washington have overcome these obstacles with an ultracompact camera the size of a coarse grain of salt. A paper describing the breakthrough is published today in Nature Communications.
Made possible by a joint design of the camera's hardware and computational processing, the system could enable minimally invasive endoscopy with medical robots to diagnose and treat diseases, and improve imaging for other robots with size and weight constraints. Arrays of thousands of such cameras could be used for full-scene sensing, turning surfaces into cameras.
While traditional cameras use a series of curved glass or plastic lenses to bend light rays into focus, the new optical system relies on a technology called a metasurface, which can be produced much like a computer chip. Just half a millimetre wide, its surface is studded with 1.6 million cylindrical "nanoposts" – each roughly the size of the human immunodeficiency virus (HIV).
Each nanopost has a unique geometry and functions like an optical antenna. Varying the design of each nanopost is necessary for correctly shaping the entire optical wavefront. With the help of machine learning-based algorithms, the posts' interactions with light combine to produce the highest-quality images and the widest field of view of any full-colour metasurface camera developed to date.
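The paper's design pipeline is far more elaborate, but the core idea of gradient-based inverse design can be illustrated with a toy analogue (a sketch, not the authors' method, and all numbers below are made up for illustration): treat each "post" as contributing a unit wave with an adjustable phase, and ascend the gradient of the on-axis intensity until the contributions interfere constructively – loosely how nanopost parameters are tuned to shape a wavefront toward a design goal.

```python
import numpy as np

# Toy inverse design: each of N "posts" contributes a unit-amplitude wave
# with phase phi_j, and the on-axis intensity is I = |sum_j exp(i*phi_j)|^2.
# Gradient ascent on I gradually aligns the phases -- a drastically
# simplified analogue of optimising nanopost geometries.
rng = np.random.default_rng(1)
n_posts = 256
phi = rng.uniform(0.0, 2.0 * np.pi, n_posts)

def intensity(phi):
    # Coherent sum of all contributions, measured at a single point.
    return np.abs(np.exp(1j * phi).sum()) ** 2

i_start = intensity(phi)       # random phases: weak, incoherent sum
lr = 0.001                     # small step keeps the ascent stable
for _ in range(500):
    s = np.exp(1j * phi).sum()
    # Analytic gradient: dI/dphi_j = 2 * Im(exp(-i*phi_j) * s)
    phi += lr * 2.0 * np.imag(np.exp(-1j * phi) * s)

i_end = intensity(phi)         # approaches the coherent limit N^2
```

After optimisation the phases align and the intensity approaches the coherent maximum of N², far above the random starting point.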
The integrated design of the optical surface and the signal processing algorithms – key innovations in the camera's creation – helped to boost the camera's performance in natural light conditions. By contrast, previous metasurface cameras required the pure laser light of a laboratory or other ideal conditions to produce high-quality images, according to Felix Heide, the study's senior author and Assistant Professor of Computer Science at Princeton.
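In the actual system the post-processing is a learned neural network; as a simpler classical stand-in, Wiener deconvolution shows how software can undo a known optical blur. The sketch below substitutes a Gaussian blur and a hand-picked regulariser for the paper's learned reconstruction – it is illustrative only, not the authors' algorithm.

```python
import numpy as np

# Blur a test scene with a known Gaussian PSF, then restore it with a
# Wiener filter -- a classical stand-in for the learned reconstruction
# network used in the paper.
n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * 2.0 ** 2))
psf /= psf.sum()                           # unit-energy blur kernel

scene = np.zeros((n, n))
scene[20:44, 20:44] = 1.0                  # a bright square as the scene

H = np.fft.fft2(np.fft.ifftshift(psf))     # optical transfer function
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

# Wiener filter: conj(H) / (|H|^2 + k). The regulariser k keeps the
# filter from amplifying noise at frequencies the optics barely pass.
k = 1e-3
W = np.conj(H) / (np.abs(H) ** 2 + k)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
```

The restored image is substantially closer to the original scene than the blurred measurement, which is the basic bargain of computational imaging: accept imperfect optics, then correct for them in software.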
Previous metasurface lenses have also suffered from major image distortions, small fields of view, and limited ability to capture the full spectrum of visible light – referred to as RGB imaging because it combines red, green and blue to produce different hues.
"It's been a challenge to design and configure these little microstructures to do what you want," said Ethan Tseng, a computer science PhD student at Princeton who co-led the study. "For this specific task of capturing large field of view RGB images, it was previously unclear how to co-design the millions of nano-structures together with post-processing algorithms."
Shane Colburn, co-lead author, tackled this challenge by creating a computational simulator to automate testing for different nano-antenna configurations. Because of the number of antennas and the complexity of their interactions with light, this type of simulation can use "massive amounts of memory and time," said Colburn. He developed a model to efficiently approximate the metasurfaces' image production capabilities with sufficient accuracy.
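Colburn's simulator models the nano-antennas' responses in detail; a far cruder textbook approximation, offered here only as an illustrative sketch (not the paper's code), treats the metasurface as a thin phase mask whose point-spread function is the squared magnitude of a Fourier transform, and forms images by convolving the scene with that PSF.

```python
import numpy as np

def psf_from_phase(phase):
    # Thin-phase-mask approximation: the surface delays the incoming
    # wavefront by `phase` (radians), and the far-field point-spread
    # function is |FFT of the pupil|^2 (Fraunhofer diffraction).
    pupil = np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(pupil))
    psf = np.abs(field) ** 2
    return psf / psf.sum()                  # normalise to unit energy

def render(scene, psf):
    # Shift-invariant image formation: convolve the scene with the PSF
    # (a pointwise product in the Fourier domain).
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

n = 64
rng = np.random.default_rng(0)
flat_psf = psf_from_phase(np.zeros((n, n)))                   # ideal aperture
rand_psf = psf_from_phase(rng.uniform(0, 2 * np.pi, (n, n)))  # scattered

scene = np.zeros((n, n))
scene[n // 2, n // 2] = 1.0                 # a single point source
image = render(scene, rand_psf)             # diffuse blur of the point
```

Even this crude forward model shows why design matters: a uniform phase focuses a plane wave to a tight spot, while an unoptimised random phase scatters the light into a diffuse blur. Making such a model fast and differentiable is what lets the phase pattern and the reconstruction be optimised together.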
Co-author James Whitehead, a PhD student at the University of Washington, fabricated the metasurfaces, which are based on silicon nitride, a glass-like material that is compatible with standard semiconductor manufacturing methods used for computer chips. This means that a given metasurface design could be easily mass-produced at lower cost than the lenses in conventional cameras.
Credit: Tseng et al. / Nature
"Although the approach to optical design is not new, this is the first system that uses a surface optical technology in the front end and neural-based processing in the back," said Joseph Mait, a consultant at Mait-Optik and a former senior researcher and chief scientist at the U.S. Army Research Laboratory.
"The significance of the published work is completing the Herculean task to jointly design the size, shape and location of the metasurface's million features and the parameters of the post-detection processing to achieve the desired imaging performance," added Mait, who was not involved in the study.
Heide and his colleagues are now working to add more computational abilities to the camera itself. Beyond optimising image quality, they would like to add capabilities for object detection and other sensing modalities relevant for medicine and robotics.
In the future, Heide also envisions using ultracompact imagers to create entire surfaces that function as sensors: "We could turn individual surfaces into cameras that have ultra-high resolution, so you wouldn't need three cameras on the back of your phone anymore, but the whole back of your phone would become one giant camera. We can think of completely different ways to build devices in the future," he said.