Camera expands vision capabilities with lasers
Stanford University researchers have developed a camera system built around a high-powered laser: single particles of reflected light are captured by the camera’s advanced sensors and reconstructed into an image by a processing algorithm.
David Lindell, a graduate student at Stanford University, donned a high-visibility tracksuit and got to work, stretching, pacing and hopping across an empty room. Through a camera aimed away from Lindell, at what appeared to be a blank wall, his colleagues could watch his every move.
That’s because Lindell was being scanned by a high-powered laser invisible to the naked eye. The single particles of light he reflected onto the walls around him were captured and reconstructed by the camera’s advanced sensors and processing algorithm.
“People talk about building a camera that can see as well as humans for applications such as autonomous cars and robots, but we want to build systems that go well beyond that,” said Gordon Wetzstein, an assistant professor of electrical engineering at Stanford. “We want to see things in 3-D, around corners and beyond the visible light spectrum.”
The camera system Lindell tested builds on previous around-the-corner cameras the Stanford team developed. It captures more light from a greater variety of surfaces, sees wider and farther, and is fast enough to monitor out-of-sight movement – such as Lindell’s calisthenics – for the first time. Someday, the researchers hope, superhuman vision systems could help autonomous cars and robots operate even more safely than they would with human guidance.
Keeping their system practical is a high priority for these researchers. The hardware they chose, the scanning and image-processing speeds, and the style of imaging are already common in autonomous car vision systems. Previous systems for viewing scenes outside a camera’s line of sight relied on objects that reflect light either evenly or strongly. Because real-world objects, including shiny cars, fall outside these categories, the new system was designed to handle light bouncing off a range of surfaces, including disco balls, books and intricately textured statues.
Powerful laser developed
Central to their advance was a laser 10,000 times more powerful than what they were using a year ago. The laser scans a wall opposite the scene of interest; that light bounces off the wall, hits the objects in the scene, bounces back to the wall and returns to the camera sensors. By the time the laser light reaches the camera, only specks remain, but the sensor captures every one, sending it along to a highly efficient algorithm, also developed by this team, that untangles these echoes of light to decipher the hidden tableau.
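The three-bounce geometry can be illustrated with a toy backprojection, a much simpler cousin of the team’s actual algorithm: each photon echo constrains the hidden object to lie at a fixed distance from the wall point the laser hit, and accumulating those constraints over a grid of candidate positions localizes the object. Everything below – the 2-D layout, the confocal wall points, the shell tolerance – is a hypothetical simplification for illustration, not the researchers’ code.

```python
import numpy as np

# Toy non-line-of-sight backprojection (hypothetical 2-D geometry).
# Assume a confocal setup: the laser and sensor share each wall point,
# so light travels wall -> hidden object -> back to the same wall point.
c = 3e8  # speed of light (m/s)

# Sample points along a 1-D strip of the visible wall (at y = 0)
wall_x = np.linspace(-1.0, 1.0, 64)
hidden = np.array([0.3, 0.8])  # true hidden-object position (unknown to the camera)

# Simulated measurement: round-trip time of each photon echo
r_true = np.hypot(wall_x - hidden[0], hidden[1])
times = 2 * r_true / c  # what a single-photon sensor would record

# Backprojection: each echo says "the object is about r away from this
# wall point"; vote over a grid of candidates and take the peak.
xs = np.linspace(-1.0, 1.0, 101)
ys = np.linspace(0.1, 1.5, 71)
X, Y = np.meshgrid(xs, ys)
votes = np.zeros_like(X)
for wx, t in zip(wall_x, times):
    r = c * t / 2          # distance implied by this echo
    d = np.hypot(X - wx, Y)  # candidate distances to this wall point
    votes += np.abs(d - r) < 0.02  # thin circular shell of candidates

iy, ix = np.unravel_index(np.argmax(votes), votes.shape)
print(xs[ix], ys[iy])  # peak lands near the true position (0.3, 0.8)
```

The real system must also cope with sensor noise, surface reflectance and millions of grid cells, which is why the team’s dedicated reconstruction algorithm matters; this sketch only shows why the round-trip timing of each speck of light is informative at all.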
“When you’re watching the laser scanning it out, you don’t see anything,” Lindell said. “With this hardware, we can basically slow down time and reveal these tracks of light. It almost looks like magic.”
The system can scan at four frames per second (FPS) and can reconstruct a scene at speeds of 60 FPS on a computer equipped with a graphics processing unit (GPU).
To advance their algorithm, the team looked to other fields for inspiration. The researchers were particularly drawn to seismic imaging systems, which bounce sound waves off underground layers of Earth to learn what’s beneath the surface, and reconfigured their algorithm to likewise interpret bouncing light as waves emanating from the hidden objects. The result retained the algorithm’s speed and low memory use while improving its ability to see large scenes containing varied materials.
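The seismic analogy can be reduced to its simplest form: a reflection recorded at time t maps back to the depth that produced it via the two-way travel relation d = v·t/2, because the wave travels down to the reflector and back. The same two-way logic underlies mapping light echoes back to hidden surfaces. A minimal sketch, with made-up arrival times and an assumed wave speed:

```python
import numpy as np

# Seismic-style depth mapping: an echo recorded at time t comes from
# depth d = v * t / 2, since the wave makes a round trip.
v = 1500.0  # assumed wave speed (m/s), roughly sound in water
echo_times = np.array([0.010, 0.024, 0.041])  # hypothetical arrival times (s)
depths = v * echo_times / 2
print(depths)  # depths in metres: 7.5, 18.0, 30.75
```

Real seismic migration (and the team’s light-based counterpart) generalizes this idea to full wavefields rather than isolated arrival times, but the round-trip conversion is the common core.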
“There are many ideas being used in other spaces – seismology, imaging with satellites, synthetic aperture radar – that are applicable to looking around corners,” said Matthew O’Toole, an assistant professor at Carnegie Mellon University (CMU) who was previously a postdoctoral fellow in Wetzstein’s lab. “We’re trying to take a little bit from these fields and we’ll hopefully be able to give something back to them at some point.”
Looking ahead
Being able to see real-time movement from otherwise invisible light bounced around a corner was a thrilling moment for this team, but a practical system for autonomous cars or robots will require further enhancements.
“It’s very humble steps. The movement still looks low-resolution and it’s not super-fast but compared to the state-of-the-art last year it is a significant improvement,” Wetzstein said. “We were blown away the first time we saw these results because we’ve captured data that nobody’s seen before.”
The team hopes to move toward testing their system on autonomous research cars, while looking into other possible applications, such as medical imaging that can see through tissues. Among other improvements to speed and resolution, they’ll also work on making their system even more versatile to address challenging visual conditions that drivers encounter, such as fog, rain, sandstorms and snow.
This content originally appeared on ISSSource.com. ISSSource is a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, cvavra@cfemedia.com.