Sensors, Vision

Artificial neural networks, sensor computing speeds up machine vision

One of the greatest challenges in replicating biological systems is creating accurate and reliable artificial machine vision.

By AIA June 15, 2020
Courtesy: CFE Media

Sight is a vital sense for living creatures. Our eyes give us seemingly infinite information about the world around us. One of the greatest challenges in replicating biological systems is creating truly accurate and reliable artificial machine vision. The technology is developing rapidly, but human vision remains more efficient.

This is because while it is easy enough to build highly sophisticated cameras, microscopes, and telescopes, it is difficult to approximate the brain's ability to make sense of visual data and to classify and predict based on it. The brain has had millions of years to evolve this level of complexity; machine vision is very new by comparison.

In-sensor computing to speed up machine vision

One research team at the Vienna University of Technology is developing a way to speed up machine vision. In current systems, an image sensor responds to light, a separate device digitizes the signal, and the data is then processed externally, often in the cloud. This pipeline works, but moving and processing large amounts of data across multiple devices is inefficient.

Their solution: cut out the middleman through in-sensor computing. In this approach, the image sensor itself begins to process the data, eliminating one of the steps in the machine vision pipeline.
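As an illustrative sketch only (not the researchers' actual implementation), in-sensor computing can be pictured as the photodetector array itself evaluating a neural network layer: each diode's responsivity plays the role of a weight, and summing the resulting photocurrents per output line yields the weighted sums directly, with no separate digitize-then-process step. All names and values below are hypothetical.

```python
import numpy as np

# Hypothetical in-sensor "layer": each photodiode's responsivity acts
# as a programmable weight, so summing photocurrents per output line
# performs the matrix multiply inside the sensor itself.
rng = np.random.default_rng(0)

n_pixels = 9    # a 3x3 photodiode array
n_classes = 3   # number of output lines (neurons)

# Light intensities falling on the array (the "image")
intensities = rng.random(n_pixels)

# Responsivities programmed into the diodes: one weight per
# (output line, pixel) pair
responsivities = rng.normal(size=(n_classes, n_pixels))

# Each output line sums the photocurrents of its diodes:
# current_k = sum_i R[k, i] * intensity[i] -- a dot product computed
# "inside the sensor" rather than by an external processor.
currents = responsivities @ intensities

prediction = int(np.argmax(currents))
print("output currents:", currents)
print("predicted class:", prediction)
```

The point of the sketch is that only the final currents (or the winning class) need to leave the sensor, rather than every raw pixel value.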

Neural networks for in-sensor computing

This system was made possible by the adoption of neural networks: computing architectures whose highly interconnected elements can operate in parallel, much as the brain's neurons do.

Neural networks can learn from their surroundings, which makes them a strong candidate for in-sensor computing, because the image sensor is the part of the system that actually gathers data from the environment.

This technology is still at an early phase of development, though the researchers have already used their sensor to identify a series of printed letters. The implications of this technology, once it comes to full fruition, are vast.
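To make the letter-recognition idea concrete, here is a toy sketch (not the researchers' setup) of a single-layer network learning to tell two 3x3 "printed letters" apart with a simple perceptron update rule; the letter patterns and learning rate are invented for illustration.

```python
import numpy as np

# Two made-up 3x3 binary "printed letters", flattened to 9 values
T = np.array([1, 1, 1,
              0, 1, 0,
              0, 1, 0], dtype=float)
L = np.array([1, 0, 0,
              1, 0, 0,
              1, 1, 1], dtype=float)
X = np.stack([T, L])
y = np.array([1.0, -1.0])   # label: +1 for 'T', -1 for 'L'

w = np.zeros(9)   # one weight per pixel
b = 0.0
lr = 0.1
for _ in range(20):                 # perceptron-style training loop
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:  # sample misclassified: nudge weights
            w += lr * yi * xi
            b += lr * yi

def predict(x):
    return 1 if (w @ x + b) > 0 else -1

print(predict(T), predict(L))   # prints: 1 -1
```

A sensor-level version of the same idea would encode `w` in the pixels' responsivities, so the classification happens as the light is measured.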

The ability of image sensors to process their own data could matter for driverless vehicles and industrial manufacturing.

In the life sciences, the technology could have major medical implications. Because of its capacity for capturing dynamic, three-dimensional images across a wide field of view, it could lead to a significant improvement in medical imaging, saving lives by enabling better and earlier diagnosis of illnesses and injuries.

This article originally appeared in Vision Online. AIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner.
