Convolutional neural networks help embedded vision applications

Convolutional neural networks (CNNs) can replicate aspects of the human thought process and, combined with embedded vision, automate tasks that once required a human eye.

By AIA March 23, 2020

In their quest to give machines the comprehension and thinking abilities of humans, scientists have created convolutional neural networks (CNNs). Although they require highly complex algorithms that can mimic the thought processes of the human mind, they aren’t difficult to understand. Convolutional doesn’t mean convoluted.

Helping machines to think like humans

Leaders in the fields of convolutional neural networks and embedded vision don’t just want to make machines faster. They want to make machines smarter. The goal is to help machines see the world as humans do, perceive it in the same way, and then apply the knowledge they have. Some uses include image recognition, image classification, and natural language processing.

As humans, we constantly analyze our environment. We also label and make predictions about what we see. For example, you likely know where your closest exit is, and every door on your way into the building. Your brain has that information ready in case of an emergency. But do you remember the number of fluorescent lights you passed on the way to your office this morning?

Your brain handles this workload automatically. It’s hardwired into your DNA. It might seem simple, but programming this process into a CNN is quite challenging. CNNs must take input, identify what’s important, classify it, predict what to do next, and then act. But how does this artificial intelligence relate to embedded vision?

Convolutional neural networks and embedded vision

CNNs can be trained to see the world in the way that humans do. Instead of having to train a machine on every characteristic of every object it may encounter, a CNN can automate the process. Using embedded vision, machines can recognize objects by learning from huge datasets of labeled examples. They can even adjust their classification parameters when they make mistakes.
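That mistake-driven adjustment can be sketched with the classic perceptron update rule, a far simpler learner than a CNN but built on the same idea: when a prediction is wrong, nudge the parameters toward the correct answer. The toy data and learning rate below are illustrative, not drawn from the article:

```python
# Perceptron sketch: adjust weights only when the prediction is wrong.
# CNNs do something analogous at much larger scale via backpropagation.
def predict(weights, bias, features):
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def train(samples, labels, epochs=10, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in zip(samples, labels):
            error = label - predict(weights, bias, features)
            if error:  # a mistake: shift the decision boundary
                weights = [w + lr * error * x
                           for w, x in zip(weights, features)]
                bias += lr * error
    return weights, bias

# Hypothetical toy data: label points by whether they sit "high" or "low".
samples = [[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [0.9, 0.8]]
labels = [0, 1, 0, 1]
weights, bias = train(samples, labels)
print([predict(weights, bias, s) for s in samples])  # prints [0, 1, 0, 1]
```

After a few passes the updates stop, because every training point is classified correctly; a CNN's training loop follows the same error-correcting rhythm, just with millions of parameters.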

Convolutional neural networks are linked to vision even in their structure: their layers are organized in three dimensions, with a width, a height, and a depth. Each artificial neuron connects only to nearby neurons in the previous layer. CNNs use an operation called convolution, sliding a filter across the input, and then map out probabilities for the objects the network thinks it sees.
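The convolution step itself is small enough to sketch directly. The 3x3 filter and the tiny grayscale image below are illustrative; in a real CNN the filter values are learned during training rather than written by hand:

```python
# Minimal 2D "valid" convolution: slide a filter over a grayscale image
# and sum the element-wise products at each position.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        output.append(row)
    return output

# A 4x4 image with a dark left half and a bright right half,
# plus a hand-written vertical-edge filter.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
edge_filter = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(convolve2d(image, edge_filter))  # prints [[27, 27], [27, 27]]
```

The strong positive responses show the filter firing on the vertical edge running through the middle of the image; a CNN stacks many such filtered maps to build up its picture of a scene.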

CNNs are well suited to classifying images, which is why they pair so well with embedded vision. They can quickly extract features from images and then classify them. Embedded vision systems often offer high bandwidth and low power consumption, precisely what a CNN needs.

This article originally appeared in Vision Online. AIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner.