Embedded systems in industrial vision applications

Embedded vision systems leverage unique technology to serve applications ranging from aerospace to robotics to logistics, transforming machines into intelligent systems.

By AIA September 28, 2018

Embedded vision technology incorporates image capture and image processing into a single vision system, whereas traditional machine vision systems separate the two and handle processing on an external PC. From aerospace to robotics to logistics, embedded vision is enabling new forms of autonomy and productivity, transforming machines into intelligent systems.
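
As a rough illustration of what this looks like in practice, the sketch below runs capture and a simple inspection step on the same device rather than shipping raw frames to an external PC. It is a minimal example, assuming OpenCV is available on the board and a camera is exposed at device index 0; the device index and threshold values are assumptions for illustration only.

```python
# Minimal sketch: capture and processing co-located on the embedded device,
# rather than streaming raw frames to an external PC for analysis.
# Assumes OpenCV and a camera at device index 0 (index is an assumption).
import cv2

cap = cv2.VideoCapture(0)           # on-board camera or sensor module

while True:
    ok, frame = cap.read()          # image capture on the device
    if not ok:
        break

    # Image processing on the same device: a simple defect check by
    # thresholding and counting dark pixels (threshold values are illustrative).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    defect_pixels = cv2.countNonZero(mask)

    if defect_pixels > 500:         # decision made locally, no external PC involved
        print("possible defect:", defect_pixels, "dark pixels")

cap.release()
```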

Embedded vision processors

At the most basic level, an embedded vision system includes a sensor or camera module, a standards-based interface and some form of image processor. In most embedded vision systems, this processor is either a graphics processing unit (GPU) or a field-programmable gate array (FPGA). GPUs are a common choice because they deliver large amounts of parallel computing power, which suits per-pixel operations on image data.

However, GPUs tend to have higher latency than dedicated hardware solutions, making them less than ideal for some applications. FPGAs have been gaining popularity in recent years for their ultra-low latency. As hardware solutions, FPGAs are fast and deliver massive amounts of computing power, but they ultimately lack the flexibility of GPUs.
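
To illustrate why pixel data maps well onto a GPU, the sketch below expresses a per-pixel contrast stretch as a single array expression that the GPU evaluates across all pixels in parallel. It is a minimal sketch, assuming the CuPy library and a CUDA-capable device; the frame is synthetic and the gain/offset values are illustrative.

```python
# Minimal sketch of GPU-friendly pixel processing: a per-pixel operation written
# as an array expression runs in parallel across all pixels on the GPU.
# Assumes CuPy is installed and a CUDA-capable GPU is present.
import numpy as np
import cupy as cp

# Synthetic stand-in for a captured grayscale frame.
frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)

gpu_frame = cp.asarray(frame)                 # copy pixel data to GPU memory
# Contrast stretch applied to every pixel in parallel (gain/offset are illustrative).
stretched = cp.clip(gpu_frame.astype(cp.float32) * 1.5 - 20, 0, 255).astype(cp.uint8)
result = cp.asnumpy(stretched)                # copy the processed frame back to the host

print(result.shape, result.dtype)
```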

Embedded vision imaging modules

Embedded vision systems use an imaging module for image capture, which typically comes in the form of a sensor module or a camera module. Sensor modules have no processing capabilities of their own; they simply transmit raw image data to a host processor, which handles noise reduction and other application-specific processing. Sensor modules are used when a simple, streamlined design is desirable.

Camera modules, on the other hand, are a more sophisticated solution. They use on-board FPGA processors to offload some of the processing from the primary image processor. This lightens the workload on the host processor and lets developers focus on application software. Camera modules have recently emerged as a flexible embedded vision peripheral.
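
The sketch below contrasts the host-side work implied by the two module types: with a sensor module the host must demosaic and denoise the raw data itself, while a camera module's on-board FPGA delivers a frame that is ready for application logic. The raw Bayer layout, frame size and thresholds are assumptions for illustration only, and OpenCV stands in for whatever driver a given module vendor provides.

```python
# Sketch of the host-side difference between the two module types.
# Sensor module: the host receives raw Bayer data and must demosaic/denoise it.
# Camera module: the module's FPGA has already done that, so the host starts
# from a clean frame and runs only application logic.
# Frame data is synthetic; the RGGB layout and thresholds are assumptions.
import numpy as np
import cv2

# --- Sensor module path: preprocessing on the host processor ---
raw_bayer = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # raw sensor readout
rgb = cv2.cvtColor(raw_bayer, cv2.COLOR_BayerRG2BGR)                # demosaic on the host
clean = cv2.fastNlMeansDenoisingColored(rgb)                        # noise reduction on the host

# --- Camera module path: preprocessing already done on the module's FPGA ---
preprocessed = clean          # stands in for a frame delivered ready to use
# Host goes straight to application logic, e.g., a threshold-based presence check.
gray = cv2.cvtColor(preprocessed, cv2.COLOR_BGR2GRAY)
_, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
present = cv2.countNonZero(bright) > 1000
print("part present:", present)
```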

While image processors and imaging modules are core to embedded vision technology, several other components are also involved in an embedded vision system.

This article originally appeared on the AIA website. The AIA is a part of the Association for Advancing Automation (A3). A3 is a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, cvavra@cfemedia.com.

Original content can be found at www.visiononline.org.