Differences between machine vision and embedded vision

Embedded vision and machine vision are both effective, but they have different priorities and interfaces that need to be considered.

By AIA June 9, 2019

Embedded vision and machine vision are highly effective systems. Selecting one system over the other depends on the current and projected application requirements.

Machine vision systems have traditionally relied on a PC. A frame grabber or interface card sends image data to the computer, which analyzes the images and relays information to another part of the system. The system usually includes a camera, a PC, and a cable linking the two. But the broader trend in electronics is toward miniaturization.
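As a rough illustration of that PC-based pipeline (grab an image, analyze it, relay a result), the sketch below uses OpenCV in Python. The camera index, threshold values, and pass/fail rule are illustrative assumptions, not details from any particular system; a real installation would typically use a vendor SDK for the frame grabber or camera interface.

```python
import cv2  # OpenCV; assumes a camera reachable as device 0

# Open the camera. In a production system a frame grabber or GigE/USB
# interface card would be wrapped by a vendor SDK; device index 0 is
# an assumption for this sketch.
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Camera not available")

ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("Failed to grab a frame")

# Simple analysis step, standing in for a real inspection algorithm:
# convert to grayscale and count very bright pixels.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
bright_pixels = cv2.countNonZero(mask)

# "Relay" the result to another part of the system; here it is printed,
# but it could just as well be sent to a PLC or line controller.
print("PASS" if bright_pixels < 5000 else "FAIL: too many bright pixels")
```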

The industry is now seeing common use of single-board computers (SBCs), and camera electronics have also become smaller. Small cameras without housings are now an option and can be integrated directly into compact systems. Because of these developments, highly compact camera systems are being created for new applications. These are embedded vision systems.

Machine vision characteristics

Machine vision typically relies on standard operating systems and interface protocols, and software can be written easily using commercial image processing libraries. The push toward a newer solution was not about performance: PC-based machine vision systems are complex and bulky, and integrating them into existing systems can be tricky because of the number of interfaces involved. Recent technology advances have made the components smaller.

Embedded vision advancements

Embedded vision systems are usually easier to use and integrate than PC-based systems, and they have fewer moving parts and require less maintenance. They often consist only of a camera without a housing (a board-level camera) connected via a connector to a processing board (an embedded board). The components are combined into one device, and images sent from the camera are processed directly on the system's processing board.

With advancements in machine learning, embedded vision systems are able to classify images captured by the camera. In the past, software developers spent a lot of time and energy developing algorithms to classify an item by its characteristics. But machine learning algorithms can learn to distinguish between different items based on “experience” of what has already been seen.
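As a hedged sketch of what such on-device classification can look like, the snippet below runs a single frame through a TensorFlow Lite classifier, a common choice on embedded boards. The model file, label names, and expected input format are assumptions made for illustration, not details from the article.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight TFLite runtime

# Hypothetical model and labels: any image-classification .tflite model
# exported for the target board (e.g. a quantized MobileNet) would do.
MODEL_PATH = "classifier.tflite"
LABELS = ["good_part", "scratched", "missing_component"]

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

def classify(image):
    """Run one frame through the model and return the most likely label.

    `image` is assumed to be an HxWx3 array already resized and typed to
    match the model's input tensor (the exact shape and dtype depend on
    the model actually deployed).
    """
    interpreter.set_tensor(input_detail["index"], np.expand_dims(image, axis=0))
    interpreter.invoke()
    scores = interpreter.get_tensor(output_detail["index"])[0]
    return LABELS[int(np.argmax(scores))]
```

In practice the "experience" the article mentions is captured during training on labeled example images; the deployed model on the embedded board only performs inference, which keeps the runtime footprint small.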

Embedded vision applications

Embedded vision systems can be found in applications ranging from everyday devices to automated smart factories and more. Their small dimensions and onboard processing power make them suitable for a wide range of industrial applications, from manufacturing automotive components, chemicals and pharmaceuticals, and electronics to industrial robotics and automated packaging systems. They are also used in self-driving vehicles and driver assistance systems, drones, biometrics, medical imaging, and space imaging.

This article originally appeared in Vision Online. AIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, cvavra@cfemedia.com.

Original content can be found at www.visiononline.org.

