Machine vision software advances are driving innovation

Companies are relying on software to deliver machine vision advances such as ultra-high-dynamic-range imaging and self-analyzing algorithms.

By Winn Hardin, AIA November 9, 2018

As the machine vision industry prepares for its busiest six months of technology events, companies are putting the final touches on a large number of machine vision software developments. From ultra-high-dynamic-range imaging to self-analyzing algorithms, machine vision customers may come to accept that the secret sauce is in the software, not just in deep learning, which has been touted a great deal of late.

According to Jordan Wisniewski, Teledyne Dalsa development leader, the electronics industry is interested in high-dynamic-range imaging. “As board densities go up and hardware becomes more integrated, electronics customers want to run their lines faster with higher resolution,” Wisniewski said. “As a result, signal-to-noise ratio (SNR) becomes more critical than ever before, especially if you’re trying to image layers below the surface through other layers of electronic assemblies.”

The answer is to tackle the problem from both a hardware and a software perspective, Wisniewski said. Because Teledyne Dalsa makes its own sensors as well as cameras, it can offer customers the opportunity to embed preprocessing algorithms, developed either in-house or by the customer, that solve SNR challenges for very specific application requirements.
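As a simple illustration of the kind of preprocessing that can be pushed into a camera, consider temporal frame averaging, one common way to raise SNR before image data ever reaches the host. The sketch below is generic and assumes nothing about Teledyne Dalsa's embedded algorithms; the average_frames helper is an invented name for illustration.

import numpy as np

def average_frames(frames):
    # Average N captures of the same static scene. Uncorrelated sensor noise
    # falls by roughly sqrt(N), so 16 frames give about a 4x SNR improvement
    # at the cost of acquisition time.
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return stack.mean(axis=0)

# Quick check with simulated noisy captures of a flat gray target.
rng = np.random.default_rng(0)
truth = np.full((480, 640), 128.0, dtype=np.float32)
frames = [truth + rng.normal(0.0, 10.0, truth.shape) for _ in range(16)]
denoised = average_frames(frames)
print(np.std(frames[0] - truth), np.std(denoised - truth))  # roughly 10 vs 2.5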

Other customers want similar embedded intelligence for tagging images with metadata, including the preprocessing actions and values applied, to help the host system extract meaningful information from the hundreds of gigabytes of image data that advanced machine vision cameras produce.

“Data reduction is becoming a key concern for many of our medical and electronics customers, and by embedding key algorithms into our cameras for specific customer needs, we’re helping to solve that challenge,” Wisniewski said. “This is almost a new hybrid class of smart cameras that don’t use an integrated development environment but do have some customer-specific processing inside.”
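The metadata idea can be sketched in a few lines: each image leaves the camera paired with a small record of the preprocessing that was applied, plus quick statistics the host can use to triage frames. The tag_image helper and field names below are hypothetical, not any vendor's API.

import json
import numpy as np

def tag_image(image, actions):
    # Pair an image with a record of the in-camera processing that produced it,
    # plus quick statistics the host can use to discard uninteresting frames.
    meta = {
        "preprocessing": actions,                # e.g. [{"op": "dark_subtract"}]
        "mean_intensity": float(image.mean()),
        "max_intensity": float(image.max()),
    }
    return image, meta

frame = np.random.default_rng(1).integers(0, 255, (480, 640), dtype=np.uint8)
frame, meta = tag_image(frame, [{"op": "dark_subtract"}, {"op": "gain", "value": 1.2}])
print(json.dumps(meta, indent=2))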

OCR bulks up

Datalogic makes image-based machine-readable code readers and also offers laser marking systems. To bring both capabilities into greater alignment, the company has focused on improving its optical character recognition (OCR) and code-reading tools.

“We’re developing tools that can read more challenging codes,” said Bradley Weber, product marketing manager at Datalogic. “Many times characters are not clear black-on-white printing, and many times the print is not perfect or it’s on a varying background.”

Tordivel AS, maker of machine vision software and cameras, is joining Cognex and MVTec GmbH with the introduction of its NeuralOCR tool, which uses deep learning to handle any existing or new font, even when the letters are stamped on metal or raised on rubber and offer no significant visible contrast between lettering and background material. Unlike other neural network-based OCR tools, Tordivel's allows the user to draw the font; the system then trains on that font by creating an image set in which various noise elements are introduced into the images.

“This way, there is no need for generating all the training images you need for a deep-learning OCR tool,” said Thor Vollset, CEO of Tordivel AS. “We can do both training methods, but we think this method is better.”
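The approach amounts to synthesizing a labeled training set from clean, drawn glyphs. The following sketch shows the general idea of noise-based augmentation in Python with NumPy; the augment function and glyph template are invented for illustration and this is not Tordivel's actual pipeline.

import numpy as np

def augment(glyph, rng, n=200):
    # Generate n noisy variants of one clean glyph image (float32, values 0..1).
    samples = []
    for _ in range(n):
        img = glyph.copy()
        img *= rng.uniform(0.6, 1.0)                                 # weak or uneven print
        img += rng.normal(0.0, rng.uniform(0.02, 0.15), img.shape)   # sensor noise
        shift = tuple(int(s) for s in rng.integers(-2, 3, size=2))   # misregistration
        img = np.roll(img, shift, axis=(0, 1))
        samples.append(np.clip(img, 0.0, 1.0))
    return np.stack(samples)

rng = np.random.default_rng(42)
glyph = np.zeros((32, 32), dtype=np.float32)
glyph[4:28, 14:18] = 1.0            # stand-in for a drawn stroke, e.g. the letter "I"
training_set = augment(glyph, rng)
print(training_set.shape)           # (200, 32, 32), all labeled with the same character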

When the tool is used in conjunction with Scorpion’s new Venom (small- to medium-baseline) or Stinger (long-baseline) 3-D cameras, height information can be added to the 2-D image for applications such as stamped metal parts or raised letters on tires.

Meanwhile, Cognex’s deep-learning OCR tool has opened up the reading of dot-matrix codes in automotive, food packaging, and other challenging applications that until recently were beyond the ability of traditional OCR algorithms. According to John Petry, director of marketing for vision software at Cognex, the new OCR tool ships with a pretrained network that needs only a few customer images to learn a new font.

“No one wants to take the time to label hundreds of images of OCR characters, or deal with the inevitable mistakes,” Petry said.
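Shipping a pretrained network and adapting it with a handful of customer images is, in general terms, few-shot fine-tuning: a pretrained feature extractor stays fixed and only a small classification head is retrained. The sketch below shows that general pattern in PyTorch with stand-in data; it is not Cognex's tool or API, and the backbone here is a placeholder rather than real pretrained weights.

import torch
import torch.nn as nn

# Stand-in for a pretrained feature extractor; in a real tool this would ship
# with trained weights. Only the small head below is updated.
backbone = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in backbone.parameters():
    p.requires_grad = False                      # keep the "pretrained" features fixed

head = nn.Linear(16, 36)                         # new font: 10 digits + 26 letters
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A handful of customer-labeled character crops (random stand-ins here).
x = torch.randn(8, 1, 32, 32)
y = torch.randint(0, 36, (8,))

for _ in range(50):                              # few labeled images, few iterations
    optimizer.zero_grad()
    loss = loss_fn(head(backbone(x)), y)
    loss.backward()
    optimizer.step()
print(float(loss))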

3-D and deep learning

Matrox has added a photometric 3-D algorithm to the Matrox Imaging Library (MIL). The algorithm combines multiple images of the same object, each captured under a different illumination direction. By combining the images, MIL can greatly improve the contrast of 3-D features, including stamped or raised text.
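The general technique behind this kind of photometric approach is photometric stereo: several images lit from known directions are combined per pixel to estimate surface orientation, which makes raised or stamped features stand out sharply. The NumPy sketch below illustrates that math under a simple Lambertian assumption; it is not MIL code, and photometric_stereo is an invented name.

import numpy as np

def photometric_stereo(images, light_dirs):
    # images: (N, H, W) intensities of the same scene under N known lights.
    # light_dirs: (N, 3) unit vectors pointing toward each light source.
    # Returns per-pixel unit surface normals of shape (H, W, 3).
    n, h, w = images.shape
    intensities = images.reshape(n, -1)                            # (N, H*W)
    # Lambertian model: I = L @ (albedo * normal); solve per pixel by least squares.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)   # (3, H*W)
    g = g.T.reshape(h, w, 3)
    norm = np.linalg.norm(g, axis=2, keepdims=True)
    return g / np.maximum(norm, 1e-8)

# The z-component of the recovered normals acts as a high-contrast "shape image"
# in which stamped or raised characters are far easier to segment than in any
# single capture.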

Matrox has also added a new rectangle shape-finder tool for object location, since most machine vision applications inspect objects based on common geometric shapes. “Simple shapes solve the most common machine vision applications, including squares, rectangles, circles, and ellipses,” said Arnaud Lina, director of research and innovation at Matrox.

Deep learning approaches develop inspection criteria by analyzing images that have been expertly tagged with a description of the defect each image shows. This approach is strong at handling hard-to-define defects and product variations, but it often requires more processing time and specialized knowledge to implement.

All three major software companies will be offering new deep learning image classification tools that not only help identify a part or feature in an image, but also provide location information similar to pattern-search algorithms.

As customers have more options for solving machine vision challenges through advanced software, support and training become even more important.

Winn Hardin is contributing editor for AIA. This article originally appeared in Vision Online. AIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, cvavra@cfemedia.com.

Original content can be found at www.visiononline.org.