Sensors, Vision

3-D imaging offers new machine vision inspection potential

Advances in 3-D imaging have allowed machine vision users to overcome challenging inspection tasks and tackle applications 2-D imaging cannot.
By Winn Hardin May 17, 2019

Advances in 3-D imaging have allowed vision users to overcome some challenging inspection tasks. In the machine vision marketplace, 3-D imaging continues to mature, tackling applications 2-D imaging cannot. Improving 3-D technologies are cost-effectively simplifying a variety of inspection tasks and taking machine vision inspection to the next level.

“In a manufacturing setting, the fusion of 2-D with 3-D is necessary to measure how well components go together into an assembly and assess the product for final fit, finish, and packaging,” said Terry Arden, CEO of LMI Technologies.

According to David Dechow, principal vision systems architect at Integro Technologies, a systems integrator specializing in machine vision with broad experience helping companies implement 2-D and 3-D imaging for industrial automation, accuracy has improved as well. For inspection tasks in 3-D space, which may include measurement or reconstruction, precision is even more essential than for most robotic guidance or bin-picking tasks.

“End users should now recognize that the components are not just generic 3-D image acquisition systems but rather have become in many cases more targeted in their applicability to specific tasks,” Dechow said.

Many 3-D imaging devices also are extending past the 3-D point cloud to encompass color, texture, and other 2-D imaging features. “The combination of a 3-D image with grayscale ‘texture’ and/or color information is somewhat new in the machine vision marketplace,” Dechow said. “Having this information can be critical to the identification, differentiation, location, or measurement of an object.”

A 3-D image by itself has no grayscale or color content. “Incorporating this content in a spatially correct way into a 3-D image can have great benefit,” Dechow adds. “In many cases, the 3-D profile simply does not fully define the object or its features.”
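The spatially correct fusion Dechow describes can be sketched in a few lines: back-project a depth map into 3-D points and carry the co-registered grayscale value along with each point. This is an illustrative example, not any vendor's API; the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) and the assumption that the depth and grayscale images share one camera view are hypothetical simplifications.

```python
import numpy as np

def depth_to_textured_cloud(depth, gray, fx, fy, cx, cy):
    """Back-project a depth map into a point cloud and attach the
    co-registered grayscale value to each 3-D point.
    depth : (H, W) array of Z distances in mm (0 = no range data)
    gray  : (H, W) grayscale image from the same (assumed) camera view
    fx, fy, cx, cy : pinhole camera intrinsics
    Returns an (N, 4) array of [X, Y, Z, intensity] rows.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # drop pixels with no range data
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # standard pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.column_stack([x, y, z, gray[valid]])

# toy example: a 2x2 depth map with one missing pixel
depth = np.array([[100.0, 0.0], [100.0, 200.0]])
gray = np.array([[10.0, 20.0], [30.0, 40.0]])
cloud = depth_to_textured_cloud(depth, gray, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
print(cloud.shape)  # (3, 4): three valid pixels, each [X, Y, Z, intensity]
```

In a real system the 2-D sensor often sits at an offset from the 3-D sensor, so the registration step also involves an extrinsic transform between the two; the idea of attaching intensity to each 3-D point is the same.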

Additionally, 3-D enables users to measure the angle of a surface as well as planar features such as distance. “This is an important capability to support robots, which move with six degrees of freedom and need both angle and position to operate effectively,” Arden said.
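One common way to extract the surface angle Arden mentions is to fit a plane to a patch of 3-D points and measure its normal. The sketch below (an illustrative approach, not a specific product's algorithm) uses an SVD plane fit and reports the tilt of the patch relative to the camera's Z axis; a robot would combine this angle with the patch centroid to form an approach pose.

```python
import numpy as np

def surface_normal_and_tilt(points):
    """Fit a plane to an (N, 3) patch of 3-D points by SVD and return
    the unit normal plus its tilt angle (degrees) from the Z axis."""
    centered = points - points.mean(axis=0)
    # the right singular vector with the smallest singular value
    # is the direction of least variance, i.e. the plane normal
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    if normal[2] < 0:                    # orient the normal toward the camera (+Z)
        normal = -normal
    tilt_deg = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
    return normal, tilt_deg

# a patch lying in the plane z = x, i.e. tilted 45 degrees about the Y axis
xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
pts = np.column_stack([xy[:, 0], xy[:, 1], xy[:, 0]])
normal, tilt = surface_normal_and_tilt(pts)
print(round(tilt, 1))  # 45.0
```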

Advancing technologies

Machine vision component suppliers are developing products that address common 3-D imaging needs. Hardware and software manufacturer Euresys provides frame grabbers capable of interfacing with traditional 2-D cameras and laser-line projectors to build 3-D height data, which can be combined with images. Additionally, the Euresys Open eVision Easy3D software library accurately calibrates and produces data sets from 2-D image data and 3-D shape data, which can now be processed using traditional machine vision tools such as gauging and measurement for metrology, OCR, and pattern matching to solve challenging vision applications.

“Previously, 3-D imaging was accomplished using multiple cameras looking at the same object from known fixed positions or by using a single camera combined with a structured light pattern or laser point or line generator,” said Mike Cyros, vice president sales & support Americas at Euresys. This required precise alignment of the cameras and pattern generators to be able to calculate, or triangulate, each point in the image to determine its distance from the camera. This process required a lot of computing power.
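The triangulation Cyros describes reduces, in the rectified two-camera case, to a one-line relationship: a point's depth is the focal length times the camera baseline divided by the point's disparity between the two images. A minimal sketch, assuming ideal rectified pinhole cameras (the parameter values are made up for illustration):

```python
def triangulate_depth(f_px, baseline_mm, disparity_px):
    """Classic stereo triangulation: a point seen by two rectified cameras
    separated by `baseline_mm` shifts by `disparity_px` between the images.
    Depth follows Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_mm / disparity_px

# e.g. 1000 px focal length, 50 mm baseline, 25 px disparity
print(triangulate_depth(1000.0, 50.0, 25.0))  # 2000.0 mm
```

The computational cost comes not from this formula but from finding the disparity reliably for every pixel, which is exactly where the alignment precision and processing power Cyros mentions come in.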

“Today’s FPGA and multicore embedded processor architectures make it possible to do these calculations at increasing resolutions and imaging rates, which enables in-line, real-time use of 3-D image data for machine vision,” Cyros said. “What was once difficult to achieve can now be accomplished with off-the-shelf sensors and advanced software libraries, making it possible for systems integrators and machine makers to more easily integrate 3-D measurements into their vision processes. When customers can just concentrate on how to mount a 3-D sensor on their machine and trust that it will produce accurate, repeatable 3-D point clouds, they’re more likely to adopt the technology.”

Machine vision customers also have access to many 3-D measurement technologies, including time of flight, stereo, and laser triangulation. Laser profilers are also used across a broad range of applications from log scanning in sawmills for optimal volume extraction to protein portioning in fish and meat processing and fastener location inspection in cell phone assembly.

In all these cases, the object to scan is moving through the scanning plane and a 3-D surface is built up on-the-fly for inspection. Fringe projection is ideal in robot metrology to measure features such as slots, holes, openings, and studs in automotive body-in-white applications. During inspection, the robot remains stationary for the time it takes to snap a 3-D point cloud.
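Building a surface "on-the-fly," as described above, amounts to stacking successive laser-line height profiles along the direction of travel. A minimal sketch, assuming one profile per encoder step (the array layout and `step_mm` spacing are illustrative, not any profiler's actual output format):

```python
import numpy as np

def stack_profiles(profiles, step_mm):
    """Assemble successive laser-line height profiles into a 3-D height map.
    profiles : list of 1-D arrays of Z heights across the laser line,
               one per scan as the object moves through the plane
    step_mm  : travel distance between consecutive scans
    Returns (heightmap, y_positions along the direction of travel)."""
    heightmap = np.vstack(profiles)            # rows = successive scan positions
    y = np.arange(len(profiles)) * step_mm     # position of each row
    return heightmap, y

# three scans of a 4-point laser line, object moving 0.5 mm per scan
scans = [np.array([0.0, 1.0, 1.0, 0.0]),
         np.array([0.0, 2.0, 2.0, 0.0]),
         np.array([0.0, 1.0, 1.0, 0.0])]
surface, y = stack_profiles(scans, step_mm=0.5)
print(surface.shape, y.tolist())  # (3, 4) [0.0, 0.5, 1.0]
```

The resulting height map can then be inspected with the same gauging and measurement tools used on static 3-D data.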

“Laser triangulation and stereo fringe projection offer high accuracy and speed, and their integrated bandpass filters allow them to tolerate high levels of ambient light in order to deliver reliable data for inline automation, inspection, and optimization processes,” Arden said.

Meanwhile, industrial robot manufacturer FANUC America Corporation offers three different technologies using structured light for vision-guided robotics, one of the most common uses for 3-D vision. The company’s 3DL product uses two laser lines in a 2-D image to produce one 3-D pose result. The 3DA uses multi-image stereo with two digital 2-D cameras and a DLP light projector. The 3DV uses the same approach but with only one image, making it faster.

“The multi-image approach can provide more accurate data than the single image, but it really depends on the application as to which one is used,” said David Bruce, engineering manager at FANUC. “The addition of 3-D data makes part segmentation easier and allows for precise 3-D location to be extracted, which for robotic guidance is huge.”

Bin picking, as a result, is one of the largest applications for 3-D imaging. “You need accurate 3-D positional information of each part in the bin, and there’s no good way to do this with 2-D,” Bruce said. “Tracking parts on a conveyor has a lot of advantages if you can use 3-D because the 2-D contrast between the part and the belt is not an issue when using 3-D.”

Overcoming obstacles

Like any machine vision technology, 3-D imaging presents challenges. “Part reflectivity is a consideration with structured-light stereo 3-D imaging. If the part is highly reflective, the depth image or point cloud algorithms will have issues doing this stereo calculation,” Bruce said, adding that the same issue applies if the part is not reflective enough, which can occur with dark or black parts.

Positional accuracy can be another trouble spot. “You’ve got to mount, or fixture, the sensors in a very accurate, precise way, and that can be difficult,” Cyros said.

As with any new machine vision technology, educating end users can be a struggle. “Many customers aren’t familiar with the technology and hold various myths about 3-D, such as the widely held misconception that 3-D is much more difficult to set up or maintain or is somehow more expensive than 2-D,” Arden said.

When it comes to advanced machine vision technologies like 3-D imaging, end users expect the same benefits they always do — ease-of-use and cost-effectiveness. Vision product manufacturers are addressing these demands by developing targeted solutions that improve accuracy and reliability in numerous inspection tasks.

Winn Hardin is contributing editor for AIA. This article originally appeared in Vision Online. AIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, cvavra@cfemedia.com.


