Emerging 3-D vision technologies for robot and machine vision

3-D machine vision is a growing trend that delivers accurate, real-time information to improve performance in automation applications by detecting objects regardless of their position or orientation.

New applications for robots call for increased speed and the ability to find parts positioned randomly on moving conveyors, stacked in bins, or arranged on pallets. Machine vision systems are being paired with robots to help them locate and process these parts.

Often, 2-D systems work well for vision-guided robotics (VGR). 2-D VGR systems can quickly locate randomly positioned parts that lie in a flat plane relative to the robot, and they are usually easier to implement, requiring only a single digital camera and software that analyzes the image. However, 3-D VGR systems let robots locate and process parts in all three dimensions.
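As a concrete illustration of the 2-D case, here is a minimal sketch: once the camera has been calibrated to the robot's work plane, a homography maps image pixels to plane coordinates, and two detected features give the part's orientation. All values here (the matrix H, the pixel coordinates, the helper name pixel_to_plane) are hypothetical, not taken from the article.

```python
# Minimal 2-D VGR sketch, assuming a 3x3 homography H that has been
# calibrated to map image pixels to robot-plane coordinates (values below
# are hypothetical). A pixel (u, v) on the plane maps to (x, y) in mm.
import numpy as np

H = np.array([[0.50, 0.01, -120.0],   # assumed pixels -> mm calibration
              [0.02, 0.48,  -95.0],
              [0.0,  0.0,     1.0]])

def pixel_to_plane(u: float, v: float) -> tuple[float, float]:
    """Map an image pixel to (x, y) on the robot's work plane in mm."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w               # normalize homogeneous coordinates

# Part orientation from two detected features (e.g., two holes on the part):
p1 = pixel_to_plane(310, 220)
p2 = pixel_to_plane(415, 240)
angle = np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0]))
print(f"pick point: {p1}, orientation: {angle:.1f} deg")
```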

Applications of 3-D vision

3-D vision gives robots more flexibility and independence than their 2-D-only counterparts. It lets the machine know whether an object is lying down, upright, or hanging.

Robots with 3-D machine vision can perform a variety of tasks without reprogramming and can account for unexpected variables in their work environments. 3-D vision allows robots to know what is in front of them and react appropriately. 3-D imaging is currently used in metrology, guidance, and defect-analysis systems.

Types of 3-D vision technologies

There are different ways to implement 3-D machine vision. Active techniques, like time of flight, use an active light source to provide distance information. Passive techniques, like stereo vision, rely only on the cameras' image data and work much like the depth perception of the human visual system.

3-D information on a part is obtained by observing a common feature from two distinct viewpoints. The shift (disparity) between the feature's positions in the two views is triangulated into an X-Y-Z value for that feature. If multiple features are located on the same part, the part's 3-D orientation can be calculated as well.
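A minimal sketch of this triangulation for a rectified stereo pair follows; the focal length, baseline, and principal point are assumed calibration values, none of which come from the article.

```python
# Stereo triangulation sketch for a rectified camera pair (assumed values).
f, B = 800.0, 60.0          # focal length [px], baseline [mm] (assumed)
cx, cy = 320.0, 240.0       # principal point [px] (assumed)

def triangulate(u_left: float, v: float, u_right: float):
    """Return (X, Y, Z) in mm for a feature matched in both rectified views."""
    disparity = u_left - u_right          # shift between the two viewpoints
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    Z = f * B / disparity                 # depth from similar triangles
    X = (u_left - cx) * Z / f             # back-project into camera frame
    Y = (v - cy) * Z / f
    return X, Y, Z

print(triangulate(352.0, 260.0, 310.0))   # X-Y-Z of one observed feature
```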

3-D stereo vision can be very inexpensive: a single 2-D camera can be mounted on a robot, which moves the camera between two different points of view. The main disadvantage of stereo vision is that only one part can be located per “snap” of the camera.

Time-of-flight (TOF) 3-D sensors measure how long it takes light to travel to the scene and back to each element in the sensor array, which is arranged much like the pixel array of a CCD or CMOS vision sensor. With this method, each element in the array returns Z-axis (distance) information, and together these values form a point cloud.
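As a rough sketch of the direct TOF idea: halve each measured round-trip time for the one-way path, scale by the speed of light to get depth, and back-project every pixel into X-Y-Z. The tiny 2x2 array of times and the pinhole intrinsics below are illustrative values only.

```python
# Direct TOF sketch: per-pixel round-trip times -> point cloud.
# The 2x2 time array and intrinsics are assumed, illustrative values.
import numpy as np

C = 299_792_458.0                      # speed of light [m/s]
f, cx, cy = 400.0, 1.0, 1.0            # assumed pinhole intrinsics [px]

round_trip_ns = np.array([[13.3, 13.4],   # light travel times out to the
                          [13.5, 20.1]])  # scene and back, per pixel [ns]

Z = C * round_trip_ns * 1e-9 / 2.0        # halve for the one-way distance
v, u = np.indices(Z.shape)                # pixel row/column grid
X = (u - cx) * Z / f                      # back-project each pixel
Y = (v - cy) * Z / f
point_cloud = np.dstack([X, Y, Z]).reshape(-1, 3)   # one X-Y-Z per pixel
print(point_cloud)
```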

The phase shift between the emitted light and the received light provides enough information to calculate the round-trip time; multiplying that time by the speed of light (and halving the result for the one-way path) yields the distance, and from it the spatial location. TOF sensors typically provide lower Z-resolution than passive techniques such as stereo vision, but at much higher frame rates.
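The phase-to-distance step can be sketched directly. The 20 MHz modulation frequency and the sample phase value are assumptions for illustration; real sensors differ. The same arithmetic also shows why phase-based TOF has a limited unambiguous range.

```python
# Indirect (phase-shift) TOF sketch; modulation frequency is an assumed value.
import math

C = 299_792_458.0        # speed of light [m/s]
F_MOD = 20e6             # modulation frequency [Hz] (assumed)

def phase_to_distance(phase_shift_rad: float) -> float:
    """Distance in metres from the emitted-vs-received phase shift."""
    # phase / (2*pi*F_MOD) is the round-trip time; halving it for the
    # one-way path gives the usual c * phase / (4*pi*f) form.
    return C * phase_shift_rad / (4.0 * math.pi * F_MOD)

print(phase_to_distance(math.pi / 2))    # ~1.87 m for a 90-degree shift
# Phase wraps at 2*pi, so range is unambiguous only up to c / (2 * F_MOD):
print(C / (2 * F_MOD))                   # ~7.5 m ambiguity interval
```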

This article originally appeared in Vision Online. AIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, [email protected].