Machine vision boosts quality for mass-produced robotic workcells

Inside Machines: Feedback from machine vision adjusts robotic movement and enhances manufacturing quality. Vision-guided robots have greater position accuracy, providing closed-loop control.

By Nick Tebeau July 16, 2013

On any given day, I can walk out of my office and see dozens, if not hundreds, of robots standing in neat rows waiting to become integration-ready. They all look exactly the same, but the truth is, they’re not. Each piece of metal, each servo joint, is subtly different from the next. Put those robots to work, and depending on the temperature in the workcell, the robot’s physical dimensions will change again as metals expand (or contract) and electrical efficiencies vary.

These mechanical and electrical variations (or errors) stack one atop the next to determine the overall accuracy of the robot. Accuracy is the robot’s ability to reach a programmed point in space; repeatability is its ability to return to a previously reached point on demand. Customers regularly fail by assuming a robot with 0.1 mm repeatability can position a part to within 0.1 mm accuracy, overlooking the fact that a robot’s true accuracy is usually far worse than its repeatability.
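The distinction can be made concrete with a few lines of code. The sketch below is a hypothetical helper, loosely modeled on the ISO 9283 style of definition: accuracy is the distance from the mean reached position to the commanded point, and repeatability is the spread of individual positions around that mean. All numbers are illustrative.

```python
import math

def accuracy_and_repeatability(commanded, measured):
    """Illustrative metrics: accuracy = distance from the mean reached
    position to the commanded point; repeatability = max deviation of
    individual positions from that mean. Hypothetical helper."""
    n = len(measured)
    mean = tuple(sum(p[i] for p in measured) / n for i in range(3))
    dist = lambda a, b: math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))
    accuracy = dist(mean, commanded)
    repeatability = max(dist(p, mean) for p in measured)
    return accuracy, repeatability

# A robot that clusters tightly (good repeatability) around the
# wrong spot (poor accuracy), all positions in mm:
cmd = (100.0, 50.0, 25.0)
hits = [(100.4, 50.1, 25.0), (100.5, 50.0, 25.1), (100.4, 50.1, 25.1)]
acc, rep = accuracy_and_repeatability(cmd, hits)
```

Here the robot repeats to about 0.1 mm but lands more than 0.4 mm from the commanded point, which is exactly the trap described above.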

Vision-guided robotics (VGR), the use of industrial cameras connected to computers running image-processing software to determine an offset for robot control, counters these effects by providing an objective measurement of the robot’s position and orientation in 3D space versus where it is supposed to be. But machine vision has a stack error of its own, one that depends on many factors, from intrinsic changes in lighting and sensor response to extrinsic variations in surface finish and part presentation introduced by material handling systems.

Unfortunately, most end users do not understand the sources of accuracy and repeatability, or how to account for stack errors in robotic and vision systems to create a VGR solution that works. It takes experience with machine vision and robot programming to define a complete application specification based on true robotic stack error matched with the right machine vision system.
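One common way to budget these stacked errors, assuming the individual sources are independent, is a root-sum-square (RSS) combination. The function and the error values below are illustrative, not figures from the article; correlated error sources would have to be added linearly instead.

```python
import math

def rss_stack(errors_mm):
    """Root-sum-square of independent error contributions (a common
    engineering approximation; correlated errors add linearly)."""
    return math.sqrt(sum(e ** 2 for e in errors_mm))

# Illustrative contributions: robot kinematic error, thermal drift,
# camera calibration residual, part-presentation variation (all mm).
stack = rss_stack([0.30, 0.10, 0.05, 0.15])
```

A budget like this makes it obvious when a single dominant source (here, the robot’s kinematic error) swamps everything else, and whether the total clears the application’s accuracy requirement.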

Embracing the unique

Every VGR application is different. It’s different because every robot, environment, manufactured part, and process is different. As a result, there are many ways to peel the proverbial onion, but the ultimate goal is to design a system that accomplishes specific tasks, at a specific rate, based on known parameters, for the least cost.

And it all starts with a complete and thorough understanding of the application needs. What is the part? How does it vary in size, texture, and orientation to the robot based on actual production, not just CAD files? From temperature to changes in light, what are the ambient conditions of the workcell? What does the robot need to do with the part, and how will that affect your choice of robot, including speed, force, and the effect of part mass and momentum on robotic position? (See related article on inertia measurements.)

Armed with this information (and more), most customers will have a preference for a specific robot original equipment manufacturer (OEM) based on what’s already installed on their plant floor. Based on the part variations and part-position requirements, an experienced designer can help select the specific robot model for the application. Each robot is a one-of-a-kind kinematic model composed of unique mechanical segments and unique electrical (or hydraulic) controls. Most robot OEMs offer a service that measures an individual robot’s absolute accuracy, which can be useful for applications where the robotic and vision stack error is very close to the application’s material-handling accuracy and repeatability requirements.

After defining the application requirements and selecting the right robot, the designer has to figure out how to program the robot to do its job. The robot will need help finding incoming parts, either through fixtures that consistently present the part to the robot in a given 3D location and orientation, or through the use of a vision system to provide an offset to the standard robot path to accommodate variations in part position and orientation.
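In planar terms, the vision-supplied offset is typically composed with the taught pick pose as a rigid transform. The sketch below is a hypothetical helper for the 2D (x, y, theta) case; a real cell applies the offset in a calibrated robot base or tool frame, often in full 3D.

```python
import math

def apply_offset(taught, offset):
    """Compose a vision-reported part offset (x, y, theta) with the
    taught pick pose: rotate the taught position by the part's
    rotation, then translate. Hypothetical planar sketch."""
    tx, ty, tth = taught
    ox, oy, oth = offset
    c, s = math.cos(oth), math.sin(oth)
    return (c * tx - s * ty + ox, s * tx + c * ty + oy, tth + oth)

# Part found rotated 90 deg with no translation: the taught pick
# point (10, 0) swings around to (0, 10).
corrected = apply_offset((10.0, 0.0, 0.0), (0.0, 0.0, math.pi / 2))
```

With a zero offset the function returns the taught pose unchanged, which is a useful sanity check when commissioning the camera-to-robot calibration.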

Today, more manufacturers are using vision rather than fixtures because fixtures are a custom expense, often lack the flexibility to handle different parts on the same line without additional cost, and do not offer the chance to reuse robotic workcells in other parts of the plant. Machine vision systems can be reprogrammed and, assuming the system and its components meet the specific needs of the new application (a big if), can be redeployed around the plant like any other asset.

Put vision into VGR

Once the application has been clearly defined, the next step is to determine what sort of information the robot needs from the vision system to perform to the necessary specification. Is the part relatively flat on a flat conveyor so a 2D vision system will be sufficient? Does the application require orientation and relative height information in addition to X and Y information—therefore, a 2.5D vision solution? Or do you require absolute 3D information for hole inspection in addition to providing an offset for the lugs or pick points on the part?
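Those questions amount to a coarse decision tree. A toy sketch follows; the three booleans are an assumption for illustration, and real selection also weighs accuracy, cycle time, and cost.

```python
def vision_dimensionality(part_flat, need_height, need_absolute_3d):
    """Coarse decision sketch mirroring the questions above
    (hypothetical; not a substitute for a full application study)."""
    if need_absolute_3d:
        return "3D"       # absolute 3D data, e.g., hole inspection
    if need_height or not part_flat:
        return "2.5D"     # X, Y, orientation, plus relative height
    return "2D"           # flat part on a flat conveyor

choice = vision_dimensionality(part_flat=True, need_height=True,
                               need_absolute_3d=False)
```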

While 2D and 2.5D solutions are relatively straightforward and usually can be solved with one camera, assuming that the minimum spatial resolution per pixel can be achieved across the necessary field of view, designers have several options when it comes to 3D vision: single-camera 3D, single- or multi-camera 3D with structured-light triangulation, and multi-camera stereoscopic vision. Each of these approaches offers advantages and disadvantages. For example, single-camera 3D solutions can be extremely accurate across relatively narrow fields of view but may require multiple images to create the 3D point set. Stereoscopic vision is highly accurate across large fields of view and can be further improved with structured light sources, such as light-grating projectors, LEDs, or laser line generators, but requires more hardware. All these systems depend on frequent calibration routines to ensure that bumps, thermal expansion, and other factors do not generate inaccurate 3D data.
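The spatial-resolution check mentioned above is simple arithmetic for a 2D system. The sketch below uses a rule-of-thumb factor of three pixels per smallest feature; that factor is an assumption, and subpixel tools can do considerably better.

```python
def resolution_ok(sensor_px, fov_mm, smallest_feature_mm, px_per_feature=3):
    """True if the smallest feature of interest spans at least
    px_per_feature pixels across the given field of view."""
    mm_per_px = fov_mm / sensor_px          # ground coverage per pixel
    return smallest_feature_mm >= px_per_feature * mm_per_px

# Hypothetical 2,448-pixel-wide sensor imaging a 500 mm field of
# view, needing to resolve a 1 mm feature:
ok = resolution_ok(2448, 500.0, 1.0)
```

The same arithmetic quickly shows when a single camera cannot cover the field of view at the required resolution and the design must move to a higher-resolution sensor or multiple cameras.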

One of the least understood factors in a machine vision system is lighting. Lighting and, more importantly, changes in lighting will greatly affect machine vision systems, regardless of the dimensional aspects of the vision solution. Lighting is often treated as the last part of the vision solution, but it should be considered early in the design, since the light’s interaction with the part, as perceived by the camera, is the basis of a successful machine vision solution.

For example, if your workcell is in a room with windows, infrared lights may not be the best choice because sunlight is strongest at the red end of the visible spectrum and into the infrared. To determine the best “color” of light (white, blue, amber, red, etc.), understand the physics of light and optics. Does the VGR workcell need to distinguish very similar colors on the part, requiring a color camera and light? Or are the colors different enough that a grayscale camera with a bandpass filter and a complementary colored light can offer a cheaper solution with less data processing?

Much can be said on the art of matching colored illumination, but a basic rule of thumb is: don’t use a light source similar in color to the ambient light in the room, and don’t use a light whose color is opposite the part’s color, because the part will absorb that light and appear dark (unless you are deliberately using backlight or darkfield illumination).

Simple is as simple does

A successful VGR solution requires careful consideration of the application and specific performance requirements for the robot and the vision system, as well as the total performance of the combined VGR solution with respect to the application and associated production equipment. The solution is often complex. And while it would be useful for your VGR designer to have both robotic programming and vision-system design expertise, few companies offer both. If you cannot find such an integrator to help guide your system development, be sure to ask your vision or robotic integrator about its partners on the other side of the design equation. What is their experience? What can they demonstrate?

In all fairness, VGR solutions are not necessarily the most complex automation problems that machine vision will help solve. Many robotic suppliers provide optional machine vision systems that are well integrated into their robotic control systems. However, a vision system is not a vision solution. The physics necessary to optimize the light, camera, and optics part of the equation alone can require considerable knowledge and expertise. Don’t be afraid to ask suppliers about their past experience and client referrals. Also, associations [such as the Automated Imaging Association (AIA), the North American trade association for the machine vision industry, and the Control System Integrators Association (CSIA)] have lists of companies that have passed certified vision professional and certified system integrator courses. These companies have proven their system design knowledge across a wide range of applications and design environments. Working with the right supplier, a VGR solution can put the competitive edge back into an operation.

– Nick Tebeau is manager vision solutions, business unit industrial solutions, Leoni. Edited by Mark T. Hoske, content manager, CFE Media, Control Engineering and Plant Engineering, mhoske@cfemedia.com

ONLINE

At www.controleng.com/archive, under August, see this article for links and more information.

www.leonivisionsolutions.com 

Key concepts

  • Using feedback from machine vision positions robots more precisely for higher quality.
  • Match the technology to the application.
  • Proper lighting helps machine vision accuracy.

Consider this

If vision guides robotics, what other motion-control applications could it enhance?