Vision for Process Control

Machine vision is generally better than human vision in inspection and control tasks that are fast, precise, and repetitive. To be effective, a machine vision system also needs to control “hands” to move parts into its field of view, to sort parts, change process settings, or guide assembly.

By Ben Dawson & Wes Philbrook, Dalsa IPD, May 1, 2008

Machine vision requires that parts be placed in a known location and orientation, and that lighting be carefully controlled. It is also less tolerant of part variations than human vision, which can be a benefit when the variations indicate defects. To its credit, machine vision can make hundreds of precise measurements per second and, once installed, provides inexpensive and reliable labor.

Components of a machine vision system.

Most machine vision systems have components for part positioning, lighting, image forming (lens), one or more cameras, a vision processor and an interface to process and motion control. Some engineers also use part-in-place sensors, such as a photosensor, to signal when a part is ready for inspection.

For example, a manufacturer needed to measure the dimensions of stamped metal parts made by a progressive punch press. The manufacturer had been measuring sample parts off line, so die wear or damage was not detected until thousands of bad parts had been produced. System integrator Faber Industrial Technologies replaced that off-line manual process with a vision system that inspects each part and stops production when the part dimensions show that a die is worn or damaged. The result was improved quality, less scrap, and higher productivity.

In this application, the punch press moves a strip of parts into the camera’s field of view. A telecentric lens forms collimated light (a column of parallel light rays) from behind the part into an image. The image is recorded by a camera and analyzed by a Dalsa IPD Vision Appliance, which is a computer specialized for machine vision. The system triggers image acquisition based on recognizing an index hole in the carrier strip. If the inspection fails, the computer signals a PLC to shut down the stamping process.
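The core measurement in a backlit, telecentric setup like this one is easy to sketch. The following is a minimal illustration in Python, assuming the OpenCV library and placeholder values for the calibration scale, nominal width, and tolerance; it is not the Sherlock or iNspect implementation, just the general technique of thresholding the silhouette and checking a width against limits.

```python
import cv2
import numpy as np

# Hypothetical calibration and limits (placeholders, not values from the article)
MM_PER_PIXEL = 0.02          # scale established from a calibration target
NOMINAL_WIDTH_MM = 12.50     # nominal stamped-part width
TOLERANCE_MM = 0.05          # allowed deviation before flagging die wear

def measure_part_width(image_path):
    """Measure the width of a backlit (silhouetted) part in millimeters."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Backlighting makes the part dark on a bright field; Otsu finds the split.
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # The bounding box of the largest dark blob approximates the part outline.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    part = max(contours, key=cv2.contourArea)
    _, _, w, _ = cv2.boundingRect(part)
    return w * MM_PER_PIXEL

def part_ok(width_mm):
    """True if the measured width is within tolerance of nominal."""
    return abs(width_mm - NOMINAL_WIDTH_MM) <= TOLERANCE_MM

if __name__ == "__main__":
    width = measure_part_width("part.png")
    print(f"width = {width:.3f} mm, pass = {part_ok(width)}")
```

Because the telecentric lens removes magnification changes with part distance, a single millimeters-per-pixel scale factor is enough to convert the pixel width into a physical dimension.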

The computer runs Dalsa’s Sherlock or iNspect software, both of which have intuitive, graphical user interfaces that make it easy to develop machine vision inspection and control applications, even if you are not very familiar with machine vision.

Hand-eye coordination

The computer has to communicate with motion and process control systems to be effective. Physically, this communication goes through digital inputs and outputs, RS-232 lines, or Ethernet. When communicating with PLCs or motion-control hardware, the computer typically uses standard protocols.

The model for interaction with a PLC is one of variables, where a variable is a data item, such as a short integer, that can be set and read by both the computer and the PLC. Communication between the Vision Appliance and a PLC driving a robot might consist of:

Computer loads variables in the PLC with the coordinates of a part to pick up;

Computer signals a change of state (CoS) to the PLC by setting a flag in another variable; and

PLC instructs the robot to move and signals success by setting a flag variable in the computer.

Because there are no true “events” in this model, the PLC has to poll for flags that indicate a CoS. Vision Appliances have a special feature where variables can be marked as “events,” so that any change to such a variable immediately causes the Vision Appliance to react. The exchange is sketched below.
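To make the variable-and-flag handshake concrete, here is a minimal sketch in Python. The register numbers, the `DictPLC` stand-in, and its `write_int`/`read_int` methods are hypothetical; the article does not name a particular protocol. The point is the order of operations: load the data variables first, raise the change-of-state flag last, and poll for the reply flag.

```python
import time

# Hypothetical register map shared by the vision computer and the PLC
# (the article describes the variable model but not specific addresses).
REG_X, REG_Y, REG_ANGLE = 100, 101, 102   # pick-up coordinates for the robot
REG_NEW_PART_FLAG = 110                    # CoS flag set by the vision computer
REG_MOVE_DONE_FLAG = 111                   # flag set by the PLC when the robot finishes

class DictPLC:
    """In-memory stand-in for a real PLC register interface, for illustration only."""
    def __init__(self):
        self.regs = {}
    def write_int(self, reg, value):
        self.regs[reg] = value
    def read_int(self, reg):
        return self.regs.get(reg, 0)

def send_pick_location(plc, x, y, angle):
    """Vision side: load the coordinate variables, then raise the CoS flag last."""
    plc.write_int(REG_X, x)
    plc.write_int(REG_Y, y)
    plc.write_int(REG_ANGLE, angle)
    plc.write_int(REG_NEW_PART_FLAG, 1)

def wait_for_robot(plc, timeout_s=5.0):
    """Vision side: poll the 'move done' flag the PLC sets after the robot moves."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if plc.read_int(REG_MOVE_DONE_FLAG) == 1:
            plc.write_int(REG_MOVE_DONE_FLAG, 0)   # acknowledge and clear
            return True
        time.sleep(0.01)
    return False

plc = DictPLC()
send_pick_location(plc, x=412, y=187, angle=0)
plc.write_int(REG_MOVE_DONE_FLAG, 1)       # pretend the PLC finished the move
print(wait_for_robot(plc))                 # True
```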

This coordination between the vision system and the process or motion controller can range from removing defective parts (perhaps with a “kicker”) or adjusting some aspect of a process to sophisticated interactions between these component systems. Another common use of vision for process control is to have the machine vision system read a product’s barcode, date and lot code (OCR or OCV), or label pattern. The result is used to sort products, check date codes, and ensure that the correct label is on a product.
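The decision logic behind that kind of sorting is simple once the read is in hand. A sketch, assuming hypothetical expected values for the current production run and decoded strings (or None for a failed read) coming from the vision system’s barcode and OCR tools:

```python
EXPECTED_BARCODE = "012345678905"   # hypothetical code for the current product run
EXPECTED_LOT = "L0805A"             # hypothetical lot code for the current batch

def check_product(barcode, lot_code):
    """Decide accept/reject from the barcode and lot-code reads (OCR/OCV)."""
    if barcode is None or lot_code is None:
        return "reject: no read"          # unreadable or missing marking
    if barcode != EXPECTED_BARCODE:
        return "reject: wrong label"      # wrong product or label pattern
    if lot_code != EXPECTED_LOT:
        return "reject: wrong lot code"   # mis-printed or out-of-date marking
    return "accept"

print(check_product("012345678905", "L0805A"))   # accept
print(check_product("012345678905", "L0712B"))   # reject: wrong lot code
```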

An example combining sophisticated machine vision, process control, and motion control is de-palletizing one-gallon paint cans. These cans ship on a pallet with six layers of 56 cans per layer, with each layer separated by a slip sheet, a large rectangle of cardboard. The top layer of cans is covered by another slip sheet and a “picture frame,” an open rectangle of wood that prevents the straps that bind the pallet stack from damaging the top layer of cans. The customer had been using manual labor to remove cans from the pallet stack and put them into the fill line. To reduce labor costs and improve speed, the company employed system integrators to automate the operation.

The robot’s end effectors pick up half the cans in a pallet stack layer and load them onto the fill station conveyer (center, behind the lifted cans).

In this application, a forklift operator removes a pallet stack from a truck, puts it on a conveyer, and cuts the binding straps. Part-in-place sensors and motor drives on the conveyer queue pallet stacks for de-palletizing by a vision-guided robot.

The machine-vision camera is mounted slightly to one side of the pallet stack, so it views the stack at an angle. The robot arm is equipped with custom end effectors for gripping pallet stack components. An IPD Vision Appliance processes pallet stack images, identifying pallet stack components and guiding the robot in removing them.

The vision system first finds the “picture frame” and determines its position and orientation. It then directs the robot to remove the picture frame and stack it atop a pile. The system next finds the top slip sheet and directs the robot to remove it using suction cups. This exposes the top layer of cans.

The vision system finds each can by looking for its roughly circular, bright rim. When it finds a can, the vision system compares its measured position to a calibrated reference position. If any can is more than 30 mm from its reference position, the Vision Appliance stops the process until an operator corrects the can position, to avoid crushing the can.
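A minimal sketch of that tolerance check, assuming the can centers have already been located and converted to millimeters in the plane of the layer; the 30 mm limit is the one given above, and everything else is a placeholder:

```python
import math

MAX_OFFSET_MM = 30.0   # beyond this offset, stop the process and call an operator

def layer_within_tolerance(found_centers, reference_centers):
    """Compare each measured can center to its calibrated reference position.

    Both arguments are lists of (x, y) tuples in millimeters, in the same order.
    Returns (ok, worst_offset_mm).
    """
    worst = 0.0
    for (fx, fy), (rx, ry) in zip(found_centers, reference_centers):
        offset = math.hypot(fx - rx, fy - ry)
        worst = max(worst, offset)
    return worst <= MAX_OFFSET_MM, worst

# Example: one can found 40 mm from its reference position fails the check,
# so the vision system would signal the PLC to pause the robot.
ok, worst = layer_within_tolerance([(40.0, 0.0)], [(0.0, 0.0)])
print(ok, worst)   # False 40.0
```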

When all cans’ positions are within tolerance, the robot’s end effectors (its “hands”) pick up half (26) of the cans and place them onto the fill line. The robot then picks up the other half and places them on the fill line. It then removes the next slip sheet to expose the next layer of cans. This process repeats until the pallet is exposed. The robot then uses a gripper in its end effectors to remove the pallet and stack it on a pallet pile.
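Taken together, the de-palletizing job is a small, fixed sequence. The sketch below shows that control flow in Python; the `vision` and `robot` objects and their methods are hypothetical stand-ins for the vision searches and robot commands described above, not the integrators’ actual software.

```python
LAYERS = 6   # six layers of cans per pallet stack, from the application description

def depalletize_stack(vision, robot):
    """High-level sequence: picture frame, then per-layer slip sheet and cans,
    then the empty pallet."""
    # Remove the wooden "picture frame" protecting the top layer.
    robot.remove_picture_frame(vision.find_picture_frame())

    for layer in range(LAYERS):
        # Remove the slip sheet covering this layer (suction cups on the end effectors).
        robot.remove_slip_sheet(vision.find_slip_sheet())

        # Locate every can; pause until an operator fixes any can that is too far off.
        centers = vision.find_can_centers(layer)
        while not vision.layer_within_tolerance(centers):
            robot.pause_for_operator()
            centers = vision.find_can_centers(layer)

        # Pick the layer in two halves and load them onto the fill-line conveyer.
        half = len(centers) // 2
        robot.pick_and_place(centers[:half])
        robot.pick_and_place(centers[half:])

    # All layers removed: grip the empty pallet and stack it on the pallet pile.
    robot.remove_pallet()
```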

As the system removes layers of cans, the apparent sizes of cans in the next layer decrease due to perspective changes. The off-center camera location introduces additional lens and perspective distortions, so that can openings appear as ovals of varying sizes rather than circles.

The vision system’s challenges are to recognize each of the components and to locate the center opening of each can despite shifts in pallet location and rotation, and despite fairly large changes in the cans’ apparent sizes due to lens and perspective distortion.
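The size change by itself follows the ordinary pinhole-camera relation: the apparent rim radius is roughly f·R/Z, so it shrinks as the camera-to-layer distance Z grows with each removed layer. A small sketch with placeholder numbers (the article gives no camera geometry):

```python
# Pinhole-camera estimate of how much a can rim shrinks from layer to layer.
# All numbers are placeholders for illustration, not from the installation.
FOCAL_LENGTH_MM = 12.0      # lens focal length
PIXEL_SIZE_MM = 0.005       # camera pixel pitch
CAN_RADIUS_MM = 83.0        # roughly half a one-gallon can's diameter
TOP_LAYER_DIST_MM = 2000.0  # camera to the top layer of cans
LAYER_HEIGHT_MM = 200.0     # extra distance added as each layer is removed

def apparent_radius_px(distance_mm):
    """Apparent rim radius in pixels at a given camera-to-layer distance."""
    return (FOCAL_LENGTH_MM * CAN_RADIUS_MM / distance_mm) / PIXEL_SIZE_MM

for layer in range(6):
    z = TOP_LAYER_DIST_MM + layer * LAYER_HEIGHT_MM
    print(f"layer {layer + 1}: ~{apparent_radius_px(z):.0f} px")
```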

Lighting is key

As always, lighting is a key part of the solution. Directed fluorescent lighting was used to highlight the can rims without illuminating the can interiors excessively, and to provide good illumination of the picture frame and pallet.

A second key was to know, in advance, exactly where each can center should be, using the calibrated reference positions. This limited the search range for each can rim, which increased operating speed and reduced the chance that other bright patterns, such as some can interiors, would be confused with a can rim.
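One way to implement that limited search is to window the image around each calibrated reference position before looking for a rim. The sketch below uses OpenCV’s Hough circle transform as a stand-in for whatever rim-finding tool the actual system used; the window size and circle parameters are placeholders:

```python
import cv2
import numpy as np

SEARCH_HALF_WIDTH_PX = 60   # search window around each calibrated reference (placeholder)

def find_rim_near(gray, ref_xy):
    """Look for one bright can rim only inside a window around its reference position."""
    rx, ry = ref_xy
    x0, y0 = max(rx - SEARCH_HALF_WIDTH_PX, 0), max(ry - SEARCH_HALF_WIDTH_PX, 0)
    roi = gray[y0 : ry + SEARCH_HALF_WIDTH_PX, x0 : rx + SEARCH_HALF_WIDTH_PX]
    circles = cv2.HoughCircles(
        roi, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
        param1=100, param2=30, minRadius=30, maxRadius=70,
    )
    if circles is None:
        return None                      # no rim found: flag for the operator
    cx, cy, r = circles[0][0]
    return (x0 + cx, y0 + cy, r)         # center reported in full-image coordinates
```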

Third, the picture frame’s position and orientation were easy to find, and they limited the search range for subsequent layers in the pallet stack.

Last, a different program was used for each layer of material on the pallet stack, so that the visual component detection and location could be tuned for each layer.

The vision system communicates with the robot’s motion control system via RS-232. Once the vision system locates each layer and element, the robot’s motions are automatic, meaning there is no visual feedback to correct and control the motion.
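The serial link itself is straightforward. A sketch of sending one located position to the motion controller over RS-232, assuming the pyserial package and a hypothetical comma-separated message format (the article does not specify the actual protocol or port settings):

```python
import serial   # pyserial

def send_target(port, x_mm, y_mm, angle_deg):
    """Send one pick position to the robot controller and wait for its reply."""
    with serial.Serial(port, baudrate=9600, timeout=2) as link:
        # Hypothetical message format: "MOVE,<x>,<y>,<angle>\r\n"
        msg = f"MOVE,{x_mm:.1f},{y_mm:.1f},{angle_deg:.1f}\r\n"
        link.write(msg.encode("ascii"))
        reply = link.readline().decode("ascii").strip()   # e.g. "ACK" or "ERR"
        return reply

if __name__ == "__main__":
    print(send_target("/dev/ttyS0", 412.5, 187.0, 0.0))
```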

These examples show the wide range of applications for machine vision in automation, from simple measurements on stamped parts to robot guidance. Machine vision is now quite easy to add to your process, and many of the concepts and methods are familiar from other control systems.

Author Information

Ben Dawson is director of strategic development, and Wes Philbrook is senior principal software engineer at Dalsa IPD. Contact them by email at bdawson@goipd.com.