How to Inspect Closures with Machine Vision
An explanation of the “template” and “features” methods of closure inspection and the role of vision systems in automating the process.
Closures, such as plastic caps for food, cosmetic, and pharmaceutical containers, must be individually inspected for defects. Possible defects include out-of-roundness, short shots (missing material), liner defects, or contamination. Inspection rates can reach 1,200 closures per minute, so a machine vision inspection system is required.
There are two common approaches to closure inspection. “Template” methods compare an image built from known good parts, a “golden template,” to an image of the part being inspected and report differences. “Features” methods measure parameters such as part dimensions and intensity distributions and use these measures to make a good/bad decision. Both methods start by taking a digital image of the parts to be inspected.
There are many factors in getting an acceptable digital image of the parts, including strong and steady illumination of the parts and a camera with a fast shutter speed to reduce image blurring caused by part motion. For the purposes of the explanations presented in this article, let’s assume an acceptable image has been acquired.
Template methods are fairly common in industry because the inspection can be set up simply by showing the machine vision system (MVS) 30 or 40 known good parts, an approach also known as “Train by Showing.” In this method, the MVS builds a golden template consisting of an “average image” of the known good part images and a “variance image” that specifies the “natural” or allowed variation of the part’s intensity (or color) at each point in the image.
Each part is located in the input image and its precise position and angle are determined. The average and variance images are then translated and rotated to precisely fit over the input part image. The input part image and average image are subtracted to show any differences that could be defects, and then the variance image is used, pixel by pixel, to detect differences that are greater than the natural variation at that point on the part. Template methods are easy to set up and work well when the input part can be precisely aligned with the average and variance images, and when the natural variation of the part is small. In applications where the liner or seal ring on a circular closure can easily be aligned (no need to compute a rotation angle) and the inspection area has low variance, template methods work well. One of the more common applications of the template method is print inspection.
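The train-and-compare cycle described above can be sketched in a few lines of NumPy. This is a simplified illustration, not Dalsa’s implementation: the part is assumed to be already aligned with the template (no translation or rotation step), and the function names, the 3-sigma factor, and the noise floor are all assumptions for the sketch.

```python
import numpy as np

def train_template(good_images):
    """Build a 'golden template': a per-pixel average image and a
    per-pixel standard-deviation ('variance') image from known good parts."""
    stack = np.stack(good_images).astype(float)
    return stack.mean(axis=0), stack.std(axis=0)

def inspect(part, mean_img, std_img, k=3.0, floor=2.0):
    """Flag pixels whose difference from the average exceeds k times the
    trained natural variation. 'floor' keeps near-zero-variance pixels
    from flagging on camera noise. Returns a boolean defect mask."""
    diff = np.abs(part.astype(float) - mean_img)
    return diff > k * np.maximum(std_img, floor)

# Hypothetical data: 40 noisy "good" images, then a part with a dark blemish
rng = np.random.default_rng(0)
good = [100 + rng.normal(0, 2, (64, 64)) for _ in range(40)]
mean_img, std_img = train_template(good)

part = 100 + rng.normal(0, 2, (64, 64))
part[30:34, 30:34] = 40          # simulated defect, e.g. contamination
mask = inspect(part, mean_img, std_img)
print("defect pixels flagged:", int(mask.sum()))
```

Because the comparison is per pixel, the mask also tells you *where* the defect is, which is one of the template method’s strengths.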
Template methods don’t work well when the part cannot be precisely aligned, the natural variation of the part is large, or precise measures must be taken. The rotation to align the template with the part image is slow and adds re-sampling noise, so template methods are typically not used where computational resources are limited.
Extracting measures or features from the part image and comparing them with measures taken from known good parts is known as the feature method. This method is appropriate for measuring things such as closure dimensions, roundness (ovality), and seal or bead width, and can be designed without the precise alignment required by template methods. Some feature methods can handle large changes in part appearance or natural variation. For example, features based on part edges (edges appear in an image as a sudden intensity change) are tolerant of changes in part reflectivity and color, as well as changes in part illumination.
Rotationally invariant features, such as intensity or color histograms, moments, ovality, and perimeters, can quickly pinpoint contamination, flags, and other defects in a closure. With the feature method, you can’t tell where the defect is located on the part (as can be done with the template method), but defect location is not something that concerns most manufacturers. The greater advantage of the feature method is that no rotational alignment is required, meaning that the features are fast to compute and are the method of choice on smart cameras (such as Dalsa’s BOA).
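A normalized intensity histogram is one of the rotationally invariant features mentioned above: rotating the part reshuffles pixel positions but leaves the intensity distribution unchanged. The sketch below, with assumed function names and an assumed chi-squared-style distance, shows a histogram feature detecting a contamination patch while ignoring rotation.

```python
import numpy as np

def histogram_feature(img, bins=32):
    """Normalized intensity histogram: a rotationally invariant feature."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def feature_distance(h1, h2):
    """Chi-squared-style distance between two histogram features."""
    denom = h1 + h2
    denom[denom == 0] = 1.0          # avoid division by zero in empty bins
    return 0.5 * np.sum((h1 - h2) ** 2 / denom)

# Hypothetical images: a good part, the same part rotated, a contaminated part
rng = np.random.default_rng(1)
good = np.clip(rng.normal(128, 10, (64, 64)), 0, 255)
rotated = np.rot90(good)             # same pixels, so the same histogram
bad = good.copy()
bad[:8, :8] = 20                     # dark contamination patch

h_good = histogram_feature(good)
print("distance to rotated copy:", feature_distance(h_good, histogram_feature(rotated)))
print("distance to bad part:   ", feature_distance(h_good, histogram_feature(bad)))
```

Note that the histogram says a defect exists but not where it is, matching the trade-off described above; in exchange, no rotational alignment step is needed, so the feature is cheap enough for a smart camera.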
Feature methods can require more work to set up, as measures and tolerances specific to the type of closure must be programmed into the MVS. Modern MVS software has made this type of programming easier through the use of graphical user interfaces. (Dalsa’s iNspect and Sherlock software are examples of this kind of software.)
In the “Bottle cap analysis” image, feature methods measure the circularity of a bottle cap liner. The approximate center of the cap liner is found. Then a “spoke tool” (green lines) measures liner edge positions at 16 locations (red crosses on the green lines). A circle is fit and circularity is computed as (minimum radius/maximum radius) * 100, where the minimum and maximum radii are measured during the circle fit.
Deviations from circularity indicate a defective liner – one that is missing, distorted, etc. This inspection will, however, miss edge defects that are smaller than the spacing between the spoke measurements, but such defects are rare. It does not inspect the seal surface for contamination, but this could be done by looking for pixels that are too high or too low in intensity within the circle defined by the seal edges. Note that these methods are fast, in part because they are rotationally invariant.
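The circle fit and circularity measure from the spoke-tool example can be sketched as follows. This assumes the 16 edge points have already been found by the spokes; the algebraic (Kasa) least-squares circle fit and the function names are illustrative choices, not the specific algorithm used in the article’s system, but the circularity formula is the one given above: (minimum radius / maximum radius) * 100.

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit to edge points.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def circularity(pts):
    """(min radius / max radius) * 100, per the spoke-tool description."""
    cx, cy, _ = fit_circle(pts)
    radii = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    return 100.0 * radii.min() / radii.max()

# Hypothetical edge points: 16 spoke hits on a liner edge
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
round_liner = np.column_stack([50 + 20 * np.cos(angles),
                               50 + 20 * np.sin(angles)])
dented = round_liner.copy()
dented[3] = (50 + 14 * np.cos(angles[3]),    # one spoke hits a dent
             50 + 14 * np.sin(angles[3]))

print("round liner:", round(circularity(round_liner), 1))
print("dented liner:", round(circularity(dented), 1))
```

A perfect circle scores 100; the dented liner scores well below, so a simple threshold on this one number makes the good/bad decision.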
In practice, both methods can be used for a variety of applications. For example, template methods can be used to inspect closure printing, as it is difficult to specify features that measure text or logo defects. Invariant features and measurement features are used to look for liner defects such as flags, color shifts, and foreign matter. “Train by showing” systems can be built that use both template and feature methods, but they often have to be “tuned” for a particular closure type by adding additional features.
– Ben Dawson is director of strategic development at Dalsa Corporation. He can be reached at email@example.com. www.dalsa.com
See other machine vision tutorials and feature articles in the June/July 2010 issue of Control Engineering, Machine Vision / Inside Machines section.