Digital Cameras in Control Applications

Machine vision is an increasingly valuable technology for industrial control, with uses in automated quality inspection, error detection, parametric measurements, and automated assembly. A key component of such systems, affecting the cost, speed, and accuracy of applied machine vision, is the digital camera.

By Petko Dinev, Imperx April 1, 2007

Understanding camera specifications and their importance in various vision tasks is an essential step toward applying machine vision to industrial control.

Machine vision systems have three main elements: an image processor, a frame grabber, and a camera. The image processor is the programmable computer element that works with the stored image to extract whatever information the application requires. The frame grabber collects the high-resolution images that the processor needs to access. Both are relatively straightforward electronic devices where memory depth and processing speed are the main parameters.

The key, and most complex, element of a machine-vision system is the camera. Most systems now employ digital cameras, which use charge-coupled devices (CCDs) as the image sensors. A CCD consists of an array of square photosensitive cells that convert incoming photons to electrons and accumulate the resulting charge. Cells are wired in series, forming rows and columns, with each cell representing one picture element, or pixel.

During image readout, control lines in the CCD cause cells to transfer their charge to an adjacent cell in a row or column, moving the accumulated charge along in bucket-brigade fashion. Reading an image from a digital camera thus consists of many repeated row-and-column transfers that ultimately move cell contents past a charge sensor and digitizer to produce the camera output one pixel at a time.
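A toy model can make the bucket-brigade readout concrete. The sketch below (an illustration only, not a real sensor driver) shifts whole rows into a serial register, then shifts the register out one charge at a time past the output stage, which is why readout time grows with pixel count:

```python
# Toy model of CCD bucket-brigade readout: each parallel shift moves a whole
# row into the serial register; each serial shift moves one charge past the
# output amplifier, producing the camera output one pixel at a time.

def read_out(sensor):
    """Read a 2-D list of accumulated charges, one pixel at a time."""
    rows = [row[:] for row in sensor]   # work on a copy of the cell charges
    output = []
    while rows:
        serial_register = rows.pop(0)   # parallel shift: row -> serial register
        while serial_register:
            # serial shift: one charge reaches the sensor/digitizer
            output.append(serial_register.pop(0))
    return output

frame = [[10, 20], [30, 40]]
print(read_out(frame))   # pixels emerge in row-major order: [10, 20, 30, 40]
```

The nested shifting means a full frame needs on the order of one serial transfer per pixel, which is the speed limit the next section discusses.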

CCD image sensors consist of rectangular arrays of photosensitive elements.

Resolution vs. speed

One of a digital camera’s primary parameters is its resolution, which has two components: the number of sensing elements (pixels) in the CCD array, and the size of each sensing element. Pixel counts can range from a few hundred thousand to many millions. Element sizes typically run from 5 to 12 µm on each edge.

A second key parameter is frame rate, or the speed with which the camera can deliver successive images. Because row-and-column readout limits speed, the pixel count and frame rate of a camera are intertwined: the more pixels a camera offers, the slower its frame rate. The rule is not hard-and-fast, however. A finer-geometry semiconductor process usually allows faster shift rates, so two cameras of the same pixel count could have significantly different frame rates if they use CCDs made with different processes. Also, camera sensors may be designed to break the image into sections for simultaneous readout through multiple ports. Breaking the image into four equal sections, for example, can speed the image readout by a factor of four. It is also possible under software control to read out only an “area of interest” in the image rather than the full sensor array, reducing the transfer time.
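The resolution/frame-rate tradeoff reduces to simple arithmetic. A minimal sketch, using an assumed per-port pixel clock (the 40 MHz figure is hypothetical, not from the article), shows how multi-port readout and area-of-interest readout both raise the achievable frame rate:

```python
# Back-of-the-envelope frame-rate estimate: readout time is roughly pixel
# count divided by the per-port pixel clock, so reading sections through
# multiple ports, or reading only an area of interest, raises the rate.

def frame_rate(width, height, pixel_clock_hz, ports=1):
    pixels_per_port = (width * height) / ports   # sections read simultaneously
    return pixel_clock_hz / pixels_per_port      # frames per second

# 4-megapixel sensor with a hypothetical 40 MHz pixel clock per port
print(frame_rate(2048, 2048, 40e6))              # ~9.5 fps, single port
print(frame_rate(2048, 2048, 40e6, ports=4))     # ~38 fps with 4-port readout
print(frame_rate(512, 512, 40e6))                # ~153 fps area-of-interest
```

Note the four-port figure is exactly four times the single-port figure, matching the factor-of-four speedup described above.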

While resolution and frame rate are the camera parameters developers consider most often, several others merit investigation. One is dynamic range, or the number of bits per pixel. This parameter affects the memory size needed in the frame grabber as well as the arithmetic precision needed in the image processor. It also affects the sensor’s exposure latitude: a camera with only a few bits per pixel tolerates a narrower range of lighting conditions than one offering more bits.
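The frame-grabber memory impact of bit depth is easy to quantify. A rough sizing sketch (illustrative arithmetic, not a vendor specification) follows; it assumes pixels are stored rounded up to whole bytes, as is common in frame-grabber memories:

```python
# Rough frame-buffer sizing: memory per frame grows with resolution and with
# bits per pixel, one reason wider dynamic range raises frame-grabber cost.

def frame_bytes(width, height, bits_per_pixel):
    # assume each pixel is stored in a whole number of bytes
    bytes_per_pixel = (bits_per_pixel + 7) // 8
    return width * height * bytes_per_pixel

print(frame_bytes(1024, 1024, 8))    # 1048576: 1 MiB for an 8-bit megapixel image
print(frame_bytes(1024, 1024, 12))   # 2097152: 12-bit pixels occupy 2 bytes each
```

Stepping from 8 to 12 bits per pixel thus doubles the buffer size, even though the bit count rises only 50 percent.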

A sensor’s sensitivity also dictates the lighting conditions required for operation. Low light, or the need for fast shutter speeds to eliminate motion blur, requires a more sensitive camera. The camera’s wavelength-dependent sensitivity may also be important: depending on the application, infrared, ultraviolet, or even x-ray illumination may be needed, and the camera’s spectral sensitivity should match. Finally, a camera’s ability to produce color or monochrome images can be important.

These various parameters all interact to dictate a camera’s cost. Typically, larger pixel-count cameras are more expensive. Similarly, faster frame rates for a given resolution also tend to boost camera cost. Trying to simultaneously achieve high frame rates and high resolution usually requires cameras with multi-port readouts, which add cost and complexity.

Breaking the image area into sections that can be read out simultaneously allows faster frame rates.

Varying vision requirements

The right set of camera parameters for a given application depends on what the machine vision system is trying to achieve. Three common applications are visual inspection, contactless measurement, and identification and orientation of objects. Each has different vision requirements.

Inspection systems typically take an image and compare it to a template or “known-good” image to identify variations. Here, a high-quality image is often required for the image processor to make reliable comparisons. This means that the camera must offer high resolution and many bits per pixel. Color capability may also be required.

Contactless measurement systems take pictures of objects, then count the number of pixels the object occupies, translating that count into a dimensional value. High resolution may be required in such systems, but bits-per-pixel may not need to be as high. Often, the image processor extracts only edges or outlines from the image, so wide dynamic ranges and color typically are not needed.
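The pixel-counting step described above amounts to multiplying the object's extent in pixels by a known field-of-view scale factor. A minimal sketch, with assumed example numbers:

```python
# Contactless measurement sketch: convert a pixel count into a physical
# dimension using the known field of view of the camera line (example
# figures are assumed, not from any particular system).

def measure_mm(pixel_count, field_of_view_mm, sensor_pixels):
    mm_per_pixel = field_of_view_mm / sensor_pixels
    return pixel_count * mm_per_pixel

# object spans 1200 pixels of a 2048-pixel line viewing a 50 mm field
print(measure_mm(1200, 50.0, 2048))   # ≈ 29.3 mm
```

The achievable precision is one scale-factor step per pixel, which is why only resolution, and not bit depth, drives the camera choice here.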

Object identification and orientation applications have varying requirements. In many cases, the image processing system seeks to identify reference marks called fiducials in the image. The resolution required depends on the size of these marks relative to the overall image size. Identification applications may also need color capability.

Matching the application

Matching the camera to the application depends on performance as well as function. In an inspection application, for instance, the image area to inspect and the size of the defects to be detected set the camera’s resolution requirement. Finding small defects in large objects requires high resolution. One such system, used to rapidly inspect glass panels for high-definition plasma televisions, looks for defects as small as 5 µm on a panel 2.5 m wide. This system requires a dozen 11-megapixel cameras to image the entire sheet in one frame! A system for inspecting the screw threads on bottle tops, on the other hand, can work with much lower resolution, as defects must be much more substantial to compromise the bottle.

Measurement systems similarly depend on the size of the object involved and the precision needed to establish the resolution requirement. A system measuring the threads on a 10-mm-long machine screw with 1 µm precision, for example, will require an image with at least 10,000 pixels in a line. If the length need only be measured to millimeter precision, however, much lower resolution can be tolerated.
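The resolution rule of thumb behind that example is one pixel per unit of required precision (real systems may need more pixels, or fewer when sub-pixel edge fitting is used):

```python
import math

# Pixels needed along one line to resolve an object to a given precision,
# assuming one pixel per precision step (a simplification; see note above).

def pixels_per_line(object_size_um, precision_um):
    return math.ceil(object_size_um / precision_um)

print(pixels_per_line(10_000, 1))      # 10 mm at 1 µm precision -> 10000 pixels
print(pixels_per_line(10_000, 1000))   # millimeter precision -> only 10 pixels
```

The thousand-fold gap between the two cases shows how strongly the precision target drives camera cost.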

Identification-system requirements can vary widely, depending on the nature of the matching template. A system to verify that pills being loaded into bottles are the right type (a safety feature in pharmaceutical manufacturing) may need to identify general shape, end cap color, and visible markings at a fairly modest resolution. A system for automated assembly of circuit boards, on the other hand, may need very high resolution. The system needs to measure positions of fiducial marks on boards in an assembly frame with high accuracy to control movement of component-placement arms.

Frame rate sets throughput

In all these systems, the camera frame rate establishes system throughput. The higher the frame rate, the more inspections, measurements, or identifications the system can accomplish in a given time. Because throughput affects manufacturing cost, the tendency is to choose the fastest camera available.

The camera is not the only system element to be considered, however. Frame grabber and image processor speeds also may create limits. For instance, if image processing requirements are complex, a simple embedded processor may be unable to complete them as fast as the camera can supply frames. Thus, high throughput increases camera and other system costs.

Similarly, resolution requirements affect costs beyond the camera. Optics needed for larger image areas and finer details are more expensive. In addition, the optical design becomes more critical as better images are needed. Stray light in the wrong place can easily compromise system accuracy.

Resolution and frame rate thus have compound effects on system costs, so the benefits of machine vision systems in manufacturing must be evaluated carefully. Early detection of errors saves wasted effort and materials, but that savings must offset the cost of a vision system with the required throughput to be practical. Developers will need to determine the tradeoffs between vision system performance, system throughput, and cost savings to arrive at the right combination for their machine-vision systems.

Unfortunately for many installations, machine vision resolution and throughput requirements may vary over time. For instance, changing product dimensions or production line reconfigurations may force replacement of machine vision cameras if they cannot match the new requirements. One way to avoid such replacements is to use a programmable camera in the first place. Programmable cameras allow users to change effective resolution and frame rates under software control so they can be matched to the application without an equipment change.

The interplay of performance parameters with system costs makes the evaluation of cameras for control applications a challenging exercise. Programmable cameras can help by adding flexibility to the solution, but they are not a substitute for sound engineering. Developers need to understand application needs in detail, along with the benefits a machine-vision system will offer, to determine the optimum combination of camera parameters.

Author Information

Petko Dinev is president of Imperx.

