Machine Vision Lens Selection Makes the Difference

By Timne Bilton, Edmund Optics, December 1, 2007

The role of machine vision in controlling industrial processes continues to expand, especially in the arenas of robot guidance, object recognition and quality assurance. Today’s sophisticated vision systems go beyond basic blob analysis—distinguishing a part from a pile and establishing its orientation—to providing information for subsequent functions such as moving an object from one location to another.

For robotic systems used in assembly or high-volume inspection operations, such as automotive production and inspection lines, the conveyor belt is generally the point of reference. Here, robots perform two jobs: identification and transportation.

As in most machine-vision applications, lighting control is critically important. Robotic vision systems also demand a high degree of repeatability; therefore, reducing mounting jitter to provide a clear image is important.

In mass unit inspection lines, such as in the pharmaceuticals industry, the vision system must be capable of identifying defective packages, unreadable labels and product absence. The vision system must quickly discern and measure square, circular, and oblong objects with a high degree of accuracy. Maintaining uniform packaging appearance and color helps increase the precision of the machine vision system. For food inspection systems, size, color, density and shape of the product typically are determined by multi-element inspection. Multi-element machine-vision systems may employ both color and monochrome cameras and typically use structured illumination to establish the profile and internal makeup of a product.

Although the camera, analysis software, and illumination are important factors in machine vision systems, perhaps the most critical component is the imaging lens. For the system to reach its full capability, the lens must be appropriate for the job. When choosing a lens for a control setup, machine vision integrators should consider four major factors:

The type of object to be detected and its characteristics;

Depth of field or focus;

Mounting and detecting space;

Operating environment.

Analyzing these four areas optimizes the selection of possible lenses for a specific application.

Primary magnification is the ratio of the image size on the sensor to that of the actual object imaged.

Object characteristics

Before selecting a lens for a machine vision system, a system integrator must define the object and surrounding area for analysis. This visible area is called the field of view (FOV). It can be measured either diagonally or horizontally. Typically, there is a 4:3 ratio between the horizontal and vertical FOV dimensions; this ratio depends on the dimensions of the camera sensor’s active area. Sensor size is important in determining the primary magnification (PMAG) required to obtain the desired field of view. PMAG is defined as the ratio of sensor size to FOV and is the “work” done by the lens. It should be taken into account when determining if a lens is appropriate.

PMAG = Sensor size / FOV

Lens magnification is extremely important when pairing a lens with cameras of different sensor sizes; however, lens magnification should not be confused with microscope magnification, which is determined by the length of the optical tube and the objective focal length, and which does not take the size of the camera sensor into account.

System magnification (SMAG) is the product of the PMAG and the monitor-to-sensor size ratio. It is the total magnification from the object to the image on the monitor, and is the “work” done by the whole system. System magnification is useful when the on-screen size of the object comes into play.

SMAG = PMAG x (Monitor size / Sensor size)
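
To make the two formulas concrete, the short sketch below works through a hypothetical setup. The sensor, FOV, and monitor dimensions are illustrative values, not specifications for any particular hardware.

```python
# Illustrative sketch of the PMAG and SMAG formulas above. All values
# are hypothetical horizontal dimensions in millimeters, not data for
# any particular camera or monitor.

def pmag(sensor_size_mm: float, fov_mm: float) -> float:
    """Primary magnification: sensor size divided by field of view."""
    return sensor_size_mm / fov_mm

def smag(pmag_value: float, monitor_size_mm: float, sensor_size_mm: float) -> float:
    """System magnification: PMAG times the monitor-to-sensor size ratio."""
    return pmag_value * (monitor_size_mm / sensor_size_mm)

sensor = 6.4      # horizontal width of a 1/2-inch sensor, mm
fov = 64.0        # desired horizontal field of view, mm
monitor = 304.8   # horizontal width of a 15-inch 4:3 monitor, mm

p = pmag(sensor, fov)
print(f"PMAG = {p:.2f}x")                          # PMAG = 0.10x
print(f"SMAG = {smag(p, monitor, sensor):.1f}x")   # SMAG = 4.8x
```

Note that the same lens paired with a larger sensor yields a wider FOV at the same PMAG, which is why sensor size must be fixed before the lens is chosen.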

Characteristics of the object are also important. The ability of the lens to resolve features in an object depends on how much contrast exists between the features. One way to determine the system resolution, or the smallest resolvable feature of the object, is to use resolution targets such as Ronchi rulings or the USAF test target. These rulings define features as line pairs—one white line and one black line of equal width. Other targets may use circles or a dot grid.

The ability of the lens to distinguish line pairs or spaced dots of a particular width under specific lighting conditions defines lens resolution. Resolution is often displayed graphically by the modulation transfer function (MTF).

The MTF graph plots the relative contrast available at a specific line-pair frequency. Distortion, chromatic aberration, and other wavefront errors lower the curve from the ideal, diffraction-limited performance. Lens specifications sometimes list object resolution in terms of line pairs per millimeter (lp/mm); dividing 1,000 by this value provides an estimate of the lens’ object resolution in microns.
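
The lp/mm rule of thumb is easy to misread, so here is a minimal worked conversion. The 50 lp/mm figure is an arbitrary example value; strictly, the result is the width of one full line pair (one black plus one white line) at the object.

```python
# Converting a lens spec in line pairs per millimeter (lp/mm) into an
# approximate feature size in microns, per the rule of thumb above.
# The 50 lp/mm value is an arbitrary example.

lp_per_mm = 50.0
line_pair_um = 1000.0 / lp_per_mm  # 1,000 microns per millimeter
print(f"{lp_per_mm:.0f} lp/mm resolves ~{line_pair_um:.0f} micron line pairs")
```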

When determining surface topography, more than one camera and lens typically are used, and knowing the amount of aberration inherent in a lens is valuable. Aberration is an optical error in the lens that causes a difference in image quality at different points within the image. Topography applications often involve a laser line or other illumination in the image to ensure the accuracy of a measurement. Some software programs can eliminate errors, such as distortion, caused by the lens, so only the topographical data is evident in the final image.

Large format and area scan camera lenses are excellent in control applications because of their high resolution, low distortion, and limited chromatic aberration. The wide FOV and compatibility with large format sensors make these lenses valuable in applications including inspection of webs, LCDs, and food and beverages.

Space constraints

The amount of floor space needed for an automated machine vision system and assembly line can vary from a few meters to an entire warehouse. Working distance is the space between the object and the front of the camera lens when the image is in focus. It determines the spacing required between the vision system and the equipment it works with. In some applications, such as looking through a vacuum oven port, the working distance is more flexible and can be achieved with either close-focus lenses or long-working-distance video microscope lenses. In other applications, such as high-power microscopic inspection, working distance is reduced to a few inches.

Working distance can be altered, within limits, by refocusing the lens. Infinite conjugate lenses can focus from a finite minimum working distance out to infinity. Finite conjugate lenses have a specific working distance range.

Housing and mounting constraints, including protective shielding for harsh environments, must be flexible enough to allow the working distance to be adjusted. For example, in many installations, the region of interest on a product or line of products may shift during the inspection process. This may require a vision system or vision component that can be adjusted to accommodate several sensing conditions. Many camera lenses come with stabilization mounts, but when object space (the area between object and lens) is limited, changing the image space (the area between lens and image) can modify the working distance.

Image space can be altered in two ways: with zoom functions or ring spacers. Zoom lenses adjust the field of view of a camera system without changing the working distance. Some zoom systems are divided into components that can be matched to accommodate a specific setup. For metrology and microscopy applications requiring magnification on the order of microns, these lens systems can be coupled with microscope objectives. Zoom lenses retain high resolution, but can become costly.

Alternatively, ring spacers are economical and will shorten the working distance, decreasing the FOV of the lens. Unfortunately, they can introduce distortion and reduce resolution. Therefore, spacers are not recommended unless the adjustment needed is less than 5 mm or the lens is designed to work with spacers.
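
To see why adding image-space extension shortens the working distance, a first-order thin-lens calculation is enough. This is a sketch under the thin-lens approximation; the 25 mm focal length, image distance, and 5 mm spacer are hypothetical values, and real multi-element lenses will deviate from these numbers.

```python
# First-order (thin-lens) sketch of how a ring spacer shortens working
# distance: adding extension on the image side forces the in-focus
# object distance closer and raises magnification. Focal length, image
# distance, and spacer thickness below are hypothetical values.

def object_distance(f_mm: float, image_dist_mm: float) -> float:
    """Solve the thin-lens equation 1/f = 1/s + 1/s' for object distance s."""
    return 1.0 / (1.0 / f_mm - 1.0 / image_dist_mm)

f = 25.0       # focal length, mm
s_img = 27.0   # image distance at the nominal focus setting, mm
spacer = 5.0   # added ring spacer thickness, mm

before = object_distance(f, s_img)
after = object_distance(f, s_img + spacer)
print(f"working distance: {before:.0f} mm -> {after:.0f} mm")   # 338 mm -> 114 mm
print(f"magnification:    {s_img / before:.2f}x -> {(s_img + spacer) / after:.2f}x")
```

The higher magnification after the spacer is what shrinks the FOV, as noted above.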

Basic lens design factors combine to determine image characteristics.

Depth of focus

An optical system’s performance is based upon the amount of allowable image blur. Blur can result from a position shift in the object plane or the image plane. Depth of focus refers to the limit of acceptable blur caused when the detector shifts. It depends on the working F-number (F/#), a measure of the light-gathering ability of a lens. F/# increases as the lens aperture is closed. Closing the aperture, or increasing the F/#, increases the system’s depth of field, but reduces the amount of illumination reaching the sensor; light levels should be increased to compensate. Lens specifications that list depth of focus should also provide the F/# at which the value was measured.
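
Because the illumination reaching the sensor falls off roughly with the square of the F/#, the exposure penalty of stopping down can be estimated directly. A minimal sketch, assuming the usual 1/(F/#)^2 scaling; the two F-numbers are example values:

```python
# Stopping down for depth of field has a predictable exposure cost:
# relative illumination falls off roughly as 1/(F/#)^2. The F-numbers
# below are example values.
import math

n1, n2 = 4.0, 8.0                 # stopping down from F/4 to F/8
light_ratio = (n1 / n2) ** 2      # fraction of light remaining
stops_lost = 2.0 * math.log2(n2 / n1)
print(f"F/{n1:.0f} -> F/{n2:.0f}: {light_ratio:.0%} of the light, "
      f"a {stops_lost:.0f}-stop loss")   # 25% of the light, a 2-stop loss
```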

Depth of field (DOF) refers to the blur caused by shifting the object. DOF is the maximum object depth that can be maintained entirely in focus. It is also the amount of object movement (in and out from best focus) allowable while maintaining a desired amount of focus. As an object is placed closer or farther than the working distance, it goes out of focus, and both resolution and contrast suffer. For this reason, DOF is associated with a defined resolution and contrast. As with depth of focus, DOF can be increased by closing the lens aperture (increasing the F/#), and illumination will need to be increased accordingly.

A lens’ DOF range depends on its effective focal length, the allowable blur diameter, and the nominal back focal length. Some lenses are designed to be hyperfocal or hyperfocal-capable, meaning that the far point of the focus range extends out to infinity. This condition is often exhibited in fixed-focal-length lenses. The depth of field is deep, but can be altered with the aid of an iris.
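
The hyperfocal condition mentioned above can be made concrete with the standard first-order photographic formulas. This is a sketch, not data for a specific lens: the focal length, F/#, and allowable blur diameter are illustrative, and the near/far limits use the common approximation that holds when the focus distance is much larger than the focal length.

```python
# First-order hyperfocal / depth-of-field sketch. The focal length,
# F/#, and allowable blur (circle of confusion) are illustrative
# values; the near/far limits use the standard approximation valid
# when the focus distance is much larger than the focal length.

def hyperfocal_mm(f_mm: float, f_number: float, blur_mm: float) -> float:
    """Hyperfocal distance: focus here and the far limit reaches infinity."""
    return f_mm ** 2 / (f_number * blur_mm) + f_mm

def dof_limits_mm(h_mm: float, focus_mm: float) -> tuple[float, float]:
    """Approximate near and far in-focus limits for a given focus distance."""
    near = h_mm * focus_mm / (h_mm + focus_mm)
    far = h_mm * focus_mm / (h_mm - focus_mm) if focus_mm < h_mm else float("inf")
    return near, far

h = hyperfocal_mm(f_mm=25.0, f_number=8.0, blur_mm=0.02)   # ~3.9 m
near, far = dof_limits_mm(h, focus_mm=1000.0)              # focused at 1 m
print(f"hyperfocal ~ {h / 1000:.1f} m; in focus from {near:.0f} mm to {far:.0f} mm")
```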

Telecentric lenses should not be confused with lenses having a large depth of field. Telecentric lenses enable a machine vision system to control its magnification, removing perspective error so that all equal-sized objects are imaged at the same height regardless of distance. An example application for such a lens is analyzing computer circuit boards. Telecentric lenses often have a range of working distances with a finite depth of field surrounding each working distance point. Integrators should look at both the working distance range and the depth of field when considering whether a telecentric lens is applicable to a project.

In some cases, as with pipe inspection, a large depth of field can be produced with varifocal lenses. Varifocals are similar to zoom lenses and are used when the focal length needs to be adjusted frequently. These lenses are often motorized to ensure a smooth shift in the focal plane. Using such a lens, the entire length of a pipe can be scanned, essentially section by section, to check for imperfections by adjusting the focal length. Unlike with zoom lenses, however, the working distance will also change, and positioning readjustments may be necessary.

The importance of environment

Environmental concerns for a machine vision system include object reflectivity, lighting, temperature, vibration, and contaminants. Reflection from an object can induce glare and blur its features. Baffles in the lens housing and lens hoods can reduce glare caused by scattered light. Baffles are opaque disks with carefully sized central holes strategically positioned to limit light paths that reach the sensor. Polarizing or diffusing the light source also helps reduce or remove hot spots from an object.

Lighting, especially if it is monochromatic, can boost the contrast of an object and maximize a lens’ image quality. Contrast is especially important when a monochrome camera is used, and is produced either through an additive or subtractive process. In an additive process, a monochromatic light source and camera lens filter are matched to the color of the medium in which the object under analysis resides. The area around the object reflects or transmits the light in the system and appears brighter than the object. This technique is useful in applications where a gel or colored fluid is backlit and examined for particulates.

Conversely, in a subtractive system, a filter blocking the reflected light surrounding an item is used. This makes the object appear lighter than all items around it. Applications such as pill inspection, where the color of the object may be its only distinguishable feature, utilize filters.

High temperature environments can cause problems through thermal expansion of optical components in a lens. Not all lenses tolerate temperature variation. Lenses with long working distances are best when inspecting hot objects.

Another consideration is vibration, which can often be reduced by mounting the lens directly to an isolation platform or table rather than to the camera. Heavier camera lenses typically are offered with mounting clamps. If a lens cannot be mounted directly to a breadboard or similar isolation table, the object to which the lens is mounted can be placed on an isolation platform. Boom stands placed on isolation platforms are a prime example of such camera/lens mounts.

Contaminants in industrial environments can degrade the surface of a lens. Harsh environment optics (HEO) products are specially designed to provide high quality images while sustaining long-term exposure to severe conditions. Because the optics are hermetically sealed, HEOs can withstand liquid submersion, resist abrasion and corrosion, are protected against dust exposure, and resist mechanical impact.

Camera lenses have a profound effect on machine vision systems. To select the proper lens for an application, a machine vision integrator must know the size, features and reflectivity of the object under analysis. He or she must also estimate the working distance range and the depth of field needed to view an item of significant thickness. When altering object and image space, an integrator can make choices that make a system more flexible, but also degrade overall performance. As environments become more dynamic and demanding, selecting lenses that can weather the challenges is critical.

Author Information

Timne Bilton is an applications engineer with Edmund Optics. Contact her at tbilton@edmundoptics.com .

