What is machine vision, and how can it help?

Understanding how machine vision works can help you determine whether it will resolve specific application challenges in manufacturing or processing.

By Frank Lamb December 6, 2018

People are often confused about what machine vision can and cannot do for a manufacturing line or process. Understanding how it works can help in deciding whether it will resolve problems with an application. So exactly what is machine vision, and how does it work?

Machine vision is the use of a camera or multiple cameras to inspect and analyze objects automatically, usually in an industrial or production environment. The data acquired then can be used to control a process or manufacturing activity. A typical application might be on an assembly line; after an operation is performed on a part, the camera is triggered to capture and process an image. The camera may be programmed to check the position of something, its color, size or shape, or whether the object is there or not. It also can look at and decipher a standard or 2-D matrix barcode or even read printed characters.

After the product has been inspected, a signal is usually generated to determine what to do with it. The part might be rejected into a container or an offshoot conveyor, or passed on through more assembly operations, tracking its inspection results through the system. In any case, machine vision systems can provide a lot more information about an object than simple absence/presence type sensors.
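
As a rough illustration of that kind of presence check and pass/fail decision, the sketch below uses the OpenCV library in Python; the image file name, threshold, and minimum pixel count are hypothetical placeholders, not values from any particular system.

```python
# A minimal sketch of a presence/size inspection, assuming OpenCV is installed.
import cv2

def inspect(image_path, min_area=5000, thresh=128):
    """Return 'PASS' if a sufficiently large bright object is present."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    # Separate object pixels from the background with a fixed threshold.
    _, binary = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)
    # Count how many pixels belong to the object.
    object_pixels = cv2.countNonZero(binary)
    return "PASS" if object_pixels >= min_area else "FAIL"

print(inspect("part_0001.png"))   # hypothetical image file
```

In a real system the image would come from a triggered camera acquisition rather than a file, and the result would drive a reject gate or be logged against the part as it moves through the line.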

Typical uses for machine vision include:

  • Quality assurance
  • Robot/machine guidance
  • Test and calibration
  • Real-time process control
  • Data collection
  • Machine monitoring
  • Sorting/counting.

Many manufacturers use automated machine vision instead of human inspectors because it is better suited to repetitive inspection tasks. It is faster, more objective, and works continuously. Machine vision systems can inspect hundreds or even thousands of parts per minute and provide more consistent and reliable inspection results than human inspectors.

By reducing defects, increasing yield, facilitating compliance with regulations and tracking parts with machine vision, manufacturers can save money and increase profitability.

An analogy for machine vision

A discrete photoeye is one of the most basic sensors in industrial automation; the reason we call it “discrete” or digital is that it has only two states: on or off (Figure 1).

The principal idea behind a diffuse photoeye is that it emits a beam of light and detects whether that light is reflected off an object. If no object is present, no light reflects back into the photoeye’s receiver. The receiver is wired to an electrical signal, usually 24 V. If an object is present, the signal turns on and can be used in a control system to make something happen. If the object is removed, the signal turns back off.
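
To make the analogy concrete, a diffuse photoeye amounts to a single light reading compared against a threshold, while a camera’s pixel array applies the same on/off idea to a whole grid of readings. A minimal sketch, with arbitrary made-up values, might look like this:

```python
import numpy as np

# One diffuse photoeye: a single reflected-light reading compared to a threshold.
reading = 0.72                      # normalized received light (hypothetical)
photoeye_output = reading > 0.5     # True = object present, False = absent

# An image sensor extends the same idea to a whole grid of readings, one per pixel.
pixels = np.random.rand(480, 640)   # stand-in for a normalized grayscale image
binary_image = pixels > 0.5         # every pixel becomes its own on/off decision

print(photoeye_output, binary_image.sum(), "pixels 'on'")
```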

Figure 3: Machine sensors make images using arrays of pixels. Courtesy: Frank Lamb, Automation Primer

The series of images in Figure 3 is only a small section of the image captured by the camera. This area is considered to be the “region of interest” for a particular inspection.

Machine vision cameras can use color-sensing pixels and often use much larger pixel arrays. Software tools are applied to the captured images to determine dimensions, edge locations, movement, and the relative positions of components to each other. (Figure 4 shows a CCD image.)
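
As one example of how such tools operate on a region of interest, the following OpenCV sketch crops an ROI and uses edge detection to estimate a width in pixels; the file name, ROI coordinates, and Canny thresholds are assumptions for illustration only.

```python
import cv2

img = cv2.imread("assembly.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image

# Restrict processing to a region of interest, as in Figure 3.
x, y, w, h = 200, 150, 120, 80        # ROI coordinates are placeholders
roi = img[y:y + h, x:x + w]

# Locate edges inside the ROI; the threshold pair is application-specific.
edges = cv2.Canny(roi, 50, 150)
edge_columns = edges.any(axis=0).nonzero()[0]
if edge_columns.size:
    # A rough width measurement in pixels between the outermost edges.
    width_px = int(edge_columns[-1] - edge_columns[0])
    print("measured width:", width_px, "pixels")
```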

Four main vision system components

A vision system has four main parts: lenses and lighting, the image sensor or camera, the processor, and a method of communicating results, whether through physical input/output (I/O) connections or other communications.

The lens captures the image and presents it to the sensor in the form of light. To optimize the vision system, the camera needs to be matched with the appropriate lens. Although there are many types of lenses, machine vision applications typically use a lens with a fixed focal length.

Three factors are an important part of the lens selection process (a rough focal-length estimate based on them is sketched after the list):

  1. Field of view
  2. Working distance
  3. Sensor size of the camera.
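
A common rule of thumb that ties these three factors together is that the required focal length is approximately the sensor size multiplied by the working distance and divided by the field of view. A small sketch of that estimate, with assumed example numbers, follows:

```python
def approximate_focal_length(sensor_size_mm, working_distance_mm, field_of_view_mm):
    """Thin-lens rule of thumb: f is roughly sensor size x working distance / field of view."""
    return sensor_size_mm * working_distance_mm / field_of_view_mm

# Example: a 1/2" sensor (about 6.4 mm wide) viewing a 100 mm field from 400 mm away.
print(round(approximate_focal_length(6.4, 400, 100), 1), "mm")   # about 25.6 mm
```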

There are many different methods of applying lighting to the image. The direction the light comes from, its brightness, and its color or wavelength compared to the color of the target are all important elements to consider when designing a machine vision environment. While lighting is an important part of getting a good image, two other settings affect how much light exposure an image receives. The lens has an adjustment called the aperture, which is opened or closed to let more or less light into the lens. The shutter or exposure time determines how long the image is imposed onto the array of pixels; in machine vision, the shutter is electronically controlled, usually on the order of milliseconds. Together, the aperture and the exposure time determine how much light reaches the pixel array for a given lighting setup.
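
Assuming the usual photographic relationship, the light reaching the sensor scales with the exposure time and inversely with the square of the lens f-number, so halving the f-number (opening the aperture) admits roughly four times the light. A tiny sketch with made-up values:

```python
def relative_exposure(exposure_time_ms, f_number):
    """Relative light on the sensor: proportional to shutter time, inverse to f-number squared."""
    return exposure_time_ms / (f_number ** 2)

# Halving the f-number admits about 4x the light for the same 2 ms shutter time;
# the numbers here are arbitrary examples.
print(relative_exposure(2.0, 8.0))   # 0.03125
print(relative_exposure(2.0, 4.0))   # 0.125
```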

After the image has been captured, software tools are applied. Some are applied before analysis (pre-processing), while others are used to determine the properties of the object being examined. In the pre-processing stage, effects can be applied to the image to sharpen edges, increase contrast, or fill spaces. This is done to improve the performance of the analysis tools that follow.
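
A sketch of those pre-processing steps using OpenCV might look like the following; the input file name, blend weights, and kernel sizes are arbitrary choices for illustration.

```python
import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image

# Sharpen edges with an unsharp mask: blend the original against a blurred copy.
blurred = cv2.GaussianBlur(img, (5, 5), 0)
sharpened = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)

# Increase contrast by spreading the gray levels across the full range.
contrasted = cv2.equalizeHist(sharpened)

# Fill small gaps or holes with a morphological closing.
kernel = np.ones((3, 3), np.uint8)
closed = cv2.morphologyEx(contrasted, cv2.MORPH_CLOSE, kernel)
```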

Machine vision target

The following is a list of some common tools that can be applied to obtain information about the target (two of them, pixel counting and blob detection, are sketched in code after the list):

  • Pixel counting: Counting the number of light or dark pixels in an object
  • Edge detection: Finding object edges
  • Gauging/metrology: Measuring object dimensions (in pixels, inches, or millimeters)
  • Pattern recognition or template matching: Finding, matching, and/or counting specific patterns, including objects that may be rotated, partially hidden by another object, or varying in size
  • Optical character recognition (OCR): Automated reading of text such as serial numbers
  • Barcode, data matrix, and “2-D barcode” reading: Acquiring the data contained in various bar-coding standards
  • Blob detection and extraction: Inspecting an image for discrete blobs of connected pixels (such as a black hole in a gray object) as image landmarks
  • Color analysis: Identifying parts, products, and items by color, assessing quality, and isolating features using color.
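
As a sketch of two of these tools, pixel counting and blob detection, the following OpenCV example thresholds an image, counts the light pixels, and extracts connected blobs with their areas and centroids; the input file name is a placeholder.

```python
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)          # hypothetical image
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Pixel counting: how many pixels are "light" in the thresholded image.
light_pixels = cv2.countNonZero(binary)
print("light pixels:", light_pixels)

# Blob detection: label connected regions and report their areas and centroids.
count, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, count):                                    # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    cx, cy = centroids[i]
    print(f"blob {i}: area={area} px at ({cx:.1f}, {cy:.1f})")
```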

The data acquired during an inspection often is compared against target values to determine a “pass/fail” or “go/no go” result. For example, with code or bar code verification, the read value is compared to the stored target value. For gauging, a measurement is compared against the nominal value and its tolerances.

For alphanumeric code verification, the OCR text value is compared to the proper or target value. When inspecting for blemishes, the measured size of each blemish may be compared to the maximum allowed by quality standards.
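
A minimal sketch of that comparison logic, with hypothetical nominal values and tolerances:

```python
def gauge_check(measured_mm, nominal_mm, tolerance_mm):
    """Compare a measured dimension against its nominal value and tolerance."""
    return "PASS" if abs(measured_mm - nominal_mm) <= tolerance_mm else "FAIL"

def code_check(read_value, expected_value):
    """Compare a decoded barcode or OCR string against the stored target value."""
    return "PASS" if read_value == expected_value else "FAIL"

print(gauge_check(25.07, 25.00, 0.10))        # PASS (within +/- 0.10 mm)
print(code_check("SN-004217", "SN-004217"))   # PASS (example serial number)
```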

Machine vision communications

After extracting the information using the processor and software tools, the information can be communicated to the control system using many of the standard industrial communication protocols. EtherNet/IP, Profinet, and Modbus TCP often are supported by major machine vision systems. Serial RS232 and RS485 based protocols are also common. Digital I/O often is built into the system for triggering and simple result reporting.
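
Fieldbus protocols such as EtherNet/IP or Profinet normally are handled by vendor drivers or libraries, so as a protocol-neutral illustration the sketch below simply reports an ASCII result string over a raw TCP connection; the IP address, port, and message format are assumptions, not part of any standard.

```python
import socket

def report_result(result: str, host: str = "192.168.0.50", port: int = 50000) -> None:
    """Send an ASCII result string to a PLC or line controller over plain TCP."""
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(f"{result}\r\n".encode("ascii"))

report_result("PASS")   # e.g., after an inspection completes
```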

Machine vision communication standards also are available.

Understanding the physics and capabilities of machine vision systems can help qualify whether an application is appropriate for camera-based systems. In general, a camera can see roughly what a human eye can see (sometimes more, sometimes less), but deciphering and reporting that information can be tricky. Using a vendor knowledgeable in the systems, lighting, and techniques can save a lot of time and money in the long run.

Frank Lamb is the founder of Automation Consulting LLC, the creator of Automation Primer, and a member of the Control Engineering Editorial Advisory Board. Edited by Mark T. Hoske, content manager, Control Engineering, CFE Media, mhoske@cfemedia.com.

KEYWORDS: Machine vision, automation tutorial

  • Machine vision basics
  • Vision system components and factors for selection
  • Units and communications for machine vision.

CONSIDER THIS

Complex sensing applications may be simpler with a machine vision system.



