Machine Vision Comes of Age

Vision technology is taking its place as a major sensing technology. Control engineers are finding it can be an indispensable tool for gathering system status information that can feed automatically into control systems—especially when there is motion involved. As vision technology has matured, image sensors have grown to multi-megapixel resolution, while frame grabbers have grown in sophistication and eventually merged into the camera's electronics.


When I first started dealing with machine vision technology about 35 years ago, digital cameras were a laboratory curiosity, frame-grabber boards were just arriving as commercial products, and image processing was a black art. As vision technology matured, image sensors grew to multi-megapixel resolution, while frame grabbers grew in sophistication and eventually merged into the camera’s electronics. Finally, cameras absorbed image-processing computers as well. Imaging software—originally a library of utilities for functions called “thresholding,” “blob analysis,” and other arcane operations—morphed into wizards that engineers could use to create useful applications even if they had only a rudimentary understanding of machine vision concepts.
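For readers unfamiliar with those arcane operations, the two named here are simple to sketch. This is a minimal illustration using NumPy and SciPy on a synthetic 8-bit grayscale image, not any particular vendor's toolkit:

```python
import numpy as np
from scipy import ndimage

# Synthetic 8-bit grayscale image: dark background with two bright "parts."
image = np.zeros((64, 64), dtype=np.uint8)
image[10:20, 10:20] = 200   # part 1: 10 x 10 pixels
image[40:55, 30:50] = 180   # part 2: 15 x 20 pixels

# Thresholding: separate foreground pixels from the dark background.
binary = image > 128

# Blob analysis: label connected regions, then measure each one.
labels, num_blobs = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, index=range(1, num_blobs + 1))

print(num_blobs)        # 2
print(sizes.tolist())   # [100.0, 300.0] -- pixel area of each blob
```

Modern wizard-driven software wraps steps like these behind a point-and-click interface, but the underlying pipeline is still threshold, segment, measure.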

Verifying the proper packing of aluminum cans in cartons using laser-based point scanners would require N+1 sensors, where N is the number of cans per case. A single smart camera can do the job.

Today, vision technology is ready to take its place as a major sensing technology. Control engineers are finding it can be an indispensable tool for gathering system status information that can feed automatically into control systems—especially when there is motion involved.

Many vision suppliers have dropped terms like “camera” and “machine vision system” in favor of terms like “vision appliance” and “vision sensor,” which better describe what their offerings can accomplish for engineers. Along the way, emphasis has shifted from system specifications to discovering what vision can do for you. What vision can do is collect huge amounts of real-world data very rapidly, and turn it into information concisely describing what engineers need to know.

With the new emphasis on what vision can do, instead of what it is, prices for some units have actually come down. Inexpensive units designed with just enough resources to do a specific class of tasks can now compete on price as well as performance with more traditional point-sensor arrays gathering real-time data in control applications. The advantage is that vision systems can be easier to set up, easier to train, and easier to maintain than complex multi-unit point-sensor arrays.

To be clear on the difference, point sensors measure a single parameter at a single location in space. Examples include thermocouples, inductive proximity sensors, position encoders, and laser beams. Vision systems, on the other hand, use area or line-scan camera technology to acquire data from a large number of points within a clearly defined field of view (FOV). They also cover a wide spectral range, from infrared (IR), through visible light (VIS), to ultraviolet (UV). This image-acquisition technology is backed up by powerful computing resources that extract exactly the information needed by the control system, whether that is measuring the temperature of a certain point on a motor, reporting the statistical distribution of quality-parameter variations, or checking proper placement of components on a printed circuit board.

Harley engineers catch the vision

The idea that vision, when used cleverly, can replace a surprising number of sensors is not new. Seven years ago I wrote an article on what was then a new robotic inspection system at the Harley-Davidson motor assembly plant. The system had been built by a Midwestern system integrator, whom I interviewed for the article.

Three ways to verify a threaded fastener: observe proper seating, count threads protruding from the assembly, or observe deformation of a lock washer.

Initially, the system was installed to verify correct installation of the twin-cam 88 engine timing chain. Needless to say, word spread fast among the engineers at Harley that a vision system was being installed and nearly everyone had an idea on how they would like to use it. By the time installation was complete, the system was programmed to perform 30 tests, including checking to make sure the sprockets were fully tightened down.

Ensuring that a fastener is tightened does not seem the sort of thing a vision system would be good at, but it is. There are at least three ways vision can verify that a nut is torqued in a manufacturing environment:

  • Verify proper seating — When an automated system installs a nut, it drives the nut home against something else, such as a washer or a captured part that the nut is intended to retain. The machine vision system can easily determine whether the nut is seated against that surface: if it is not, there will at least be a dark shadow exposing a gap, instead of a thin line where the nut meets the surface it should be pressed against.

  • Count threads — Today’s precise and repeatable automated manufacturing systems make every component the same, including the number of threads on any threaded fastener. By measuring the amount of thread protruding from a nut, a vision system can actually observe how much a bolt stretches when properly torqued.

  • Observe deformation — If the nut captures a split-ring or star lock washer, a wave washer, or even a captured deformable component, a vision system can see whether and by how much the captured part has deformed due to compression.
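The thread-counting check reduces, in image terms, to measuring how far the bolt protrudes past the nut face and comparing that against a calibrated value. Here is a minimal sketch assuming a binarized side-view pixel column; the function name, parameters, and numbers are all illustrative, not a specific system's API:

```python
import numpy as np

def thread_protrusion(profile, nut_top_row, px_per_mm):
    """Measure bolt protrusion above the nut face from one binarized
    side-view image column (True = bolt pixel). All parameters here
    are illustrative assumptions, not a vendor interface."""
    above = profile[:nut_top_row]          # rows above the nut face
    bolt_rows = np.count_nonzero(above)    # bolt pixels protruding
    return bolt_rows / px_per_mm           # protrusion in millimetres

# Synthetic column, top of image first: bolt tip occupies rows 25-39,
# the nut face sits at row 40.
profile = np.zeros(60, dtype=bool)
profile[25:40] = True        # 15 protruding bolt pixels
profile[40:60] = True        # nut body (ignored by the measurement)

mm = thread_protrusion(profile, nut_top_row=40, px_per_mm=10.0)
print(mm)  # 1.5 -- compare against the torque-calibrated tolerance band
```

Because today's automated assembly makes every bolt and nut the same, a protrusion outside the expected band flags an under- or over-torqued fastener.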

Since writing that article, I’ve seen a large number of applications where engineers used vision to solve difficult sensing problems by thinking outside the box. The box, for machine vision, is, of course, automated inspection.

Seeing outside the box

Everyone knows that you can use machine vision to inspect assemblies, gauge dimensions, and read logos. It’s out-of-the-box applications that are the most interesting, and potentially the most useful for control engineers.

For example, last year I ran across an application where an electric utility wanted to monitor the level in oil-filled insulators atop high-voltage transformers. To efficiently transport large amounts of electric power, utilities boost the voltage into the megavolt range. Dissipative losses in transmission lines scale with the current carried, and the current required to transport a certain amount of power is inversely proportional to the voltage. A transmission line carrying, say, 1.2 megavolts can carry 10,000 times the power per ampere that a transmission line operating at 120 V can before exceeding its safe current level. To boost the voltage, or return it to service level, utilities use transformers.
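The voltage arithmetic can be checked directly: for the same delivered power, current scales as 1/V, and resistive line loss scales as I²R. A quick sketch with illustrative numbers:

```python
# Same delivered power at two line voltages; resistive loss = I^2 * R.
power_w = 1.2e6          # 1.2 MW to deliver (illustrative figure)
i_low = power_w / 120.0      # current required at 120 V
i_high = power_w / 1.2e6     # current required at 1.2 MV

print(i_low / i_high)               # 10000.0 -- the 10,000x factor
print((i_low**2) / (i_high**2))     # 1e8 -- loss ratio for equal line R
```

The squared term is why utilities accept the cost and complexity of megavolt-class transformers: a 10,000-fold drop in current cuts dissipative loss by a factor of one hundred million for the same conductor.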

High-voltage transformers, however, are subject to power-robbing internal discharges. To prevent such discharges, transformer builders fill all the interior spaces with high-purity oil, which is a non-conductor. In addition, liquid oil resists damage from electrical discharges, so it has a much longer service life than solid insulators, such as epoxy or even glass.

The utility realized that the transformer oil picks up heat from the current-carrying copper transformer windings and transports it out to the case. If the oil level is low, the case will not be completely filled. The side wall will be warm up to the oil level, and cooler above it. The utility used an infrared camera to measure the height at which this abrupt temperature change occurred, and thereby measured the oil-fill level remotely without having to go past the safety fence enclosing the substation. Nothing had to be shut down, and a single picture could measure levels in many transformers. While the particular application was not automated at the time, it is easy to see how such a technique could be automated and used to control the level of any warm or cool liquid.
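Automating the utility's trick reduces to finding where a vertical temperature profile rises abruptly, scanning down the transformer case. A minimal sketch, assuming one averaged pixel column of thermal-image readings (synthetic data, hypothetical function name):

```python
import numpy as np

def oil_level_row(temps):
    """Return the row index of the steepest temperature jump in a
    top-to-bottom column of thermal readings (degrees C). The row
    maps to a physical height via the camera's calibration."""
    gradient = np.diff(temps)          # change between adjacent rows
    return int(np.argmax(gradient))    # largest warm-up going downward

# Synthetic column, top of case first: cool above the oil, warm below it.
column = np.array([25.0] * 30 + [45.0] * 70)   # oil surface near row 30
print(oil_level_row(column))  # 29 -- last cool row before the warm oil
```

The same gradient search works for any warm or cool liquid behind a thermally conductive wall, which is what makes the technique a general level-sensing method rather than a one-off.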

Because machine vision can measure any variable whose value translates into a difference in UV/VIS/IR emission or reflection, and observe up to millions of locations simultaneously, its range of applicability is limited only by the imagination of the engineers using it.

Today’s machine vision systems leverage advanced computer technology, including compact, powerful processors, advanced software algorithms, sophisticated user interfaces, and high-speed networks to acquire data, reduce it to the information needed, and deliver it to the control system on an as-needed basis. Packaged as “smart vision” products, these systems offer control engineers flexible sensing technology at a competitive cost-of-ownership level.

Also read:

Vision Sensors Error-Proof Oil Cap Assembly

Vision for Process Control

Author Information

C.G. Masi is a senior editor with Control Engineering. Contact him by email at
