High Horsepower Vision Sensors

Technology advances have made vision sensors easy to set up and use. Beyond that, suppliers differ in how they characterize these technologies. In general, vision sensors are seen as having more capability, complexity, and cost than photoelectric sensors, but less than embedded or PC-based vision systems.

By Mark T. Hoske, editor-in-chief, November 1, 2005

Additional Reading

Other relevant Control Engineering vision-related coverage includes:

  • Vision Systems

  • Sensors: Getting into Position

  • Research Machine Vision Looks Well Beyond Inspection

  • Vision systems increase performance, lower cost, get easier

AT A GLANCE
  • Vision sensor

  • More capable than photoelectrics

  • Fewer features than vision systems

  • Easier setup; lower cost

Sidebars:
Sensing elements: CMOS or CCD?
Get the picture: online vision gallery
Exclusive Application: Better vision: infrared sees the unseen

Technology advances have made vision sensors easy to set up and use. Beyond that, suppliers differ in how they characterize these technologies. In general, vision sensors are seen as having more capability, complexity, and cost than photoelectric sensors, but less than embedded or PC-based vision systems. However they’re classified, it seems clear that vision sensors should bring high-horsepower sensing benefits to many more applications than previously.

‘A vision sensor trades performance and functionality for ease-of-setup, smaller size, lower power, and lower cost,’ says Ben Dawson, director of strategic development, ipd, a Dalsa Coreco group. ‘The technology is similar, but full-blown vision systems have higher performance hardware and more general software. A major difference is the target market. Vision sensors are aimed at end-users while vision systems are generally integrated into products by an OEM.’

Machine vision systems and vision sensors are ‘very different animals,’ according to Jeff Schmitz, corporate business manager for vision sensors, Banner Engineering. ‘Vision systems are costly, custom-designed, multiple-component tools suited to extremely difficult applications that can justify their high cost… in the $15,000 to $20,000 range. Additionally, they use custom software and require a PC to process an image.’

While Cognex says its Checker sensor (cover image) isn’t a vision sensor, the device seems to share some characteristics with others in the space. John Keating, Cognex Corp.’s product marketing manager for Checker sensors, calls it ‘a high-end sensor that is a better solution for applications that could have required multiple photoelectric sensors.’ However, Checker, he says, can acquire and process 30,000 images per minute, detect presence, and inspect part features, without a photoelectric sensor. (Checker’s eye is a 128×100-pixel image sensor.) Unlike vision sensors, though, Checker can provide deterministic outputs for registration or web cutting/perforating/printing applications, Keating says. ‘Checker doesn’t have vision tools,’ but rather sensors designed to detect whether a part has the required features. Step-by-step setup is similar to that of photoelectric sensors. While vision tools and vision systems provide data, Checker provides a pass/fail result, he says.

All machine vision has common components: lighting, optics (lenses, image sensors), image capture, image processing and analysis, and communications, explains Pierantonio Boriero, product line manager, Matrox Imaging. The difference between low-end and ‘full-blown’ machine-vision systems is the level of integration and performance, not functionality, Boriero suggests. A smart camera (low-end vision system) integrates most components into one package, at a lower level of performance; a high-end vision system allows user selection of components (though image capture and image processing software are likely from the same vendor), which offers the best possible performance, he says.

Usually vision sensors have one purpose or task, such as reading a bar code or verifying color, says Kyle Voosen, vision product manager, National Instruments. ‘But operators are beginning to use some vision sensors in simple machine vision tasks, such as checking for presence or counting objects. Vision sensors are blurring the line between industrial measurement sensors and machine vision systems.’

On the high end, vision sensors can be versatile, stand-alone, armed with a number of algorithms, about 200 software toolsets, different lighting and lenses, and cost $5,000 to $20,000, says Robert Lee, strategic marketing manager, Omron Electronics LLC. With most general-purpose PC-based vision systems, the software, a frame grabber, and I/O cards are purchased separately, Lee says. Less versatile, application-specific systems can cost $5,000 and more. New-generation smart-vision sensors are easy to set up, run $1,000 to $2,000, and have a limited toolset for many general applications, he says.

Advantages, limits

Dawson, of ipd, says, ‘Vision sensors are good for a quick ‘spot’ of vision, to monitor part dimensions or check for a limited class of defects. They are designed to be quickly and easily set up by an engineer who might not be familiar with vision technology. Vision sensors currently can’t keep up with data rates of more than a few megabytes (or megapixels) per second.’ Demanding or specialized applications, such as LCD panel inspection, require a full-blown machine vision system, Dawson suggests.

Cognex’s Keating says low-end vision sensors were created by removing features or tools from a vision system. That leaves the complexity of the original without the capability. Checker applications include the ability to control part orientation for assembly or the cutting of material on a web or on a labeling or wrapping machine, Keating says, allowing product and label registration without a registration mark.

Today’s vision sensors can be comparable in cost to embedded vision systems or PC-based vision systems, NI’s Voosen says. For simple, obvious vision tasks, such as ensuring a label is present, the sensors work well, he says. More important than perceived low cost is software simplicity. Where PC-based vision systems are programmed and smart cameras are configured, vision sensors are self-learning, Voosen explains. Often a ‘learn’ button is the only input an operator has, and a single ‘pass/fail’ line is the only output. While this greatly simplifies system setup, it also reduces possible applications that vision sensors can address, Voosen adds.
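To make the ‘learn, then pass/fail’ idea concrete, the following Python sketch (an illustration only, not any vendor’s actual firmware) mimics that workflow: a learn() call stands in for the operator pressing the ‘learn’ button and stores a reference image, and inspect() reduces each new frame to a single pass/fail result. The NumPy-based similarity score, the 0.90 threshold, and the image source are illustrative assumptions.

import numpy as np

class TeachablePresenceSensor:
    """Illustrative sketch of a self-learning vision sensor's teach/inspect cycle."""
    def __init__(self, match_threshold=0.90):
        self.reference = None                 # the "taught" reference image
        self.match_threshold = match_threshold

    def learn(self, image):
        """Equivalent of the operator pressing the 'learn' button."""
        self.reference = image.astype(np.float32)

    def inspect(self, image):
        """Return the single pass/fail result for one frame."""
        if self.reference is None:
            raise RuntimeError("Sensor has not been taught a reference image")
        frame = image.astype(np.float32)
        # Simple similarity score: 1.0 means identical, 0.0 means maximally different
        diff = np.abs(frame - self.reference).mean() / 255.0
        return (1.0 - diff) >= self.match_threshold

# Usage: teach once with a known-good part, then inspect parts on the line
good_part = np.random.randint(0, 256, (100, 128), dtype=np.uint8)  # stand-in image
sensor = TeachablePresenceSensor()
sensor.learn(good_part)
print("PASS" if sensor.inspect(good_part) else "FAIL")

The single Boolean returned by inspect() corresponds to the lone ‘pass/fail’ output line described above; everything else about the inspection stays hidden inside the sensor.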

Lee, of Omron, says low-end vision sensors have become more cost effective, smaller for tighter mounting, more nimble, and now carry multiple toolsets for diverse applications. Newer smart sensors, such as the Omron ZFV, have an embedded 1.8-in. monitor that makes programming the unit simple through icon-driven menus, Lee says, with real-time, high-quality CCD imaging. ZFV can detect 10,000 parts per minute without requiring extra equipment, such as a laptop PC or complicated software. With ZFV, Lee says, it’s ‘target the camera, teach the sensor, and go.’ With built-in lighting and reduced setup-to-operation time, these smaller sensors now change the cost equation, he says. Limits on CCD size, lighting, or resolution may make an application unsolvable with simple sensors, requiring a higher-end system. Low-end vision sensors are usually preprogrammed or flashed with specific programming that cannot be easily modified, but ZFV is programmed to solve a variety of applications, Lee says.

New applications

A vision sensor does not require a PC to process the image or convey results to a PLC or other control device, Banner’s Schmitz says. ‘Because vision sensors are not designed for a specific application, they can more easily be moved from one application to another. And vision sensors are easier to set up, configure, and support, so someone on the line can do it. Granted, vision sensors cannot handle the extremely tough applications that require a vision system. But low cost, in the $1,000 to $3,000 range, allows use in applications that could never justify the cost of a vision system.’

Low-end systems are often designed to tackle sensor-like, presence/absence applications, says Joshua Jelonek, machine-vision team leader, vision-and-laser marking division, Keyence Corp. of America. With their low price, low pixel count, and limited features, ‘expectations on performance should be kept low, as these sensors were designed to run fast and cheap, with the bare minimum of product features,’ Jelonek says.

Dawson, of ipd, says vision sensors range from very low-end products hard-wired for one function, like measuring a spot of color, to flexible and capable vision sensors, such as ipd’s vision ‘Appliances.’ Using vision sensors, standard tasks (inspecting a label on a can or bottle) can take a few hours, compared to months of engineering time to develop and qualify a demanding vision task, Dawson says. ‘The vision sensor’s rapid and easy setup has opened a new application space, where vision is treated as just another standard component in a control process rather than as the driving factor in the process or machine design.’

Automotive industry

Mark Sippel, Cognex principal product marketing manager for In-Sight vision sensors, says automotive manufacturers and suppliers are a good example of an industry that is benefiting from vision, in part, due to lower cost and ease of use. ‘Product packaging applications are also benefiting from these advantages. Any industry requiring barcode marking, especially 2-D barcoding, can take advantage of this technology and its low-cost means of tracking, primarily due to vision sensors’ ability to read these complex codes.’

Checker is in many of the same industries where vision is used, Keating explains, and is expanding the market and user base. ‘New people are using Checker to solve applications in other areas of the production process.’ It is doing well in markets where vision sensors are not used in great volume due to cost and complexity, Keating says, including food and beverage industries.

Machine vision and smart cameras are ideal for many applications, says Boriero from Matrox, particularly those on the factory floor where extreme throughput is not a factor, such as reading ID marks. Manufacturing engineers with little or no programming experience are some of the newest users of machine vision, since some smart cameras on the market can be configured rather than programmed. However, since processing power is limited on smart cameras, application complexity will determine if a ‘full-blown,’ PC-based system is a better candidate.

Voosen, from NI, says vision sensors often exceed expectations when applied as advanced industrial sensors and often disappoint when thought of as low-cost vision systems. To illustrate this, consider proximity determination, presence checking, and bar-code reading, all applications that operators could implement with the right industrial measurement sensors instead of vision sensors, Voosen says. ‘However, a vision sensor offers greater reliability and performance than a simple measurement or inspection sensor. The same analogy holds true for other vision technologies, such as embedded vision systems and PC-based vision systems. Counting objects, locating parts, and taking optical measurements are applications that operators could implement with the right vision sensor instead of an embedded vision system.’

‘At the low-end market tiers,’ says Omron’s Lee, ‘vision sensors are beginning to address complicated applications, such as date-lot coding and width, area, and position measurements. Previously, customers required the flexibility of a system with a number of built-in tools to address different applications. With lower-end smart sensing systems, speed, lighting, mounting, simplicity, less downtime, digital quality for continuous improvement, track and traceability, and self-teaching are incorporated.’

End of high end?

Thanks to advances in embedded processor technology, a new, middle class of vision products has emerged in machine vision. ‘This bourgeoisie-type class of vision sensors can solve 80-90% of machine vision applications for a fraction of the cost of typical high end systems,’ says Jelonek of Keyence. The industry appears to be moving away from high-end machine vision systems, Jelonek says, for two reasons:

1) High-end systems offer a litany of complex tools and functions, but many of these tools and functions are not necessary to complete common machine vision applications. A low-end, or general-purpose, vision sensor will provide many of these same tools, but in a condensed, easy-to-use format that filters out all the unnecessary extras. And they’ll typically do this for a fraction of the cost. From an application-solving perspective, purchasing a high-end system would be like paying for a car stereo with a six-disc CD changer when all you do is listen to the radio.

2) As technology improves and the cost of powerful integrated circuit chips decreases, low-end and general-purpose vision sensors will become ever more capable. In image processing power, the line that divides low- from high-end is becoming increasingly blurred. This improvement in capability, coupled with the simple user interface provided by general-purpose systems, allows manufacturers to see higher return on investment through reduced hardware expenditures and faster integration time.

Parting vision

From a machine builder’s perspective, the concept of detecting patterns to control a machine is new, adds Keating. In packaging, this will allow manufacturers to use more of the package for graphics, without ‘quiet zones’ around large rectangular registration marks. Manufacturers seek ways to get rid of those marks, he suggests.

Overall, vision sensors have ‘made routine vision applications much easier and faster to deploy,’ ipd’s Dawson says, but ‘you still have to properly light and image whatever the sensor is looking at. This, unfortunately, requires some expert knowledge that is not easy to build into a vision sensor. As vision sensor capabilities and algorithms improve, we expect more robust performance with light variations.’

Sensing elements: CMOS or CCD?

Sensors vary in sensing elements, and vision systems are no exception. Two digital technologies used in vision sensors and systems are CMOS and CCD. Kodak’s Terry Guy, product marketing manager and segment specialist in the Image Sensor Solutions Group, talked with Control Engineering recently about each. Guy has been active in the vision industry, including as an Automated Imaging Association (AIA) board member. Kodak manufactures both CMOS and CCD sensors.

CMOS (complementary metal oxide semiconductor) is helping make smart cameras more affordable by adding functionality to the sensing element, such as timing circuits, amplifiers, and an analog-to-digital converter delivering 8-, 10-, or 12-bit digital data. While these elements are integrated, CMOS image quality has traditionally been somewhat lower, allowing more unwanted signal into the picture. Quality is getting closer to CCDs, making CMOS a more viable option for many applications. Most take in light with a ‘rolling shutter’ mechanism; in the worst case, such as imaging a vertical line on a black background, this creates more spatial distortion and requires more careful lighting. Quality is good for 2-D dot codes. Cost is an order of magnitude less than CCDs, perhaps $50 versus $500 for a 1,000 × 1,000-pixel array. Chipmakers are creating CMOS with CCD performance, challenging CCDs in the 3-5 million-pixel range by year-end 2005.

CCD (charge-coupled device) is only a sensor; timing, amplifier, and other external components are separate. CCD has a broader range of illumination and is more sensitive in low-light conditions. Sensitivity to noise sources is a little higher before external signal conditioning is added. CCDs are engineered specifically for imaging, so they deliver higher image quality with fewer artifacts and can process frames in real time. The sensing element takes in light row by row, for higher image quality. Cost, quality, and integration needs are greater.

Get the picture: online vision gallery

A picture is worth more than 1,000 words, especially with vision systems. Three links further illuminate why.

  • Ring light that surrounds the sensor lens hides raised features;

  • Low-angle light accentuates rough areas and raised edges, and hides shiny areas;

  • Dome light evenly lights the whole part;

  • Backlight shows only the outline;

  • On-axis light can reveal a fingerprint; and

  • Area light at 30 degrees can highlight a raised area and reduce glare.

Exclusive Application: Better vision: infrared sees the unseen

The application seemed simple enough. All Boston Engineering had to do was find cracks in ceramic igniter elements.

While the defects are not always visible to the human eye, an ultrasonic sensor or some other conductive sensor should be able to detect them. However, when some industrial sensors did not achieve the precision or reliability needed for the application, Boston Engineering used a FLIR A10 FireWire camera to view the elements in infrared. The results were obvious: when energized, correctly built elements heated evenly, while defective or cracked elements heated unevenly.

Boston Engineering used National Instruments LabVIEW and the NI Compact Vision System to acquire and process the infrared images. While several algorithms could have solved the problem, Boston Engineering found that edge detection was the most straightforward and reliable.

‘To implement the test, we drew many parallel lines across the image and analyzed the individual profiles for the peaks. If one of the profiles did not reach the same peak on each side, we considered it a failing part because of the uneven heating,’ says Erik Goethert of Boston Engineering. Goethert adds that the implementation allows ‘us to find the cracks in a non-contact and non-destructive manner. These cracks are invisible to the naked eye but become apparent in the IR spectrum because of the varying emissivity between the substrate and the cracks, which results in the thermal differences. Flexibility of NI LabVIEW and the NI vision software tools helped us create efficient custom analysis routines for the inspection criteria.’
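As a rough illustration of the profile test Goethert describes, the following Python/NumPy sketch (the actual implementation was built in NI LabVIEW with NI vision tools) samples parallel lines across an infrared image, finds the hottest point on each side of every profile, and fails the part if any pair of peaks differs by more than a tolerance. The line count, image size, and tolerance values are illustrative assumptions.

import numpy as np

def inspect_igniter(ir_image, num_lines=20, tolerance=10.0):
    """Return True (pass) if both sides heat evenly on every sampled profile."""
    height, width = ir_image.shape
    rows = np.linspace(0, height - 1, num_lines).astype(int)  # parallel lines
    mid = width // 2
    for r in rows:
        profile = ir_image[r, :].astype(np.float32)
        left_peak = profile[:mid].max()    # hottest point on the left side
        right_peak = profile[mid:].max()   # hottest point on the right side
        if abs(left_peak - right_peak) > tolerance:
            return False                   # uneven heating suggests a crack
    return True

# Usage with a synthetic stand-in for an acquired IR frame
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
print("PASS" if inspect_igniter(frame) else "FAIL")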

