On Track with Machine Vision

Want to increase inspection speed by a factor of 10, raise quality, lower costs, increase worker satisfaction, improve safety, and lower variability? Advanced machine vision technologies can help. Users, system integrators, and vision vendors have tracked these results. Get on track with the benefits machine vision can bring to you.
By Mark T. Hoske, Control Engineering September 1, 2007
3D cameras inspect railroads at 30 mph
Counts free-falling parts in clumps
Cutting edge robotic vision
Speed, accuracy for textile manufacturing
Quality inspection with accurate rejection
ONLINE extra

Users, system integrators, and vision vendors have tracked these results:

  • Automated 3D inspection at 30 mph, compared to 3 mph for human inspection.

  • Robotic repair of infinitely variable products.

  • Web inspection for manufacturing and structural defects at 200 m per minute—more than 10 times faster than humans.

  • Counting 450 clumped objects per second, with irregular shapes and variable sizes, at better than 99% accuracy.

  • Inspection and rejection of products of varied colors and sizes with more than 10 failure criteria, at 95-99% accuracy at 1,200 per minute. Cost was one-third less than expected; return on investment, 9 months.

Envision the following.

3D cameras inspect railroads at 30 mph
Would you rather walk the rails seeking imperfections or ride at 30 mph and allow 3D cameras to do the job at 70,000 ties per hour, hundreds of miles per day? The ability to analyze a full 9-foot-wide cross-tie has long been considered the “holy grail” of track maintenance—augmenting worker safety, comfort, and maintenance planning.
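The quoted rates can be sanity-checked with a little arithmetic. The tie spacing below is an illustrative assumption for the sketch, not a Georgetown Rail figure:

```python
# Sanity check of the quoted rates: at 30 mph, how many ties pass per hour?
# Tie spacing is an assumption here; North American mainline spacing is
# roughly 19.5-21 in. between tie centers.

MPH_TO_M_PER_H = 1609.344  # meters per mile

def ties_per_hour(speed_mph: float, tie_spacing_m: float) -> float:
    """Ties passing the cameras per hour at a given survey speed."""
    return speed_mph * MPH_TO_M_PER_H / tie_spacing_m

# Implied spacing for the article's 70,000 ties/hour at 30 mph:
implied_spacing_m = 30 * MPH_TO_M_PER_H / 70_000
print(f"implied tie spacing: {implied_spacing_m:.2f} m")   # ~0.69 m

# With an assumed 0.5 m (19.7 in.) spacing the rate is even higher:
print(f"ties/hour at 0.5 m spacing: {ties_per_hour(30, 0.5):,.0f}")
```

Either way, the machine covers in an hour what a walking inspector covers in days.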

Consistent and verifiable tie inspection has challenged railroad track and tie maintenance when inspectors walk the track and make split-second judgments on each tie. Why?

  • Human inspectors’ judgment of “good” or “bad” changes over time;
  • No two inspectors seem to be sufficiently consistent when grading the same track;
  • Each inspector can devote barely a second to the evaluation; and
  • Track conditions are many and varied.
Nagle Research of Austin, TX, found 2D inspection unsuitable due to surface contaminants common to cross-ties. Nagle Research integrated the Sick Ranger high-speed 3D vision system into a rail-equipped pickup truck to examine tie geometry without regard for color or contrast. The product is called Georgetown Rail’s Aurora 3D Track Inspection System.

Georgetown Rail says the system accurately inspects wood ties, concrete ties, spikes, anchors, and tie plates. It also measures gauge (distance between rails) and detects rail seat abrasion.

Nagle wrote a suite of custom analysis software that processes gigabytes of data and generates detailed reports on more than a dozen tie conditions, linking back to the 3D image of any tie in question. Reports, available within 48 hours of inspection, can combine variables to meet customer needs for information on bearing, curvature, heading, mile post, tie location global positioning system (GPS) coordinates, rail joint and joint tie detection, rail seat abrasion on single ties and tie clusters, rail cant, tie spacing, and tie, tie plate, and spike grading.

Aurora, in use across North America, is rapidly expanding into European and other markets.

A video demonstrates the 3D inspection system capabilities.


Counts free-falling parts in clumps
There is strong demand in industries that make ball bearings, chemical pellets, seeds, pharmaceuticals, and other products for systems that accurately measure the counts, times, and positions of objects falling at high speed and high rate. Such systems can improve manufacturing processes and quality control. Previously developed techniques have fallen short:

  • Grease belt systems do not take real-time measurements and require extensive post-measurement processing.

  • The LED/photodetector grid provides real-time, high-speed measurement, but has poor spatial resolution, limiting the capability to measure small objects (<4 mm), and is unable to resolve multiple objects forming a clump.

  • A machine vision-based system with one line scan camera demonstrated better results than the grease belt and LED/photodetector grid methods, but the one-camera design could not distinguish object clumps: multiple objects that are too close appear as one object.

John Deere now counts 450 parts per second using the V I Engineering machine vision system with National Instruments components and software: two line scan cameras and two linear backlighting units, each centered on its area of interest, mounted perpendicular to each other and in the same plane. An alignment fixture helps with adjustment.

The goal was to design and develop a system for John Deere to measure time spacing and XY position of fast-falling objects, distinguishing objects in a clump, while delivering counting accuracy higher than 99%, at 200 parts per second.

V I Engineering devised a machine vision system with IEEE 1394 line scan cameras and backlighting units that more than doubles that rate. Special image acquisition algorithms were developed that exceed the goal. Minimum detectable object size is under 1 mm, maximum is more than 25 mm, and the rate of falling objects can exceed 450 per second. National Instruments LabVIEW, the NI Vision Development Module, and NI-IMAQ for IEEE 1394 were used in system development. An NI PCI-8252 IEEE 1394 interface card plugs into a PC and connects the two line scan cameras. PCI Express technology offers a path to further improved system specifications.

The system developed exceeds original performance criteria and meets budget, to identify, match, count, and measure objects with irregular shapes. John Deere uses it to improve product design and manufacturing processes.

With the backlighting, each falling object appears as a black particle on a white background regardless of the surface condition, brightness, and color of the falling object, so the vision algorithm does not need to be adjusted to an object’s appearance. The vision algorithm matches and identifies all objects in the two camera images and separates “clumped” objects.
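The counting step that backlighting enables can be sketched as a simple threshold-and-flood-fill pass. This is an illustrative stand-in, not V I Engineering's production algorithm:

```python
# Minimal sketch of counting backlit objects: threshold the image so dark
# objects are foreground, then count connected components. The 0-255
# "image" below is a tiny synthetic stand-in for a camera scan.

def count_objects(image, threshold=128):
    """Count dark (below-threshold) connected regions, 4-connectivity."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold and not seen[r][c]:
                count += 1
                stack = [(r, c)]            # flood-fill this object
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] < threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

# Two dark particles on a white (255) background:
frame = [[255] * 6 for _ in range(4)]
frame[1][1] = frame[1][2] = 10      # particle 1
frame[3][4] = 20                    # particle 2
print(count_objects(frame))          # 2
```

Because the backlight makes every object dark, a single fixed threshold works for all part colors and finishes.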

Telecentric lenses could have eliminated object distortion, but compensation was achieved via software, saving money.


Cutting edge robotic vision
SIR, an Italian machine builder specializing in robotics, created a unique automated work cell for re-working (grinding and surface finishing) knives that uses PatMax vision tools from Cognex. Re-working knives is among tasks previously thought too complex for automation. The task requires many decision skills, since production is random and no knife is exactly like another. Knives lose their original shape over time from repeated wear, making it impossible to calculate a single motion profile.

A robot positions a knife under the vision system, which provides a real-time shape estimate based on each knife’s wear. The vision system cycle first recognizes the type of knife handle. Then the blade is scanned to calculate the points necessary to reconstruct the original shape. After the profile is acquired, a first blade analysis discriminates all anomalous points. Too many deep niches trigger an exception to the normal work cycle.

After vision system analysis, the system selects a standard profile to restore the knife’s original shape. A second analysis cycle verifies the degree of blade use to correct working parameters, such as speeds and incidence angles. The next step is to decide what point to start the work from, to avoid ruining knife handles. Even finishing points may be decided, after considering angles and the shape of the knife tip. The robotic arm moves the knife to grind each side separately, and then scours the knife to make the blade edge smoother and homogeneous. At the end, cold trimming eliminates tailings.
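The anomalous-point and deep-niche logic might look like the following hypothetical sketch; the depth and count thresholds, and the exception name, are illustrative assumptions, not SIR's implementation:

```python
# Hypothetical sketch of the blade-analysis step: compare scanned profile
# points against a reference profile, discard anomalous points, and raise
# an exception when too many deep niches are found. Thresholds are
# illustrative assumptions.

class DeepNicheError(Exception):
    """Raised when the blade is too worn for the normal work cycle."""

def analyze_profile(scanned, reference, niche_depth=2.0, max_niches=3):
    """Return profile points usable for grinding; reject deep niches."""
    usable, niches = [], 0
    for meas, ref in zip(scanned, reference):
        wear = ref - meas                 # positive wear = material lost
        if wear > niche_depth:            # anomalous point: a deep niche
            niches += 1
        else:
            usable.append(meas)
    if niches > max_niches:
        raise DeepNicheError(f"{niches} deep niches found")
    return usable

reference = [10.0] * 8                    # idealized straight edge, in mm
scanned = [9.8, 9.7, 6.0, 9.9, 9.6, 9.8, 9.7, 9.9]    # one deep niche
print(len(analyze_profile(scanned, reference)))        # 7 usable points
```

A profile with too many niches would raise the exception instead of returning, diverting the knife out of the normal grinding cycle.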

The Kuka robot works with a Cognex vision system: an MVS-8501 image acquisition card and VisionPro software with PatMax, Blob, and Caliper tools. A standard-resolution analog camera connects to the card. Lighting can alternate as needed.


Speed, accuracy for textile manufacturing
Textile manufacturing is renowned for product variability, and small defects created at one stage of manufacturing carry to the next. Textile industry products range widely, from traditional woven or knitted fabrics for clothing to fiberglass and technical textiles for automobiles and body armor. Traditionally, two inspections are performed. First, machine operators keep an eye on processes and adjust to keep production within acceptable bounds. At production speeds exceeding 150 meters per minute, human quality assurance is limited to identifying gross defects. Second, detailed off-line inspection occurs after manufacturing, by which time more manufacture-induced faults have accumulated. The process can require multiple inspectors to keep pace, and their abilities to apply defect standards vary.

Shelton Vision Systems developed its Shelton WebSpector surface inspection system. It runs at up to 200 m per minute on a fiberglass production line, checking for manufacturing and structural defects in real time with a level of accuracy and consistency impossible for human operators even at 20 m per minute. It can identify and accommodate points of variation, including product size, width, color, production speed, environment, and textile construction complexity. It uses the Dalsa Spyder series of line scan CCD (charge-coupled device) cameras, which work well at low light levels and provide a better price/performance ratio than other systems, says Shelton. The Dalsa X64-CL iPro frame grabber acquires data; Dalsa WiT 8.3 is vision software based on visual programming.


Quality inspection with accurate rejection
Mold-Rite Plastics Inc., a manufacturer of containers and closures for the pharmaceutical industry, sought to improve the quality of automated production inspection of tightly controlled pharmaceutical container closures. Different colors and sizes of caps and closures were to be inspected for more than 10 failure criteria, at 1,200 caps per minute. A qualification test procedure was designed to ensure failure capture rates in the 95% to 99% range, well beyond prior inspection methods.

Dave Cross, Mold-Rite automation manager who installed the system, said caps range from 1 in. to 4 in. in diameter, and come in many colors from white to black. Also, the cap liner ranges from white to black to foil.

Siemens worked with Cross and a few others at Mold-Rite in four visits about three months apart, doing mock ups and learning about machine vision and changes in machine designs. (About $12,000 for system integration was allocated to the project, outside of machine vision system cost.) Siemens promised 95% or better performance.

Although human eyes see 20 caps per second as a blur, a pocket wheel or conveyor belt presents each part to the camera as lights flash to “stop” the action.
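A back-of-envelope calculation shows why the strobe "stops" the action; the cap pitch and flash duration below are illustrative assumptions, not Mold-Rite figures:

```python
# Back-of-envelope check of the strobe freeze, with assumed geometry:
# caps at 1,200/min on a wheel, one cap pitch of 100 mm (an illustrative
# assumption), and an assumed 50 microsecond flash.

caps_per_s = 1200 / 60                  # 20 caps per second
pitch_m = 0.100                         # assumed center-to-center spacing
belt_speed = caps_per_s * pitch_m       # 2.0 m/s part velocity

strobe_s = 50e-6                        # assumed flash duration
blur_mm = belt_speed * strobe_s * 1000  # motion blur during the flash
print(f"belt speed {belt_speed} m/s, blur {blur_mm:.1f} mm")  # 0.1 mm blur
```

A tenth of a millimeter of blur is negligible against cap features, so each flash yields a sharp image even though the parts never stop moving.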

A custom user interface enables an operator to quickly set up the vision system to inspect different cap and liner color combinations.

A Siemens Simatic 1744 Visionscape Accelerated Frame Grabber is mounted in a fast PC connected to two CM1100 progressive scan double speed cameras, says Glenn Symonds, Siemens Energy & Automation machine vision systems marketing manager for the USA. A DF-150-3 red ring light and a BL75x75 red back light illuminate the caps. A Simatic Opto IO board provides an interface to the PLC, receiving triggers and control information and sending pass/fail and system status information, Symonds adds. Benefits include one system throughout the plant capable of working on all cap types, allowing operators and technicians to become experts. Cross says Mold-Rite “spent $7,000 to $8,000 on the mockup and early prototype design modifications. The entire cost of the system was about two-thirds of what we expected,” he says, careful not to mention the total. Return on investment fell to 9 months from a year, he says.


The extra material that follows goes beyond what appeared in the print edition, above.
-BW Rogers, PPT Vision—Sticky business: 4 cameras inspect caulk containers
-More part counting: distortion, compensation
-More lighting, software, and PCs
-Cap inspection: What worked, what didn’t
-Vision: prior articles, companies, integrators
-Camera catches a broken tie

Sticky business: 4 cameras inspect caulk containers
Ensuring packaging quality for caulk tubes is a sticky business, because of tube shape and line speed. In the application below, four cameras produce seven images. Special lighting is used to accommodate a variety of colors. All inspections are completed within 2 seconds per caulk container.

Dave Boehm of BW Rogers Co. explains how the sticky application works and the advantages of the vision technology used: Four PPT Impact T20 cameras inspect an empty caulk container before product is loaded into the tubes.

The vision systems look for four major defects:
1. Nozzle position;
2. Foil presence inside the tube;
3. Shape of the open end of the tube; and
4. The seal between the metal end of the tube and the cardboard side of the tube.

Images show some of the views captured.

Camera 1 looks at the nozzle end of the container. A 3.5-in. strobe ring light lights up the area so the metal around the nozzle appears bright. When the nozzle is missing or not positioned correctly, this area will appear dull or dark.

Camera 2 looks at the open end of the container. Two cameras with 5-in. LED (light-emitting diode) lights inspect the sides of the tube. The trigger that starts inspection on Camera 1 starts the inspection for presence or absence of foil with Camera 2. Because the nozzle is translucent, the light positioned at the nozzle end will appear as a bright spot to Camera 2, unless the foil is in place to block out the light. After foil inspection, Camera 2 is triggered again, with a second 3.5-in. ringlight positioned to light the tube edges. A circle gauge tool is used to measure the roundness of the tube opening.
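A circle gauge roundness check can be sketched as follows; this is an illustrative approximation, not PPT's actual tool:

```python
import math

# Sketch of what a "circle gauge" roundness check does: estimate the
# circle from sampled edge points, then report the worst radial
# deviation as the out-of-roundness figure.

def roundness(points):
    """Max radial deviation of edge points from their fitted circle."""
    cx = sum(x for x, _ in points) / len(points)   # centroid as center
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    return max(abs(r - mean_r) for r in radii)

# A perfect circle of radius 5 sampled every 10 degrees:
circle = [(5 * math.cos(math.radians(a)), 5 * math.sin(math.radians(a)))
          for a in range(0, 360, 10)]
print(round(roundness(circle), 6))      # 0.0 for a perfect circle

dented = circle[:]
dented[0] = (4.0, 0.0)                  # a dent shrinks one radius
print(roundness(dented) > 0.5)          # True: the dent is detected
```

Comparing the deviation against a tolerance turns this into a pass/fail decision on the tube opening.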

Cameras 3 and 4 inspect the side of the tube. They are triggered simultaneously before the part is rotated 90 degrees. After the part is rotated, a second trigger provides four images of the tube, one every 90 degrees. In moving 90 degrees the tube will travel about 4.5 in. For this reason, two 5-in. LED linear lights light the top and bottom of the tube. A polygon ROI (region of interest) is placed along tube edges to look for defects, such as cracks, dents and broken seals.

Images: Camera 1, Camera 2A, Camera 2B, and Cameras 3 & 4 views.


Part counting: distortion, compensation, resolution, accuracy
In the John Deere implementation, the two IEEE 1394 line scan cameras, each with 1,024 pixels, are from Imaging Solution Group, and the two linear lights are manufactured by Advanced Illumination. The cameras provide object resolution of better than 1 mm in a 6 in. by 6 in. measurement area. Edge diffraction around small objects requires adjusting the threshold value to measure objects 1 mm or less in diameter. For synchronization, the cameras are externally triggered using a pulse train signal from an NI PCI-6601 counter/timer card. With this single-source triggering of the scanning lines and the precise physical alignment of the two cameras, an object appears in both camera images at the same vertical position. A Dell PC acquires and processes images, runs an object classification algorithm, and displays, generates, and reports results.

The main sources of the image distortion are lens distortion at the edge of the field of view and the perspective error of the lens due to the lens’ proximity to the objects. Lens distortion causes an object to change size and shape when it is closer to the edge. The perspective error causes an object to change size when it is at a different distance from the lens. Both distortions can cause time and position measurement error and a miscount of falling objects.

The software calibration method uses a calibration target to mimic the calibration grid target often used in lens field calibration. A thin cylindrical target is placed and moved through a 15 by 15 uniformly spaced grid map. Images are acquired, and the position of the target in both cameras is measured at each grid position. After the grid is finished, the field calibration map is essentially a distorted grid image. Applying the calibration function in the NI vision library converts images to a uniform, undistorted image. All pixels are converted to real-world coordinates in millimeters. The size of an object in the image is also calibrated against its distance from the lens. After the calibration process, all objects’ coordinates and sizes are corrected in the measurement result.
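The idea reduces, along one axis, to interpolating between measured grid points. This is a simplified one-dimensional sketch; the NI calibration function performs the full 2D version:

```python
from bisect import bisect_right

# Simplified one-axis sketch of grid-map calibration: targets at known
# real-world positions are imaged, their measured pixel positions
# recorded, and later measurements are corrected by piecewise-linear
# interpolation between grid points.

def make_calibration(pixel_pos, world_pos):
    """Return a pixel -> millimeter converter from paired grid data."""
    def to_world(px):
        i = bisect_right(pixel_pos, px) - 1
        i = max(0, min(i, len(pixel_pos) - 2))       # clamp to a segment
        p0, p1 = pixel_pos[i], pixel_pos[i + 1]
        w0, w1 = world_pos[i], world_pos[i + 1]
        return w0 + (px - p0) * (w1 - w0) / (p1 - p0)
    return to_world

# Distorted camera: pixel spacing varies across the field of view.
pixels_at_targets = [0, 210, 400, 570, 720]    # measured target pixels
world_mm          = [0,  30,  60,  90, 120]    # known target positions
to_mm = make_calibration(pixels_at_targets, world_mm)
print(to_mm(400))                # 60.0 mm, exactly on a grid point
print(to_mm(305))                # 45.0 mm, interpolated between points
```

The denser the grid, the better the interpolation tracks the real lens distortion, which is why a 15 by 15 map was used.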

For clumping, by counting and cross-checking the objects between the two camera images at the same vertical locations in the synchronized images, an algorithm can identify and distinguish objects in clumps. In rare situations where multiple objects appear in both cameras as a single object, the clump has a larger size, and the approximate number of objects can be inferred from that size. This situation has a very low probability of occurring, so the approximation has proven to have little effect on counting accuracy.
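The cross-check logic can be illustrated with a short sketch; the nominal object width and the decision rules are illustrative assumptions, not the actual V I Engineering algorithm:

```python
# Sketch of clump resolution: the two perpendicular cameras see the same
# fall, so blobs are matched by vertical position. If camera A sees two
# blobs where camera B sees one, the "one" is really a clump of two; if
# both cameras see one oversized blob, the count is estimated from size.

NOMINAL_WIDTH_MM = 4.0   # assumed typical single-object width

def count_at_position(widths_a, widths_b):
    """Count objects at one matched vertical position from both cameras."""
    count = max(len(widths_a), len(widths_b))
    if count == 1:
        # Both cameras see one blob: estimate from the larger width.
        widest = max(widths_a[0], widths_b[0])
        count = max(1, round(widest / NOMINAL_WIDTH_MM))
    return count

print(count_at_position([4.1], [3.9]))        # 1: a single object
print(count_at_position([4.0, 4.2], [8.0]))   # 2: camera A splits the clump
print(count_at_position([8.1], [7.9]))        # 2: oversized in both views
```

Only the last case relies on the size approximation, which matches the article's point that it rarely arises.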

The spatial resolution of the system is determined by the camera pixel number, lens quality, lighting condition, scanning rate, and the physical dimension of the field of view. The camera has 1,024 pixels. It covers more than a 150 mm field of view. Each pixel covers about 150 micrometers, which equates to a spatial resolution of 150 micrometers. A 1 mm ball bearing was used to test the minimum detectable object size of the system. The system can easily count and measure these ball bearings. Current hardware could resolve objects as small as 0.5 mm, but that hasn’t been tested.

The time resolution of the system is determined by the line scan rate of the camera and the speed of the image processing. The camera has a maximum line rate of 10 kHz, which translates to 100 microseconds of time spacing between scan lines. By using different line scan cameras available in the market that have faster line scan rates, the time resolution of the system can be easily improved.

Because the cameras’ line scan is triggered by an external precision pulse signal, the accuracy of the object timing measurement is mainly determined by the time resolution. It is estimated to be 200 microseconds.
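Worked through in code, the stated figures (a 10 kHz line rate, externally triggered lines) give the timing directly:

```python
# The timing arithmetic above: a 10 kHz line rate gives 100 microseconds
# between scan lines, and with externally triggered lines an object's
# crossing time is simply (line index) x (line period).

LINE_RATE_HZ = 10_000
line_period_us = 1e6 / LINE_RATE_HZ
print(line_period_us)                  # 100.0 microseconds per line

def crossing_time_us(line_index: int) -> float:
    """Time of a scan line relative to acquisition start, microseconds."""
    return line_index * line_period_us

# An object first seen on line 4,500 crossed 0.45 s after start:
print(crossing_time_us(4_500) / 1e6)   # 0.45 s
```

Swapping in a faster line scan camera shrinks the line period and improves time resolution proportionally.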
System specifications can be improved by using higher-performance components, such as cameras with higher line rate, more pixels, and frame grabbers with PCI Express technology.

Lighting, software, and PCs at Shelton Vision Systems
Shelton Vision Systems has been using equipment from Dalsa and Coreco (acquired by Dalsa) since 2001 to help locate inherent variations in natural, raw products. While some modern yarns remove sources of variations, the process of spinning, dyeing, knitting, and finishing still creates finished material faults, holes, and shade differences in a finished garment.

A full-web width electronic defect map is created, which may be reviewed prior to being input to a cut plan optimization software package. The software provides the most efficient cut plan for a parent roll to be split into small rolls for shipment and end use.

WebSpector automation copes with large style and product ranges with an automatic training function that recognizes a previously unseen product and starts to train the system on the first few meters of it. A few seconds later, parameters are set automatically and stored in the database for future use, avoiding operator intervention for new product training. That function is particularly useful where companies run several thousand product variations with new products coming on line weekly.
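The self-training pattern can be sketched as learning "normal" statistics from the first stretch of material and then flagging outliers. This is an illustrative sketch, not Shelton's algorithm; the sample count and sigma band are assumptions:

```python
import statistics

# Illustrative sketch of self-training inspection: learn the "normal"
# appearance from the first few meters of a new product, then flag scan
# values that deviate too far from the learned baseline.

class AutoTrainer:
    def __init__(self, train_samples=50, sigmas=4.0):
        self.train_samples = train_samples
        self.sigmas = sigmas
        self.history = []
        self.mean = self.limit = None

    def process(self, value):
        """Return None while training, else True if value is a defect."""
        if self.mean is None:
            self.history.append(value)
            if len(self.history) >= self.train_samples:
                self.mean = statistics.fmean(self.history)
                self.limit = self.sigmas * statistics.pstdev(self.history)
            return None                          # still training
        return abs(value - self.mean) > self.limit

trainer = AutoTrainer(train_samples=5)
for v in [100, 102, 98, 101, 99]:                # first few "meters"
    trainer.process(v)
print(trainer.process(100))    # False: normal material
print(trainer.process(160))    # True: defect flagged
```

Once trained, the learned parameters can be stored per product, which is the role the article describes for the WebSpector database.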

An image of each defect is stored with identifying data to classify the defect by type in real time. This happens prior to physically marking the web in line with the defect or, in the case of component parts, rejecting a part. The latter could happen in a converting process, making paper filters, for example.

Shelton Vision Systems was at the forefront of using PCs as the processing platform for machine vision inspection systems and initially received some criticism from competitors, as early PCs could not match the power of purpose-built processors. However, with the planned development of PCs for the games industry, it was clear this situation would alter greatly. Since that decision in 1999, Shelton has seen dramatic per-annum increases in its system processing power, with little or no increase in PC cost. An additional benefit of a PC-based system is the worldwide familiarity with PC technology.

Mark Shelton, company founder and CEO, remarks that “The deciding factor for our selection of the Dalsa Spyder camera was sensitivity at low light levels. And, the WiT software was chosen for ease of use, power and portability.” Compatibility among Dalsa cameras, frame grabbers, and WiT software is important. Shelton adds, “They work well as a complete package and they can be applied to many applications. Technical support for WiT software is also very good – this is extremely important to us as our applications are cutting edge. We switched from another vendor because their technical support was poor.”

Shelton states: “Our applications push machine vision products to the limit. We use the brightest light sources available with the most sensitive line scan cameras. Our customers want us to inspect at faster speeds. To do this, we need more sensitive single CCD (non-TDI) cameras, not faster line rates. We are planning to use the new Spyder 3 CameraLink version and also the Gig-E version.”

Shelton Vision Systems has found that by sourcing components from industry specialists such as Dalsa, it is able to provide superior technical and performance advantages to customers in camera sensitivity and PC processing power, two areas deemed critical to its systems’ success.

Products inspected by WebSpector include fiberglass sheet, cellulosic and glass paper, aluminum coil, lithographic coating, and textiles. The system uses line scan cameras often in two or three planes of view, each with a different method of illumination (such as transmitted back light, diffuse top light and low angle top light) to enable full defect detection, for marking (web process) or rejection (discrete parts). The system records the full web on each plane of view at full system resolution for playback to assist in validation and initial set up. This feature also is useful for remote Internet support and training.

Camera catches a broken tie
If you’re on a train, or downwind of a freight line, there’s some comfort in knowing that Georgetown Rail’s patent-pending Aurora 3D Track Inspection System can catch rail bed defects, like this broken tie.

More about Shelton Vision Systems
The adaptable WebSpector surface inspection system is used for many applications with little alteration. It can operate in line, fitted to an existing process (such as fiberglass sheet production), or as part of a purpose-built stand-alone inspection machine (built by Shelton) for batch-to-batch high-speed inspection. An example of the latter is car headliner fabric, which is generally a foam-laminated textile in large rolls prior to being split into smaller rolls or cut into blanks.

In-line versus stand-alone selection depends on which offers the most benefit. The main benefits of the system are: avoiding shipment of defective product; avoiding adding value (cost) to an already defective product; providing throughput increases in the inspection process; and collecting accurate inspection data for Six Sigma product and process improvement. Very often the system is also used as a marketing tool by the user to reassure customers and prospects of its commitment to quality and process control.

System designs are scalable from a single light source, camera, and PC to multiple cameras, light sources, and PCs. By following a principle of using the best available building blocks for system hardware, the company is able to maintain a sustained system performance development path. Traditional competitors, who produce their own hardware, find it difficult to keep pace with the performance of Shelton’s high-quality building-block approach, the company says.

Imaging Solution Group: www.isgchips.com

Cap inspection: What worked, what didn’t
System design and setup of machine vision provide a lot of learning. Dave Cross, Mold-Rite Plastics Inc. automation manager, who installed the cap inspection system there, told Control Engineering that “three other companies’ systems offered 50% to 75% accuracy at that rate, with signals often rejecting the cap ahead, the cap behind, or all three. The original request sought 95% accuracy.”

Caps can warp if packaged while too hot, so the system sees if threads are short or incomplete, or if the cap is out of round, Cross explains. Rejects send an alert to the molding department. “Our quality assurance people were involved to set the specification,” Cross says. “Now a mechanic on the floor can set it up and keep it tuned. The screen does the counting, which is noted on a work order at end of a shift, then transferred to a database.” Live histogram views of readings from the various defect counters allow operators to visually adjust tolerances based on what appears to be “normal” from the graphs. No real knowledge of vision is required to set up the system, only knowledge of what is, and what is not, a good cap. Electronic transfer of that data may be among future upgrades.
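The histogram-driven tuning can be sketched as deriving reject limits from the band that normal readings occupy. This is an illustrative sketch, not the Visionscape interface; the percentile band and margin are assumptions:

```python
# Sketch of histogram-driven tolerance tuning: collect readings from a
# defect counter, then suggest reject limits just outside the band that
# "normal" caps occupy.

def suggest_tolerance(readings, margin=1.5):
    """Suggest (low, high) reject limits around the normal band."""
    ordered = sorted(readings)
    n = len(ordered)
    p05 = ordered[int(0.05 * (n - 1))]       # approximate 5th percentile
    p95 = ordered[int(0.95 * (n - 1))]       # approximate 95th percentile
    half_band = (p95 - p05) / 2 * margin     # widen by a safety margin
    center = (p95 + p05) / 2
    return center - half_band, center + half_band

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9]
low, high = suggest_tolerance(readings)
print(round(low, 2), round(high, 2))
```

An operator watching the histogram is doing the same thing by eye: placing the limits just outside the cluster of normal readings.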

Two cameras with wide angle optics acquire at the same instant. The first camera looks down inside the cap’s threads to check cap wall integrity, rim, and liner. The second camera looks at the cap from the side to ensure that the child resistant closure insert is seated fully in the cap. A diffused low angle ring light illuminates the inside of the cap and the child resistant insert. A back light helps find cracks in the cap.

Related reading, resources on machine vision
Also from Control Engineering, read:

Machine Vision: Now is the Time (Product Research article, products)

Machine Vision Product Research May 2007 (Survey results in Resource Center; registration required)

Machine Vision: Not Just for Metrology Anymore (robotics applications)

Measure Up with Machine Vision (tricks and tips, references, setup hints)

Buyer’s guide search on machine vision

Automation Integrator Guide contains companies with machine vision expertise.