Machine vision tops sensors in flexibility for Ford body panel selection

Inside Machines: Ford improved flexibility by switching from sensors to machine vision for body panel inspection, increasing reliability, and improving quality and information flow. The noncontact machine vision inspection system avoids significant maintenance required with sensor replacement, easily accommodates new models and design changes, and reduces overall inspection costs.

By John Lewis February 6, 2014

Many threaded copper studs are used during final assembly to attach components to automobile body panels such as wheel wells. For automotive body panel inspection and barcode reading, machine vision works better than a sensor array. Traditionally, the panels are inspected for the presence of studs by using a robot to present the part to an array of proximity switches. But the low mean time between failures of the switches, and the need to install more switches whenever there is a new model or design change, results in heavy maintenance costs. Ford has improved on the conventional method by using machine vision both to inspect for the presence of studs and to read barcodes on the body. Machine vision systems are highly reliable and can handle new models or design changes with a quick program change.

The machine vision system Ford selected for this application offers templates for interfacing with popular programmable logic controllers (PLCs) and robots, easy programming, and a compact and rugged design well suited to the production environment at an affordable cost.

47 mpg rating

Ford invested $555 million in its Flat Rock Assembly Plant to build a state-of-the-art, fully flexible body shop capable of producing multiple vehicles. Ford added 1,200 jobs at the plant tied to production of the Ford Fusion and will continue to produce the Ford Mustang there. Ford is also upgrading the plant’s paint shop with an environmentally friendly 3-Wet paint process. The next-generation Fusion offers a broad selection of fuel-efficient powertrains in the midsize car segment—two EcoBoost-powered gasoline engines, a normally aspirated four-cylinder engine, a hybrid, and a plug-in hybrid. The new Fusion Hybrid’s unprecedented 47 mpg EPA rating makes it America’s most fuel-efficient, nonrechargeable sedan. With each new major plant program, Ford is significantly increasing the flexibility of its equipment and facilities to build multiple vehicles at one location. By 2015, Ford will be able to produce 25% more derivatives per plant globally than in 2011.

As part of the drive to increase the flexibility of the Flat Rock Assembly Plant, Ford closely examined its current inspection methods. It’s critical to ensure that all studs are in place on body panels before they are attached to the vehicle body because assembling a panel with missing studs makes it necessary to interrupt the assembly process while the faulty panel is removed for repairs. The copper studs are assembled to the panels by stud welding guns that hold the studs in place and draw an arc between the stud and the body panel.


Proximity switches used in the past to inspect the studs had a relatively high failure rate because the studs on each body panel coming down the line can potentially bump the switches as part of the inspection process. Different models, variants, and design changes often use different stud layouts, so additional proximity sensors must be added for each layout. The traditional approach required considerable time from maintenance staff to replace failed proximity sensors and to add new sensors in response to design changes and new models and variants.

More flexibility, less maintenance cost

“We decided to switch to machine vision on this application to improve flexibility and reduce maintenance expenses,” said Scott Vallade, controls engineer for Ford. “We have many body panel inspection applications for the Fusion, so our goal was to find an economical solution that would address all of these applications. With the large number of applications, we were also interested in reducing implementation time by finding a tool that’s easy to program and can be customized with a standard input/output scheme that will work with all of the plant’s robots and programmable logic controllers to enable the integrators setting up each application to focus on the vision problem. We wanted an economical solution that could survive in the plant environment.”

The cameras selected are “the best match for our body panel inspection applications,” Vallade said, supporting many communication protocols. The vendor “set up a custom template that communicates with all the equipment in our plant so that our integrators can focus on programming the vision application.”

The vision systems include preconfigured drivers, ready-to-use templates, and sample code to accelerate system setup and ensure smooth communication with factory automation robots and controllers. The drivers and templates cover open standard industrial Ethernet communications protocols, such as MC Protocol, EtherNet/IP, and Profinet, for connection to a wide range of PLCs and other automation devices from Mitsubishi, Rockwell Automation, Siemens, and other manufacturers, as well as robots from ABB, Denso, Fanuc, Kawasaki, Kuka, Motoman, and Staubli.

Vision without code writing

A flexible user environment makes it possible to set up virtually any inspection application graphically without writing a line of code. Working from an image of the part, the user begins by finding the vision system on the network and is guided through triggering the vision system and setting up the scale and nonlinear calibrations. The user can select from a library of vision tools to inspect the part. The user selects the data to be sent and the protocol for communicating with a PLC, robot, or human machine interface (HMI) for data collection and archiving. In the deployment mode, tool graphics, a results table, and a filmstrip control are available for validating and troubleshooting the application. The cameras are contained in a 75 mm by 55 mm by 47 mm IP67 package designed to survive in the factory environment.
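The scale-calibration step mentioned above can be illustrated with a minimal sketch: given two reference points in the image whose real-world spacing is known, derive a millimeters-per-pixel factor and use it to convert pixel measurements. The point coordinates and distances are invented for the example; a real system would also apply the nonlinear (lens distortion) calibration the article mentions.

```python
# Hypothetical scale calibration: all coordinates and distances below
# are illustrative, not values from Ford's application.

def scale_factor(p1, p2, known_mm):
    """mm-per-pixel factor from two image points a known distance apart."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    pixels = (dx * dx + dy * dy) ** 0.5
    return known_mm / pixels

def to_mm(pixel_length, factor):
    """Convert a pixel measurement to millimeters."""
    return pixel_length * factor

# Two reference studs imaged 400 px apart, known to be 100 mm apart:
f = scale_factor((100, 120), (500, 120), 100.0)  # 0.25 mm/px
print(to_mm(320, f))                             # a 320 px span -> 80.0 mm
```

With the factor established once per camera position, every subsequent tool measurement can be reported in real-world units.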

In current applications, the camera is stationary and the robot moves the part into position for inspection. Future applications also will use the robot to move the vision system into position to inspect stationary body panels. The number and orientation of the studs determine how many cameras are required to inspect all of the studs. Current applications include the cowl and dashboard assembly and the left and right wheel housings. There are 15 to 17 studs on variants of the cowl and dash assembly and 10 to 12 studs on variants of the wheelhouse. Each of these panels is inspected with two vision systems.


The camera connects to either the robot or the PLC using the EtherNet/IP protocol. The PLC or robot tells the camera which model is being inspected and which program to use. The robot positions the part in front of the camera or cameras, and the robot or PLC sends a signal to the camera or cameras to acquire an image. The camera inspects the part and, based on the program, determines whether the part passes or fails the inspection. The camera then sends the result to the robot or PLC. If the part fails the inspection, the PLC signals an operator to replace the bad panel.
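The select-program/trigger/report handshake described above can be sketched as follows. This is a hedged model only: a real deployment would exchange these messages over an EtherNet/IP stack, and the program name, stud coordinates, and `Camera` class are all invented for illustration.

```python
# Illustrative model of the PLC/robot-to-camera handshake. The message
# exchange is simulated with plain Python calls; names are hypothetical.

class Camera:
    def __init__(self, programs):
        self.programs = programs  # model name -> inspection routine
        self.active = None

    def load_program(self, model):
        """Step 1: PLC/robot tells the camera which program to use."""
        self.active = self.programs[model]

    def trigger(self, image):
        """Step 2: acquire an image; step 3: return pass/fail."""
        return self.active(image)

def inspect_cowl(image):
    # Pass only if every expected stud location is present.
    expected = {(2, 3), (5, 1), (7, 4)}
    return expected <= image["studs_found"]

cam = Camera({"cowl": inspect_cowl})
cam.load_program("cowl")  # model selection from the PLC or robot
result = cam.trigger({"studs_found": {(2, 3), (5, 1), (7, 4)}})
print("PASS" if result else "FAIL -> signal operator to replace panel")
```

Keeping one program per model variant is what lets the same camera handle a design change with only a program edit, as the article describes.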

Integrator develops application

Each application is implemented by a vision integrator who makes the decision on the best lighting and vision tools for the application. Two approaches have been used to date. One is based on the blob tool, which recognizes an object based on its shape. The second, based on the histogram tool, compares the graphical representation of the tonal distribution of the digital image to the saved representation of a good part. The vision systems also are used to read a 2D barcode on the body. The barcode is passed to another system that checks to ensure there are no open issues with the vehicle before it is released from the body shop.
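The two approaches above can be illustrated on a toy binary image (1 = bright stud pixel). The blob pass counts connected bright regions, and the histogram pass compares a tonal distribution against a saved "good part" reference. This is a simplified stand-in for the commercial tools; the image data, bin values, and tolerance are made up for the example.

```python
# Simplified sketches of the blob and histogram approaches; not the
# vendor's actual tools. All data below is illustrative.

def count_blobs(img):
    """Count 4-connected regions of 1s (a stand-in for a blob tool)."""
    seen, blobs = set(), 0
    for r in range(len(img)):
        for c in range(len(img[0])):
            if img[r][c] and (r, c) not in seen:
                blobs += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    in_bounds = 0 <= y < len(img) and 0 <= x < len(img[0])
                    if (y, x) in seen or not in_bounds or not img[y][x]:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs

def histogram_match(hist, reference, tolerance):
    """Pass if the summed per-bin difference stays within tolerance."""
    return sum(abs(a - b) for a, b in zip(hist, reference)) <= tolerance

part = [[1, 1, 0, 0, 1],
        [1, 1, 0, 0, 1],
        [0, 0, 0, 0, 0],
        [1, 0, 0, 1, 1]]
print(count_blobs(part))  # 4 bright regions -> 4 studs found
```

The blob approach verifies a count and shape; the histogram approach is cheaper but only compares overall tonal distribution, which is why the integrator chooses between them per application.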

“The initial cost of purchasing and setting up a vision system is higher than a dozen proximity sensors,” Vallade said. “However, proximity sensors generate downstream expenses, such as the cost of replacement sensors and the labor and downtime required for maintenance. We also need to consider the extra work required to prepare for a design change for new variants as well as the changeover that may be required when switching from one variant to another. By switching to machine vision we have substantially reduced the downstream costs by installing a noncontact inspection system that will last for many years without requiring any significant maintenance. The vision systems are programmable so they can accommodate new models and design changes with a simple program change. The bottom line is that machine vision will substantially reduce our overall inspection costs.”
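The cost trade-off in the quote above can be made concrete with a simple lifecycle calculation. Every dollar figure below is invented purely for illustration; only the structure (higher up-front cost versus recurring maintenance and changeover costs) mirrors the reasoning in the article.

```python
# Hypothetical lifecycle-cost comparison; all figures are invented.

def lifecycle_cost(upfront, annual_maintenance, changeover_per_change,
                   changes, years):
    """Up-front cost plus recurring maintenance and changeover costs."""
    return (upfront + annual_maintenance * years
            + changeover_per_change * changes)

sensors = lifecycle_cost(upfront=3_000, annual_maintenance=2_500,
                         changeover_per_change=1_500, changes=4, years=5)
vision = lifecycle_cost(upfront=10_000, annual_maintenance=200,
                        changeover_per_change=100, changes=4, years=5)
print(sensors, vision)  # 21500 11400 -> vision wins despite up-front cost
```

Under these illustrative assumptions, the vision system's low downstream costs overtake the sensor array's lower purchase price well within the equipment's service life.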

– By John Lewis, market development manager, Cognex Corp. Edited by Mark T. Hoske, content manager, CFE Media, Control Engineering and Plant Engineering, mhoske@cfemedia.com

ONLINE

See related links below for machine vision and automotive manufacturing.

www.controleng.com/machinevision

www.cognex.com 

Key concepts

  • Ford improved flexibility by switching from sensors to machine vision for body panel inspection.
  • Noncontact machine vision inspection system avoids proximity sensor maintenance, reducing overall inspection costs.
  • Vision systems accommodate new models and design changes with a simple program change.

Consider this

Do you look at lifecycle costs, including upgrades and maintenance, when choosing automation technologies?


Author Bio: John Lewis, contributing editor, Association for Advancing Automation (A3).