Future of machine vision for robotics

Engineers from Ford Motor Co. and General Motors Co. discussed the technology advances needed for next-generation machine vision to help robotics and motion control. Suppliers in the room were listening, suggesting that items on this wish list of automotive robotic machine vision capabilities may appear in future products, according to a presentation at the 2015 A3 Business Forum. Faster machine vision setup and recovery, better 3D tools, and offline simulation were among the advances mentioned.

By Mark T. Hoske February 21, 2015

Machine vision systems now widely used for automotive applications still present opportunities and challenges, according to engineers at General Motors Co. (NYSE: GM) and Ford Motor Co. (NYSE: F), who shared their personal machine-vision technology development wish lists at the 2015 A3 Business Forum on Jan. 22. The A3 Business Forum was held Jan. 21-23 in Lake Buena Vista, Fla., with sessions from the A3 organizations: Robotic Industries Association (RIA), Motion Control Association (MCA), and AIA (Advancing Vision and Imaging). A3 stands for Association for Advancing Automation.

Frank Maslar, Ford Motor Co., technical specialist in the advanced manufacturing engineering group within the powertrain organization, discussed machine vision parameters and needs.

Characteristics of the automotive assembly process include less than 5 mm variation in part location in all directions, repetitive and similar parts, differences in surfaces, and parts that stay in the station for 20 to 30 sec. Lines run 24/7. Ease of recovery from vision failure is important, with the cost of downtime running thousands of dollars per minute in an automotive application. Vision is specified as part of standard processes. If machine vision is broken, the line stops. Any vision system used must perform at Six Sigma, that is, no more than 3.4 defects per million opportunities.
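To put that requirement in perspective, the back-of-the-envelope Python arithmetic below shows roughly how few vision-related escapes Six Sigma permits on a line running 24/7. The 30-sec station time is an assumption taken from the upper end of the range cited above, not a Ford or GM figure.

```python
# Illustrative arithmetic only; cycle time is an assumption drawn from the
# 20-30 sec range cited above, not an actual Ford or GM figure.

SECONDS_PER_YEAR = 365 * 24 * 3600
cycle_s = 30       # assumed station time, upper end of the 20-30 sec range
dpmo = 3.4         # Six Sigma rate: defects per million opportunities

parts_per_year = SECONDS_PER_YEAR / cycle_s
expected_defects = parts_per_year * dpmo / 1e6

print(f"Parts per year per station: {parts_per_year:,.0f}")   # ~1,051,200
print(f"Allowed vision-related escapes per year: {expected_defects:.1f}")  # ~3.6
```

In other words, a single station sees on the order of a million parts a year and is allowed only a handful of vision-related defects among them.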

Machine vision lighting: Less art, more science

Stephen Jones, General Motors Co., lead engineer for powertrain manufacturing engineering and a certified advanced-level vision professional, said that art and science are not equally useful here. Engineering a machine vision system requires science and should be treated as such, rather than as an art. Rules are needed to quantify results ahead of time.

Jones asked audience members to imagine a mechanical engineer talking about the performance characteristics of I-beams in a construction project with the same level of uncertainty that now can accompany machine vision lighting.

Machine vision application development is iterative, but iteration doesn't work in all cases. Preparation must improve so that engineers understand the variation between how components are designed and how they actually appear in production. Also, there's a significant lack of application expertise, and better tools are needed to compensate, Jones said. If a part is moist, for example, that makes a huge difference in vision performance.

Jones said that with 300 mm (12 in.) LED bar lights for machine vision, it would be useful to have documentation providing wavelengths, intensity of light, uniformity across the field of projection, and degradation across standard distances. Half the products examined recently didn't quantify lighting output. Without such specifications, there was no way to answer with any certainty whether a particular light could be used as a replacement.

"If configuration of a line changes and a camera had to move back 8 inches, what would the impact be?" Jones asked. "We don’t know without light intensity information." 

Surface variance, machine vision

With metal surfaces, reflection properties have a huge impact on vision quality and can vary widely based on the cutting tool path and depth.

Depending on the surface, when the same part is rotated 45 degrees, the image the camera captures can range from excellent to awful.

"Given the variance, how can we specify a solution?" Jones asked.

Showing two images of the same gear in a tray, one dark and one light from an increase in exposure time, Jones asked, "If quality is poor with the first, can a second photo be taken? With a tray of parts, can we stitch the corners together from multiple images? Those options are not easily configurable." With smart cameras that integrate a processor, lens, and often sub-optimal lighting, a deeper dive into the software is often needed to consistently get the best image among six, he suggested.
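One way such a "best of several exposures" step can be approached in software is sketched below in Python. The scoring metric and thresholds are illustrative assumptions, not a description of any particular smart-camera product.

```python
import numpy as np

def exposure_score(img: np.ndarray) -> float:
    """Score a grayscale uint8 image: penalize clipping, reward contrast.

    This metric is an assumption for illustration only; a production
    system would use an application-specific quality measure.
    """
    clipped = np.mean((img < 5) | (img > 250))  # fraction crushed or blown out
    contrast = img.std() / 128.0                # rough normalized contrast
    return contrast - 2.0 * clipped

def best_exposure(bracket: list[np.ndarray]) -> np.ndarray:
    """Return the best-scoring frame from an exposure bracket."""
    return max(bracket, key=exposure_score)
```

Jones's point is that this kind of logic is rarely exposed as a simple configuration option; getting it working typically means custom programming inside the vendor's software.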

With a smart camera, it's hard to accommodate future variations. When purchasing a camera, an engineer may get the right feature set for the current application, but can only guess at what future applications will require.

"We just don’t understand future variation or impacts," Jones said. 

Better machine vision resource allocation

Cycle times in the automotive industry differ from those in many other industries. Longer cycle times mean that processing resources are idle most of the time. In the future, 150 GigE cameras may be linked across multiple networks, with a programmable logic controller (PLC) triggering requests. Computing power would be distributed across a multi-core processor within one software development environment, with all tools available at every station.
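A rough Python sketch of that resource model follows. The worker count, station IDs, and placeholder inspect() routine are assumptions used only to illustrate how a shared multi-core pool could serve many mostly-idle camera stations instead of dedicating a processor to each camera.

```python
# Conceptual sketch of the shared-resource model described above: PLC
# triggers enqueue inspection jobs against one multi-core worker pool.
# Camera counts, station IDs, and the inspect() body are assumptions.

from concurrent.futures import ProcessPoolExecutor

def inspect(station_id: int) -> tuple[int, bool]:
    """Placeholder inspection: grab a frame from the station, run tools."""
    # frame = acquire_gige_frame(station_id)  # hypothetical acquisition call
    passed = True                             # stand-in for a real vision result
    return station_id, passed

if __name__ == "__main__":
    # Long cycle times leave most stations idle, so a small pool can
    # serve many cameras; triggers would arrive from the PLC over the network.
    with ProcessPoolExecutor(max_workers=8) as pool:
        plc_triggers = [3, 17, 42]            # assumed station IDs from the PLC
        for station, ok in pool.map(inspect, plc_triggers):
            print(f"station {station}: {'PASS' if ok else 'FAIL'}")
```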

3D machine vision improvements

Maslar looks forward to better 3D data visualization. At present, 2D images are used to display 3D data. Systems are needed that work in a 3D space, with data presented as a solid model and the ability to zoom in and rotate. Today's 3D vision data doesn't offer those capabilities. 3D edge tools also are needed, along with blob analysis, pattern matching, and other sophisticated vision tools for 3D environments.

At present, there's only a 3D point cloud. What's needed is texture and color in a one-for-one representation of a part in a unified workspace. It needs to be intuitively easy to use and to work with all data types.
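As one illustration of the kind of workflow Maslar describes, the Python sketch below loads and displays a colored point cloud using the open-source Open3D library; this is not a tool Maslar named, the file name is hypothetical, and whether per-point color exists at all depends on the scanner used.

```python
# A minimal sketch of the workflow described above, using the open-source
# Open3D library as one example; the file name is hypothetical and per-point
# color coverage depends entirely on the 3D scanner.

import open3d as o3d

pcd = o3d.io.read_point_cloud("gear_scan.ply")   # hypothetical 3D scan file
print(f"{len(pcd.points)} points, colors attached: {pcd.has_colors()}")

# Interactive viewer with zoom and rotate; the richer wish-list items
# (3D edge, blob, and pattern-matching tools) go well beyond this.
o3d.visualization.draw_geometries([pcd])
```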

3D vision guidance, modeling

3D robotic guidance systems have been available for 10 years, but they are still too hard to use. There should be a standard between machine vision systems and robot controllers so they plug and play, said Maslar.
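No such plug-and-play standard existed at the time, as Maslar noted. Purely as a hypothetical sketch, the Python dataclass below shows the kind of payload a vendor-neutral vision-to-robot guidance interface might carry; all field names and units are assumptions.

```python
# Hypothetical sketch only: no standard vision-to-robot interface like this
# existed at the time, per Maslar. Field names and units are assumptions.

from dataclasses import dataclass

@dataclass
class GuidanceResult:
    part_id: str      # identifier of the located part
    x_mm: float       # translation offsets in the stated coordinate frame
    y_mm: float
    z_mm: float
    rx_deg: float     # rotation offsets (roll, pitch, yaw)
    ry_deg: float
    rz_deg: float
    score: float      # match confidence, 0.0-1.0
    frame: str = "robot_base"  # coordinate frame of the offsets

offset = GuidanceResult("gear_07", 1.2, -0.4, 3.1, 0.0, 0.2, -0.1, 0.97)
```

A shared message definition like this, agreed on by vision and robot vendors, is what would let systems "plug and play" in the way Maslar describes.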

3D robot guidance should take advantage of offline programming for vision systems. Application programming for vision systems should use virtual tools. Solid models of parts should include surface texture and provide representations of what the machine image should look like, but that’s not currently being done, Maslar said.

Offline programming for vision systems should happen virtually, before anything is bolted together, Maslar said. Error recovery of robotic work cells should take advantage of the simulation environment by displaying faults graphically with embedded error-recovery hints.

"Today most interactions with robots occur through the text-based teach pendant. We should be taking advantage of the virtual tools by bringing the simulation to the factory floor. Error recovery could be executed virtually," Maslar added, to lower risk related to robot movements.

– Mark T. Hoske, content manager, Control Engineering, mhoske@cfemedia.com

ONLINE

Key concepts

  • Automotive applications heavily depend on machine vision; if machine vision fails, the line stops.
  • Machine vision advances will continue to help robotics and other motion control applications.
  • Faster machine vision setup and recovery, better 3D tools, and offline simulation are among desired capabilities.

Consider this

Since automation applications heavily rely on machine vision, where can it be more effectively applied in your applications? 

ONLINE extra

www.a3automate.org 

https://corporate.ford.com/careers/departments/manufacturing.html 

https://corporate.ford.com/careers/departments/product-development.html 

Innovation & Advanced Propulsion Technologies | GM Powertrain 



Author Bio: Mark Hoske has been Control Engineering editor/content manager since 1994 and in a leadership role since 1999, covering all major areas: control systems, networking and information systems, control equipment and energy, and system integration, everything that comprises or facilitates the control loop. He has been writing about technology since 1987, writing professionally since 1982, and has a Bachelor of Science in Journalism degree from UW-Madison.