Smart Cameras Resolve Control Issues
Image analysis for machine vision applications is essentially an exercise in data reduction. The raw data stream is a torrent. A single black and white image from a 1,000 pixel x 1,000 pixel image sensor reporting 16 gray levels contains roughly 500 kB of data. At a standard frame rate of 30 f/s, that amounts to 15 MB/s. The amount of data a control system actually needs is much smaller. A pass/fail inspection application, for example, needs exactly 1 bit!
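The figures above are easy to verify; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the data rates quoted above.
BITS_PER_PIXEL = 4           # 16 gray levels = 2**4
WIDTH = HEIGHT = 1_000       # 1,000 x 1,000 pixel sensor
FRAME_RATE = 30              # frames per second

frame_bytes = WIDTH * HEIGHT * BITS_PER_PIXEL // 8   # bytes per frame
stream_bytes_per_s = frame_bytes * FRAME_RATE        # bytes per second

print(frame_bytes)           # 500,000 bytes = roughly 500 kB
print(stream_bytes_per_s)    # 15,000,000 bytes = 15 MB/s
```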
While many application programs report considerably more information than that, the fact remains that image analysis invariably reduces the data flow by several orders of magnitude. That data reduction actually occurs in stages. The first stage typically reduces data content by one to three orders of magnitude by picking out particular frames from the data stream and manipulating them to highlight features of interest. Another order of magnitude or more comes from perceiving what the highlighted features represent.
For example, an OCR reader may put out a single byte for each character it reads. A vision system recognizing a person wandering unauthorized into a robot workcell may put out a few computer words to specify the person’s existence, location, and velocity.
Finally, one reaches the decision level, where the safety system puts out two bits: one says if there’s a problem, the other says whether the threat calls for a slowdown or a complete halt.
Smart cameras provide the opportunity to perform some or all of that data reduction right in the camera. There are two advantages to reducing data early. First, the smaller the data stream, the faster it can be reported. Second, the closer the computer is to the data source, the sooner it can get started reducing the data.
Smart cameras combine the basic components required for any vision application into one unit. These components include:
Optics, which capture light from the scene to be viewed and form it into an image.
Sensor electronics, which comprise a photoelectric array to convert the image to an electronic signal.
Frame grabber electronics, which acquire signals representing individual frames and store them in digital memory.
An image analysis computer, which extracts useful information from the digitized image(s).
All machine vision cameras include the optics and sensor electronics in one package. Smart cameras integrate the frame grabber electronics and an image analysis computer as well. This integration provides significant advantages beyond data reduction speed. Obviously, there is a space savings, since what otherwise would be packaged in three boxes fits into one.
The technology also makes the user’s job easier. The smart camera vendor has taken on the tasks of choosing compatible system elements, connecting them so they work to the best of their ability, and installing image-analysis software. Smart-camera vendors also generally provide a development environment to ease the job of writing software needed specifically for the given application: System integrators no longer need be machine-vision experts; they only need to be application experts.
Applications highlighted elsewhere in this article demonstrate a few of the ways developers have used smart cameras to create control systems that are more capable than would be possible otherwise.
C.G. Masi is a senior editor with Control Engineering.
System tells robot which way is up
A Canadian autoparts OEM contacted system integrator Spoko Integrators about an application where a metal forming operation dumped parts onto a conveyor. A robot subsequently picked the parts up and placed them in carefully aligned stacks. Because parts left the metal forming station chaotically, Spoko engineers believed that a vision system would be the best way to determine each part’s location, rotation, and orientation so that the robot would know how to handle it individually.
“The main vision challenge,” Les Konczyk, vision guided robotics specialist at Spoko Integrators, points out, “was that many of the parts differ only by the presence or absence of small surface pins or cavities. Traditional 2D cameras and lighting could identify the shape and position, but could not reliably determine which side was up.”
The integrator worked with vision-component manufacturer Sick to solve the application using the company’s IVC-3D200 smart camera. These units use laser triangulation to capture multiple profiles of the part under scrutiny and generate a 3D image. To avoid distortion artifacts, an encoder triggers the smart camera to ensure that the distance between profiles remains constant, regardless of conveyor speed.
Image-analysis software running on the camera’s built-in processor uses height information to identify features embossed into each part’s surface during the metal-forming operation, and determines the part’s orientation. A coordinate transformation utility allows the camera to simplify integration by providing position information in the robot’s own coordinate system.
After determining the part’s location, rotation, and orientation, the camera’s processor sends x and y coordinates, rotation, and orientation to the robot over Ethernet.
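A coordinate transformation of this kind can be sketched as a planar rigid transform from the camera frame to the robot frame. The function and calibration numbers below are illustrative assumptions, not Sick's actual interface:

```python
import math

def camera_to_robot(x_cam, y_cam, theta_cam, *, rot_deg, tx, ty):
    """Map a part pose (x, y, rotation) from camera coordinates into
    robot coordinates via a 2D rigid transform: rotate by rot_deg,
    then translate by (tx, ty). Calibration values are hypothetical."""
    r = math.radians(rot_deg)
    x_rob = x_cam * math.cos(r) - y_cam * math.sin(r) + tx
    y_rob = x_cam * math.sin(r) + y_cam * math.cos(r) + ty
    theta_rob = (theta_cam + rot_deg) % 360.0
    return x_rob, y_rob, theta_rob

# Example: camera frame rotated 90 degrees and offset from the robot frame.
pose = camera_to_robot(100.0, 50.0, 10.0, rot_deg=90.0, tx=500.0, ty=200.0)
```

In practice, the rotation and translation come from a one-time calibration between the camera and the robot; once known, every detected pose can be reported directly in robot coordinates.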
Inspection system combines high performance and flexibility
DWFritz Automation Inc. builds sophisticated, custom automation equipment. Capabilities include robotics; machine vision; high precision, high speed assembly; and automated inspection. A medical-technology customer challenged the firm to create a system using high-performance machine vision, a high-precision, six-axis robot, and sophisticated software that would enable high-throughput inspection of many types of medical implants and small parts, while lowering manufacturing costs. Other primary objectives were to ensure NIST measurement traceability and comply with Food and Drug Administration (FDA) regulations.
Maximum flexibility was a driving force in the concept. The system had to allow thousands of recipes to be programmed by the manufacturer, which currently has more than 600. Most importantly, the fast-growing medical company needed to gain more control over its complex manufacturing process. The system feeds measurements automatically and electronically into the company’s statistical quality management system.
Previously, there was considerable human intervention in the customer’s operation, plus extensive use of go/no-go gauges, micrometers, optical comparators, and video measuring equipment. All that measuring equipment needed to be calibrated and maintained, leading to significant recurring cost. Human inspectors had great difficulty obtaining repeatable results, especially with challenging external measurements, such as radius.
The integrator designed a custom system incorporating a Denso six-axis robot, two high-performance, high-resolution Cognex smart cameras with telecentric lenses made by Edmund Optics, custom end-effectors, and an industrial PC running Cognex’ VisionPro machine-vision software.
The robot picks parts up individually from trays containing 25-100 units and positions each before the cameras. The system then measures up to 55 dimensions on each part with micron-level repeatability. If the parts meet the customer’s stringent quality requirements, they are returned to the tray. If not, the robot places them in a reject tray for subsequent analysis.
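The pass/fail decision across dozens of measured dimensions reduces to a tolerance check on each one; a minimal sketch with made-up values (all in mm):

```python
def inspect(measured, nominal, tolerances):
    """Pass/fail check: every measured dimension must lie within its
    tolerance band around the nominal value. Returns the overall
    verdict plus the indices of any out-of-tolerance dimensions.
    All values here are illustrative, not the customer's recipes."""
    failures = [
        i for i, (m, n, t) in enumerate(zip(measured, nominal, tolerances))
        if abs(m - n) > t
    ]
    return len(failures) == 0, failures

# Three example dimensions; the second is 5 microns out on a 2-micron band.
ok, bad = inspect(
    measured=[10.0010, 4.9950, 7.5001],
    nominal=[10.0000, 5.0000, 7.5000],
    tolerances=[0.0020, 0.0020, 0.0020],
)
```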
The system has throughput of better than one part per second, on average, with repeatability better than two microns, depending on the parts’ features. Automated calibration and verification for the robot and vision system, along with NIST traceable accuracy, means the medical company can focus on optimizing its manufacturing process for higher yields. Since the system can be calibrated with the same vision tools, the customer can run multiple systems on the production floor with excellent system-to-system repeatability.
Smart camera provides clarity for glass inspection
Thorsten Gonschior, president and founder of Spectral Process, was confident he could provide a lower-cost alternative when one of his customers asked him to retrofit an existing glass-bottle inspection machine. “That original system had an optical component to locate the defects, but no processor,” he says. “Since the component was no longer available on the market, we thought about completely replacing its part of the inspection process with a scalable subsystem.”
The subsystem takes over inspecting the sealing surface at the bottle’s opening. Bottle openings are particularly critical because any defect in the sealing surface can prevent the cap from making an airtight seal. When that happens, carbonated beverages go flat, and burrs, chips, or sharp edges may injure consumers.
A number of technologies can evaluate bottle-opening quality. Some plants, for example, rely on mechanical inspection systems that directly contact the container, such as filling the bottle with compressed air and plugging it with a gauge to look for leaks. Those techniques, however, are slow, marginally reliable, and potentially can damage bottles.
While high-end camera-based inspection systems can catch defects more reliably, Gonschior explains that many glass manufacturing plants worldwide cannot afford to spend $357,000 to $735,000 (€250,000 to €500,000) for a high-end inspection system. Even though they are under pressure to maintain quality control, cost is an issue.
Manufacturers are caught between a rock and a hard place: they must maintain high quality standards and improve the production process to stay competitive, yet they cannot allow high equipment costs to make operations unprofitable.
These considerations motivated Gonschior to develop his Opening Inspector system. It uses advanced machine vision technology to check openings of hollow glass containers (like bottles) for cracks, inclusions (bubbles), and pressed artifacts.
The system can be retrofitted for a wide variety of glass inspection machinery. It consists of a Matrox Iris P-Series smart camera, a custom-designed light source, and a power supply. Using a smart camera eliminates a lot of what Gonschior calls “additional engineering”—developing the housing, computer, electrical connections, etc. Software functions built into smart cameras can also reduce software-development costs—a major component of embedded-system cost.
As the core of the system, the smart camera performs visual inspection while acquiring data from other sensors and controlling actuators. The software application uses a number of modules in the Matrox Imaging Library (MIL), in particular blob analysis, edge finding, and metrology. These functions are critical to measuring inner and outer diameters, as well as locating inclusions, cracks, and over-pressed structures.
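Diameter measurements of this kind rest on fitting a circle to detected edge points. The sketch below illustrates the idea with synthetic points and a simple centroid-based estimate; it stands in for, and does not reproduce, MIL's metrology tools:

```python
import math

def fit_circle(points):
    """Estimate center and radius from edge points sampled around a
    full circle: the centroid of the points approximates the center,
    and the mean distance to that center approximates the radius.
    (A stand-in for a metrology tool, valid for evenly sampled edges.)"""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    r = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
    return cx, cy, r

# Synthetic edge points on a bottle-opening circle: center (20, 30), radius 13 mm.
pts = [(20 + 13 * math.cos(a), 30 + 13 * math.sin(a))
       for a in (2 * math.pi * k / 360 for k in range(360))]
cx, cy, r = fit_circle(pts)
```

Comparing the fitted inner and outer radii, and the scatter of edge points around each fitted circle, is one way such a system can flag chips and over-pressed structures.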
A number of complex subsystems can be integrated into the Opening Inspector with Ethernet connections. For example, Gonschior is planning to use a 2D actuator with high resolution to control a labeling arm. “Integrating network-capable third party devices into the Matrox Iris network is easy and straightforward,” notes Gonschior.
“Glass has a bad reputation when it comes to illumination,” notes Gonschior. Indeed, both the material and its shape create illumination challenges.
Gonschior developed a custom lighting solution to resolve many of these issues. The key was a diffuse light that concentrates reflections at damaged spots. “It sounds easy, but the development was anything but,” he recalls.
Gonschior says smart-camera technology makes it possible to offer the system at a fraction of the cost of many competing glass-inspection systems.
Moreover, the scalable design makes it possible to simply add cameras to accommodate increased throughput.
New system creates flexible safety zone
Machine vision can be used to set up a sophisticated workcell-safety system with warning and shutdown areas.
Traditional mechanical and barrier methods for machine guarding are only as safe as the people using them. Machine safety equipment maker Castell used machine vision to create a sophisticated 3D imaging technology solution that trumps traditional guarding technologies for flexibility and reliability. The company says the system, called QuadCam, virtually eliminates the potential for human error and tampering.
Unauthorized human activity is one of the most common causes of machine-safety system failures.
For example, poorly designed safety systems often make maintenance and repair activities difficult or impossible. After attempting to work within poorly designed safety-system constraints, workers may disable the safety system to complete their tasks more efficiently. The safety system is therefore disabled just when it’s most needed. Resulting accidents and injuries then count as safety-system failures.
This vision-based safety system allows users to define warning and shutdown zones through a computerized user interface. When a warning zone is breached, a controller provides visual and audible warnings to workers and sends a signal to the machine-control system to slow the equipment to a “safe speed.”
Safe-speed systems constrain machine movements to speeds slow enough for humans to escape accidental collisions. Humans can react within about a quarter second to perceived danger from moving equipment. Machine structures should move slowly enough so that a human working in close proximity can see the movement, search his or her surroundings for a safe exit, and get out of the way before being struck.
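That reasoning bounds the safe speed: the structure must not cross the clearance distance before the worker can perceive it and move clear. The escape time below is an assumed figure for illustration; the article specifies only the quarter-second reaction time:

```python
def max_safe_speed(clearance_m, reaction_s=0.25, escape_s=0.75):
    """Illustrative safe-speed bound: maximum structure speed (m/s)
    such that the clearance distance is not crossed within the
    worker's reaction time (~0.25 s) plus an assumed escape time."""
    return clearance_m / (reaction_s + escape_s)

# With 0.5 m of clearance and one second to react and escape:
v = max_safe_speed(0.5)   # 0.5 m/s
```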
If the person or object (such as an automated forklift) enters a shutdown zone, the system sends a signal to halt the machinery. Warning and shutdown signals are sent within milliseconds of an intrusion, fast enough to prevent accidents and equipment damage. Once shut down, machinery can’t be restarted until the detection zone is clear of unauthorized objects and people.
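The zone logic amounts to a containment test, with the shutdown zone taking priority over the warning zone. The rectangular zones and coordinates below are illustrative assumptions; the real system lets users draw zones in its PC interface:

```python
def classify(point, warning_zone, shutdown_zone):
    """Classify a tracked target's (x, y) position against rectangular
    zones given as (xmin, ymin, xmax, ymax). Shutdown takes priority."""
    def inside(p, z):
        x, y = p
        xmin, ymin, xmax, ymax = z
        return xmin <= x <= xmax and ymin <= y <= ymax
    if inside(point, shutdown_zone):
        return "shutdown"
    if inside(point, warning_zone):
        return "warn"
    return "clear"

# A target deep inside the cell triggers a halt; one at the edge, a warning.
state = classify((2.0, 1.0),
                 warning_zone=(0, 0, 5, 4),
                 shutdown_zone=(1.5, 0.5, 3.5, 2.5))
```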
The company’s 3D detection technology triangulates images from four smart cameras to identify objects and people moving in and around the work zone, distinguishing between one, two, or more targets, with real-time tracking of all targets within the system’s field of view.
Mounted high above the machinery, the imagers can view multiple potentially dangerous areas of human and machine interaction, and are tamper-resistant.
By combining cues based on object size, shape, and movement, the detection system can distinguish between targets representing potential safety issues and unimportant background objects. Ignoring background objects avoids nuisance alarms, another major cause of safety-system failures.
The company says its system is designed for user flexibility. During initial installation, plug-and-go harnesses enable simple imager installation, wiring, and even relocation of imagers when necessary. Reconfiguring the system is simple via a Microsoft-Windows-based PC interface. Warning and shutdown zones can be quickly redrawn with the click and drag of a mouse.
When mounted 11 ft above the floor, one imager can protect a zone measuring up to 8 ft x 10 ft. Up to 10 imagers, creating 10 protection zones, can be managed by a single controller monitoring up to 800 sq ft. Smart camera technology, which makes it possible to distribute image processing work where it’s needed, makes the system feasible.