Integrator Update: Robotic controls double throughput
Bottlenecks can occur at every stage of the manufacturing process, limiting productivity and causing problems that increase production costs. Savvy production managers know that no matter what the process, and no matter what the current rate of production, there’s always a weakest link that can be improved to increase a factory’s contribution to the company’s bottom line. In this regard, the equipment that handles the infeed of raw materials is just as important as the machinery that does the processing. An advanced vision-guided robotic infeed system can double throughput compared to other methods. For example, a 3D vision system has helped a large coffee roasting plant increase its green bean infeed rate by 100%, while eliminating a safety and material waste problem that was costing the company 100,000 lb of lost beans per year.
The coffee roaster was having problems with a robot whose job was to unload pallets of 150-lb burlap bags of raw beans and place the bags one by one on a conveyor ultimately feeding the roaster. The gripper on the end of the robot arm pinched the bags awkwardly and caused them to tear, spilling coffee beans across the floor of the unloading area. The robot moved slowly, relying on “feel” and memory as it attempted to locate the next bag to move. Because the plant is charged with processing some 650,000 bags of coffee beans per year, the cost of the robot’s errors in lost beans and lost productivity was mounting, and loose beans on the floor posed a safety concern. With these factors in mind, plant managers decided to upgrade the bean bag handling system and enlisted the help of a system integrator with expertise in advanced automation systems, including smart robotic workcells guided by machine vision. The control system incorporates an advanced 3D vision system with high-end PC-based software to build a 3D model of the environment in which the infeed robot operates.
For the coffee roaster application, the system models each pallet of bean bags, one pallet at a time, using distance measurements obtained via laser triangulation. Bags are loaded 20 per pallet, in four layers of five bags each. A new computer model is constructed for every tier of bags on the pallet. The model is then run through an advanced algorithm that identifies unique features of the bags and determines the precise position and orientation of each bag in that tier. That position and orientation are then used to dispatch the robot to each bag for pickup.
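To illustrate the per-tier modeling step, here is a minimal sketch under assumed data shapes (the article does not describe the integrator's actual data structures): each laser scan pass yields (x, y, z) points, which are gridded into a height map representing the top surface of the current tier.

```python
def tier_height_map(points, cell_in=1.0, pallet_in=48.0):
    """Grid (x, y, z) scan points (in inches) into a dict mapping a grid
    cell -> maximum observed height, a simple model of the tier's top
    surface. Cell size and pallet extent are illustrative assumptions.
    """
    grid = {}
    n = int(pallet_in / cell_in)  # number of cells along each axis
    for x, y, z in points:
        i, j = int(x / cell_in), int(y / cell_in)
        if 0 <= i < n and 0 <= j < n:      # ignore points off the pallet
            key = (i, j)
            if z > grid.get(key, float("-inf")):
                grid[key] = z              # keep the highest point per cell
    return grid
```

A feature-finding algorithm would then run over such a height map to locate bag outlines, as the article describes.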
2 cameras, 2 lasers
Many laser distance measuring systems use only one laser and one video camera. This implementation employs two video cameras and two lasers, building a highly accurate visual model to optimize the robot’s motion. The method used, laser triangulation, measures distance precisely when the surface being scanned is substantially perpendicular to the cameras.
Though multiple laser triangulation systems are available on the market, few allow the shape of the triangle to be customized. Most require that the cameras be angled identically toward the subject from opposite sides of the focal point (an isosceles triangle). By studying the types of problems that infeed robots typically handle, the engineers involved learned that positioning and angling the lasers and cameras differently could provide a better model of the surface contours of the materials to be handled, and built this capability into the 3D vision system.
For the coffee bean roaster, cameras are located about 7 ft above the pallet and pointed at it from different angles (+/- 30 to 40 degrees), providing a 53-in.-wide field of view. This enables the entire top surface of the pallets, which measure a maximum of 48 in. across, to be mapped.
To meet the precision requirements of the application, Concept Systems selected an industrial video camera capable of producing 30,000 samples per second, though for this particular application only 690 samples per second are taken.
Laser triangulation, image assembly
As the laser beams scan the pallet of coffee bags, the beams appear to the cameras, which move in lockstep with the lasers, to move up and down in the cameras’ field of view as they pass over the contours of the bags’ surfaces. If a point on the line described by the beam moves upward in the camera’s field of view, relative to where the beam would be if it were scanning a flat surface, that point on the surface is closer to the camera; the opposite is true for laser points seen to move downward. The profile of the top bags on the pallet is constructed by noting where the laser line appears in the cameras’ field of view, converting each point into XYZ coordinates, and entering it into a 3D model.
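The geometry above reduces to a similar-triangles calculation. The following sketch shows one common form of it; the scale factor and viewing angle are hypothetical calibration values, not figures from the article.

```python
import math

def height_from_pixel_shift(pixel_shift, pixels_per_mm, camera_angle_deg):
    """Convert a laser line's apparent shift in the image (in pixels,
    relative to where the line would fall on a flat reference surface)
    into a surface height above that reference, in mm.

    pixel_shift      -- upward shift of the laser line in the image
    pixels_per_mm    -- image scale at the reference plane (calibrated)
    camera_angle_deg -- angle between the camera's line of sight and
                        the laser sheet (assumed known from setup)
    """
    shift_mm = pixel_shift / pixels_per_mm
    # The in-image shift equals the true height scaled by the tangent of
    # the angle between the laser sheet and the camera's line of sight.
    return shift_mm / math.tan(math.radians(camera_angle_deg))

# Example: a 12-pixel upward shift at 4 px/mm, viewed at 35 degrees
h = height_from_pixel_shift(12, 4.0, 35.0)
```

Repeating this for every point along the laser line, scan after scan, yields the XYZ point cloud that feeds the 3D model.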
Using two cameras creates a challenge for processing the video images, however. The two images, taken from different angles, must be “stitched” together to provide a continuous picture of the coffee bags and pallet. But each lens introduces some parabolic distortion at the edges, and the viewing angles differ between cameras. A large portion of the system’s software is devoted to removing these distortions and fitting the images together.
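A first-order radial polynomial is a common way to model this kind of edge distortion before stitching. The sketch below is illustrative only; the coefficients k1 and k2 are hypothetical values that would come from camera calibration, and the integrator's actual correction is not described in the article.

```python
def undistort_point(x, y, cx, cy, k1, k2=0.0):
    """Rescale a distorted image point (x, y) radially about the image
    center (cx, cy) using a simple polynomial distortion model. With
    calibrated k1/k2 this maps points toward their corrected positions.
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy               # squared radius from center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2 # grows toward the image edges
    return cx + dx * scale, cy + dy * scale
```

Once both images are corrected, each camera's points can be mapped into a shared pallet coordinate frame, which is the "stitching" the article refers to.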
Another design challenge was glare from metal objects in the field of view, which can distort the image. Glare has several causes, but here it came mostly from reflections of the laser light off pallet nails. Polarized filters placed on the cameras dulled the glare, though they also dimmed the laser light.
The lasers are 100 mW units that emit light at about 680 nanometers. These Class 3a lasers normally would not produce injury if viewed only momentarily with the unaided eye. While the beam is visible to humans, more of its energy is delivered in the near-infrared part of the spectrum; the 100 mW lasers appear only about as bright as 40 mW lasers emitting fully in the visible range.
The system integrator developed a 3D modeling program that identifies how the coffee bean bags are oriented on the pallet from their outlines. Precision is required because the bags may be oriented slightly differently on each tier of the pallet, a situation the previous robot controls handled poorly.
To determine the optimal pickup point on a bag, the software bisects the bag’s image in the 3D model in the north-south and east-west dimensions to arrive at a target point. Since the camera image is calibrated in robot coordinates, the robot can go directly to this spot and pick up the bag. As each layer of bags is removed, the 3D model is updated for the next layer on the pallet. When presented with a new pallet, or a full tier on a partial pallet, the software directs the robot arm to the highest bag first. When the cameras see no more bags, the pallet must be empty, and a PLC is signaled to eject it and bring in a new one.
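The pickup-point and sequencing logic just described can be sketched in a few lines, assuming a simple bag record (axis-aligned footprint corners in robot coordinates plus a top-surface height); the data shapes are assumptions, not the vendor's actual representation.

```python
def pickup_point(bag):
    """Target the center of the bag's footprint: bisect it north-south
    and east-west, at the bag's top-surface height z."""
    (x0, y0), (x1, y1) = bag["min"], bag["max"]
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0, bag["z"])

def next_bag(tier):
    """Dispatch to the highest bag first; an empty tier means the pallet
    is done and the PLC should be signaled to eject it."""
    if not tier:
        return None  # caller signals the PLC for a new pallet
    return max(tier, key=lambda b: b["z"])
```

After each pick, the bag is removed from the tier list; when the list empties, the scan of the next tier (or next pallet) rebuilds it.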
Get a better grip
Each processing infeed system differs in the method the robot uses to acquire and hold the raw materials being moved, and the 3D vision system can be customized to work with different robotic mechanisms.
For the coffee bean roaster application, the system integrator engaged a local engineering firm to design a new end effector (the “business end” of the robot arm). The previous system used a pair of plier-like grippers with only two points of contact on the bags; the new end effector provides 16. It picks up each bag with pneumatically operated tines that pierce the burlap and then rotate outward; the hard tines push the threads aside without tearing them. Pneumatics was selected as a highly reliable power source, keeping downtime to a minimum, and the pneumatic hoses have hard outer surfaces to resist wear and are quick to replace when necessary.
What paid for the system was:
1. Improved safety from eliminating loose beans on the floor
2. Reduced downtime associated with cleaning up spills, and
3. Reduced loss of beans.
According to Glen Lawson, project manager at the coffee company: “Higher speed bag handling is an extra bonus. At six bags per minute, twice what we were capable of handling before, we can actually unload beans faster than the plant is capable of processing the beans.”
Now that the infeed bottleneck has been removed, the coffee roaster plant managers can move on to the next area for productivity improvement.
Material infeed application technology expertise
- System integration: Concept Systems Inc., an Albany, Ore., supplier of advanced automation systems, has expertise in building smart robot workcells
- Concept Systems VisionFeed-3D advanced vision-guided robotic infeed system
- Engineering firm Trimike Creations Inc. designed a robotic end effector to grip coffee bags more effectively
- Sick Ranger Model E video camera, capable of 30,000 samples per second
- Lacey-Harmer 100 mW lasers emit 680 nanometer light; the class 3a lasers won’t injure if viewed momentarily with the unaided eye.
– Michael Gurney is on the management team at Concept Systems Inc., a system integrator and provider of advanced automation systems. Edited by Mark T. Hoske, CFE Media, Control Engineering, www.controleng.com.
More on laser triangulation http://bit.ly/kSCrHC
Need an integrator? Find one here:
Panel design video by Concept Systems www.controleng.com/videos or see link below.