Model offers robots precise pick-and-place solutions

MIT researchers developed SimPLE, which is designed to learn to pick, regrasp and place objects using the objects’ computer-aided design model.

By Anne Wilson August 20, 2024
Courtesy: John Freidah, MIT Department of Mechanical Engineering

Robotics insights

  • SimPLE enables precise pick-and-place tasks without prior experience with the specific objects, using each object’s CAD model to adapt to diverse parts, enhancing flexibility and accuracy in industry.
  • By integrating visuotactile sensing and regrasp planning, SimPLE surpasses AI methods, achieving over 90% success in placing diverse objects, showcasing practical benefits in automation.

Pick-and-place machines are a type of automated equipment used to place objects into structured, organized locations. These machines are used for a variety of applications — from electronics assembly to packaging, bin picking, and even inspection — but many current pick-and-place solutions are limited: they lack “precise generalization,” or the ability to solve many tasks without compromising on accuracy.

“In industry, you often see that [manufacturers] end up with very tailored solutions to the particular problem that they have, so a lot of engineering and not so much flexibility in terms of the solution,” said Maria Bauza Villalonga PhD ’22, a senior research scientist at Google DeepMind where she works on robotics and robotic manipulation. “SimPLE solves this problem and provides a solution to pick-and-place that is flexible and still provides the needed precision.”

A paper by MechE researchers published in the journal Science Robotics explores pick-and-place solutions with more precision. In precise pick-and-place, also known as kitting, the robot transforms an unstructured arrangement of objects into an organized arrangement. The approach, dubbed SimPLE (Simulation to Pick Localize and placE), learns to pick, regrasp and place objects using the object’s computer-aided design (CAD) model, and all without any prior experience or encounters with the specific objects.

SimPLE, an approach to object manipulation developed by Department of Mechanical Engineering researchers, aims to “reduce the burden of introducing new objects to make it so that robots can interact still precisely but more flexibly,” said doctoral student Antonia Delores Bronars SM ’22.


“The promise of SimPLE is that we can solve many different tasks with the same hardware and software using simulation to learn models that adapt to each specific task,” said Alberto Rodriguez, an MIT visiting scientist who is a former member of the MechE faculty and now associate director of manipulation research for Boston Dynamics. SimPLE was developed by members of the Manipulation and Mechanisms Lab at MIT (MCube) under Rodriguez’s direction.

“In this work we show that it is possible to achieve the levels of positional accuracy that are required for many industrial pick and place tasks without any other specialization,” Rodriguez said.

Using a dual-arm robot equipped with visuotactile sensing, the SimPLE solution employs three main components: task-aware grasping, perception by sight and touch (visuotactile perception), and regrasp planning. Real observations are matched against a set of simulated observations through supervised learning so that a distribution of likely object poses can be estimated, and placement accomplished.
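The matching step described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the function name, feature vectors, and distance-based scoring are stand-ins, not SimPLE’s actual implementation — of how a real observation might be compared against a bank of simulated observations to produce a distribution over likely object poses:

```python
import numpy as np

def pose_distribution(real_obs, sim_obs_bank, sim_poses, temperature=0.1):
    """Rank candidate poses by how well their simulated observations
    match the real observation.

    real_obs:     (d,) feature vector from the real visuotactile sensors
    sim_obs_bank: (n, d) feature vectors rendered in simulation,
                  one per candidate object pose
    sim_poses:    list of n candidate object poses
    Returns (poses ranked best-first, matching probabilities).
    """
    # Euclidean distance between the real observation and each
    # simulated observation; smaller distance = better match.
    dists = np.linalg.norm(sim_obs_bank - real_obs, axis=1)

    # Softmax over negative distances turns match quality into a
    # probability distribution over candidate poses.
    logits = -dists / temperature
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()

    order = np.argsort(-probs)      # best match first
    return [sim_poses[i] for i in order], probs[order]

# Toy usage: three candidate poses; the real observation [0, 1]
# exactly matches the simulated observation for pose_C.
bank = np.array([[1.0, 0.0], [0.1, 0.9], [0.0, 1.0]])
poses = ["pose_A", "pose_B", "pose_C"]
ranked, p = pose_distribution(np.array([0.0, 1.0]), bank, poses)
```

A downstream planner could then attempt placement with the most likely pose, or use the full distribution to decide whether a regrasp is needed before committing.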

In experiments, SimPLE picked and placed diverse objects spanning a wide range of shapes, achieving successful placements more than 90% of the time for six objects and more than 80% of the time for 11 objects.

“There’s an intuitive understanding in the robotics community that vision and touch are both useful, but [until now] there haven’t been many systematic demonstrations of how it can be useful for complex robotics tasks,” said mechanical engineering doctoral student Antonia Delores Bronars SM ’22. Bronars, who is now working with Pulkit Agrawal, assistant professor in the department of Electrical Engineering and Computer Science (EECS), is continuing her PhD work investigating the incorporation of tactile capabilities into robotic systems.

“Most work on grasping ignores the downstream tasks,” said Matt Mason, chief scientist at Berkshire Grey and professor emeritus at Carnegie Mellon University who was not involved in the work. “This paper goes beyond the desire to mimic humans, and shows from a strictly functional viewpoint the utility of combining tactile sensing, vision, with two hands.”

Ken Goldberg, the William S. Floyd Jr. Distinguished Chair in Engineering at the University of California at Berkeley, who was also not involved in the study, says the robot manipulation methodology described in the paper offers a valuable alternative to the trend toward AI and machine learning methods.

“The authors combine well-founded geometric algorithms that can reliably achieve high-precision for a specific set of object shapes and demonstrate that this combination can significantly improve performance over AI methods,” said Goldberg, who is also co-founder and chief scientist for Ambi Robotics and Jacobi Robotics. “This can be immediately useful in industry and is an excellent example of what I call ‘good old fashioned engineering’ (GOFE).”


Author Bio: Anne Wilson, Department of Mechanical Engineering, Massachusetts Institute of Technology (MIT)
