Universities driving robotics research and development

Researchers at multiple universities are developing robots and exploring their potential uses in areas such as inspection, medicine, and automotive manufacturing.

By Tanya M. Anandan, April 26, 2019

Several universities are driving robotics research and continue to attract and recruit renowned faculty to their robotics rosters. They have interdisciplinary master’s and doctoral programs in robotics. They’re spawning successful spinoffs and they embrace a comprehensive approach to robotics research and education.

Robotics is a multidisciplinary sport. The traditional areas of study (mechanical engineering, electrical engineering, and computer science) have broadened into biological systems and cognitive science. Many of the top university robotics programs are attacking robotics challenges from all angles and making fascinating discoveries along the way.

Human-robot interaction

The Robotics Institute at Carnegie Mellon University (CMU) is one of the oldest in the country and the first to offer graduate programs in robotics. The institute encompasses the main facility on CMU’s campus in Pittsburgh, Pennsylvania, the National Robotics Engineering Center (NREC) in nearby Lawrenceville, and Robot City in Hazelwood.

The Robotics Institute is under the CMU School of Computer Science. Researchers take a comprehensive approach to robotics, studying robot design and control, perception, robot learning, autonomy, and human-robot interaction (HRI).

In fact, according to Martial Hebert, the institute’s director, HRI is a central theme. “Much of the work in robotics has less to do with robots. It has to do with people,” he said. “Understanding people, predicting people, and understanding their intentions. Everything from understanding pedestrians for self-driving cars, to understanding coworkers in collaborative robot manufacturing, any application that involves interaction with people at any level.”

One of the ways CMU is trying to better understand people is by studying humans’ body language. Researchers built a life-sized geodesic dome equipped with VGA cameras, HD cameras, and depth sensors to capture tens of thousands of motion trajectories. The result is a dynamic 3-D reconstruction of people, their body poses, and their motions.

Humans speak volumes through body movements, posture, and facial expressions without talking. The CMU Panoptic Studio was built to capture these subtle nonverbal cues and create a database of our body language to help robots better relate to humans. The research is ongoing with datasets now available for full-body motions, hand gestures, and 3-D facial expressions.
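
To give a sense of the geometry involved, here is a minimal sketch of how detections of the same keypoint in several calibrated cameras can be triangulated into a single 3-D point, the basic operation behind this kind of dome capture. The camera matrices and pixel coordinates below are hypothetical placeholders, not Panoptic Studio calibration data.

```python
import numpy as np

def triangulate_point(projection_matrices, pixel_observations):
    """Recover a 3-D point from 2-D detections in several calibrated cameras.

    projection_matrices: list of 3x4 camera projection matrices P = K [R | t]
    pixel_observations:  list of (u, v) detections of the same keypoint
    Uses the direct linear transform (DLT): each view contributes two
    linear constraints on the homogeneous 3-D point X.
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, pixel_observations):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Hypothetical example: two cameras observing the same elbow keypoint.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # camera shifted 0.5 m
point = triangulate_point([P1, P2], [(0.10, 0.20), (-0.15, 0.20)])
print(point)  # approximately [0.2, 0.4, 2.0]
```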

Machine learning and robot intelligence

Hebert said machine learning is another big area for CMU. The idea is to have a robot learn from its own actions and data, and learn to get better over time. Examples include manipulators that learn how to grasp, or drones that learn how to fly better. CMU’s collaboration with Honeywell Intelligrated to develop advanced supply chain robotics and AI is designed to harness the power of machine learning to control and operate multiple robotic technologies in connected distribution centers.

“It’s a material handling application that includes sorting packages and moving packages around distribution centers at very high rates,” Hebert said. “We’re past the stage where robots only do repetitive operations. They have to be able to make decisions, they have to be able to adapt to the environment. Things are not always in the same place or where they should be. That’s where machine learning and autonomy come into play. All of it comes together in this type of application.”
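
As a toy illustration of the learn-from-experience idea Hebert describes, the sketch below keeps success statistics for a few candidate grasp strategies and gradually favors whichever has worked best while still exploring. The strategy names and simulated success rates are invented for the example; this is not CMU's or Honeywell's system.

```python
import random

class GraspLearner:
    """Toy learner: tracks success rates of a few candidate grasp strategies
    and picks the historically best one most of the time (epsilon-greedy)."""

    def __init__(self, strategies, epsilon=0.1):
        self.epsilon = epsilon
        self.attempts = {s: 0 for s in strategies}
        self.successes = {s: 0 for s in strategies}

    def choose(self):
        # Explore occasionally, otherwise exploit the best estimate so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.attempts))
        return max(self.attempts,
                   key=lambda s: self.successes[s] / (self.attempts[s] or 1))

    def record(self, strategy, succeeded):
        self.attempts[strategy] += 1
        self.successes[strategy] += int(succeeded)

# Hypothetical simulation: "top-down" grasps succeed more often on these parcels.
true_rates = {"top-down": 0.9, "side": 0.6, "pinch": 0.4}
learner = GraspLearner(list(true_rates))
for _ in range(1000):
    s = learner.choose()
    learner.record(s, random.random() < true_rates[s])
print({s: round(learner.successes[s] / max(learner.attempts[s], 1), 2)
       for s in true_rates})
```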

The project is underway at the university’s NREC facility, where CMU researchers help conceptualize and commercialize robotic technologies for industrial and government clients.

Despite CMU’s deep involvement in autonomous vehicles, Hebert said the institute focuses less on the physical aspects of robotics research than on robot intelligence. This is a recurring theme inside and outside academia: the attention is on algorithms, the software side of robotics.

Kaarta, for example, makes a mobile 3-D scanning and mapping system that puts advanced simultaneous localization and mapping (SLAM) technology to work in real time. The 3-D digital model is generated right in front of you on a handheld touchscreen interface, without the need for post-processing. At its heart are patent-pending 3-D mapping and localization algorithms, a product of CMU’s robotics lab.

“Our contribution was to take massive amounts of data from the sensors and optimize it very quickly and efficiently,” said Hebert, who credited advanced mathematics and algorithms for the feat.

The system’s compact size and customizable imaging hardware allow it to be mounted to ground or aerial vehicles, such as drones, for interior and exterior use. Right now, the company’s products are directed toward infrastructure inspectors, surveyors, engineers, architects and facilities planners. But imagine the possibilities for first responders, hazmat teams, law enforcement, and down the road, for self-driving cars.
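
For readers unfamiliar with SLAM, a rough sketch of one core ingredient follows: aligning each new scan to the previous one with point-to-point iterative closest point (ICP) and chaining the estimated transforms into a trajectory. This is a textbook 2-D simplification with synthetic data, not Kaarta's patented algorithms.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(scan, reference, iterations=20):
    """Align `scan` (Nx2 points) to `reference` (Mx2 points) with point-to-point ICP."""
    R_total, t_total = np.eye(2), np.zeros(2)
    current = scan.copy()
    for _ in range(iterations):
        # Nearest-neighbor correspondences, brute force for clarity.
        d = np.linalg.norm(current[:, None, :] - reference[None, :, :], axis=2)
        matched = reference[d.argmin(axis=1)]
        R, t = best_rigid_transform(current, matched)
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Hypothetical use: estimate the motion between two consecutive synthetic scans.
rng = np.random.default_rng(0)
prev = rng.uniform(-5, 5, size=(200, 2))
theta = 0.05  # robot turned about 3 degrees...
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
new = prev @ R_true.T + np.array([0.2, 0.0])  # ...and moved 0.2 m
R_est, t_est = icp(new, prev)
print(R_est, t_est)  # transform mapping the new scan back onto the previous one
```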

Search-and-rescue robots

The Snakebot robot undulates its way into tight spaces and sticky situations, where the environment may be inhospitable and unpredictable for people. Snakebot was on the ground for search-and-rescue efforts after a disastrous earthquake hit Mexico City.

Howie Choset, a professor of computer science and Director of the CMU Biorobotics Lab, where Snakebot was developed, said they are proud of the robot and its accomplishments. Challenges do remain, however.

“The challenges are how to move (locomotion), where to move (navigation), creating a map of the environment, and providing the inspector with good remote situational awareness,” Choset said.
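
On the locomotion side, snake robots of this kind are commonly driven by sinusoidal joint-angle patterns that send a traveling wave down the body. The sketch below shows the general form of such a serpenoid-style gait; the amplitude, frequency, and joint count are illustrative values, not parameters of CMU's Snakebot.

```python
import math

def serpenoid_joint_angles(num_joints, t, amplitude=0.6,
                           temporal_freq=1.5, spatial_freq=0.8):
    """Joint angles (radians) for a simple serpenoid-style slithering gait.

    Each joint follows the same sine wave, offset in phase along the body,
    so a traveling wave propagates from head to tail and pushes the robot
    forward. Parameter values are illustrative, not tuned for any real robot.
    """
    return [amplitude * math.sin(temporal_freq * t + spatial_freq * i)
            for i in range(num_joints)]

# Example: command a 16-joint snake at t = 0.5 s.
angles = serpenoid_joint_angles(16, 0.5)
print([round(a, 2) for a in angles])
```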

A camera on the front of the robot helps the operator see the immediate area around the robot, but this has limitations in low-light conditions and highly cramped environments. In disaster scenarios, sensors for perceiving sound and smell may be more useful in detecting signs of life.

Choset envisions snake robots destined for manufacturing applications such as inspecting tight spots inside aircraft wings, installing fasteners inside wings or boats, and painting inside car doors. He also hopes to see these robots at work in the nuclear industry.

Medical robotics

Another snake-like robot developed in the Biorobotics Lab has made significant headway in medical robotics. Unlike the snake robots used in search and rescue or industrial applications, the surgical snake is a cable-driven robot.

Choset explained the difference. “Imagine a marionette that has little wires that pull on different parts of the doll. A cable-driven robot is one where internal cables pull on the links to cause the joints to bend. The motors don’t have to be on board, so you can get away with a lighter mechanism, or in my case, use bigger motors.”

This is in contrast to the locomoting robot that crawls through pipes, where all of the motors are on board.
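
A rough way to see the marionette analogy in numbers: a cable routed across a joint at some offset radius shortens by roughly that radius times the bend angle, so a motor at the base can reel in cable to bend joints far down the chain. The function below is a simple geometric approximation with illustrative values, not a model of the actual surgical robot.

```python
def cable_length_change(joint_angles_rad, routing_radius_m=0.004):
    """Approximate change in length of a cable routed along one side of a
    serial chain of joints: each bend of angle theta shortens (or lengthens)
    the cable by roughly routing_radius * theta.

    A motor at the base reels in this length to produce the bend, which is
    why the motors do not need to ride on the robot itself. Numbers are
    illustrative only.
    """
    return sum(routing_radius_m * theta for theta in joint_angles_rad)

# Example: three joints each bent 0.3 rad toward the cable side.
delta = cable_length_change([0.3, 0.3, 0.3])
print(f"cable must be reeled in by about {delta * 1000:.1f} mm")
```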

“I think minimally invasive surgery is a great area for robotics,” Choset said. “The challenges are access, how to get to the right spots, and once you’re there, developing tools, end effectors and other mechanisms to deliver therapies and perform diagnostics. Situational awareness, or being able to really understand your surrounding environment, is the next step after that.”

The biorobotics team at CMU envisions minimally invasive no-scar surgery in the snake robot’s future. But in the meantime, the technology has already found success in transoral robotic surgery and has been licensed to Medrobotics Corporation.

Self-driving cars

The University of Michigan may be renowned for its football program, but its self-driving vehicle research put Michigan Robotics on the map – literally. The Mcity Test Facility, located 40 miles outside Detroit, is a one-of-a-kind proving ground for testing connected, autonomous vehicle technologies in simulated urban environments.

The 32-acre site on U-M’s campus in Ann Arbor has miles of roads with intersections, traffic signs and signals, sidewalks, simulated buildings, obstacles such as construction barriers, and even the occasional “dummy” to test pedestrian avoidance technology. It’s the quintessential outdoor lab for researchers envisioning a local network of connected autonomous vehicles by 2021.

“Self-driving is probably what we’re known for the most,” said Dmitry Berenson, a professor of engineering at U-M. “That’s a real strength here. We have the U-M Transportation Research Institute (UMTRI) that has been conducting self-driving work for many years, even before it was popular. We’re very close to the auto manufacturers, so we can very quickly set up meetings and integrate with them, and get feedback. Well-established relationships with Toyota and Ford are pushing self-driving technology forward.”

Berenson is director of the Autonomous Robotic Manipulation (ARM) Lab, which he founded two years ago when he joined U-M. Algorithms are still his passion.

“Michigan is doing something really important, which is pushing the boundaries on algorithms to get robots into unstructured environments in the real world,” Berenson said. “We have people working on this in terms of aerospace applications, all the way to legged locomotion, to manipulation like my group, to self-driving. There’s a huge push in self-driving technology. Some of our faculty have startups in this area.”

U-M Professor Edwin Olson cofounded May Mobility in 2017. The startup’s autonomous shuttle service is currently operating in downtown Detroit and charting new territory in other Midwestern cities. As Director of the APRIL Robotics Lab, Olson is known for his work in perception algorithms, mapping, and planning. The licensed intellectual property behind these self-driving shuttles was developed in his lab.

Another U-M faculty member, Ryan Eustice, is senior vice president of automated driving at Toyota Research Institute and has worked with SLAM technology.

“SLAM is crucial technology for self-driving cars,” Berenson said. “They don’t know where they are without it.”

Eustice is Director of the Perceptual Robotics Laboratory (PeRL), a mobile and marine robotics lab at U-M focused on algorithm development for robotic perception, navigation, and mapping. He worked on the Next Generation Vehicle (NGV) project with Ford Motor Company, the first automaker to test an autonomous vehicle at Mcity.

Robotics raises the roof

Ford has a legacy stake in Michigan Robotics. A $75 million facility currently under construction on the U-M Ann Arbor campus will be named the Ford Motor Company Robotics Building in recognition of the automaker’s $15 million gift to the engineering college. The 140,000-square-foot building will house a three-story fly zone for autonomous aerial vehicles, an outdoor obstacle course for legged robots, and a high-bay garage space for self-driving cars. Ford will also establish an on-campus research laboratory occupying the fourth floor, where the automaker’s researchers will be able to easily collaborate with the university’s faculty and provide hands-on experiences for students.

The facility will also include classrooms, offices, and lab spaces. Bringing students, faculty, and researchers together under one roof in a space dedicated to robotics will encourage fluid interaction and the exchange of ideas. The result, in effect, is a culture designed to study the problems and solutions of robotics from all angles, including mechanics, electronics, perception, control, and navigation, an approach university leadership refers to as “full spectrum autonomy.”

Toyota Research Institute has also dedicated funding to U-M research efforts. “They value our robotics and self-driving technology not because they think it will advance their interests tomorrow, but 5 or 10 years down the road,” Berenson said.

Robot manipulation and grasping

In his lab, Berenson is developing algorithms for robotic motion planning and manipulation. The research includes grasping in cluttered environments and manipulating deformable objects, such as rope or cloth, which are malleable and change shape when handled.

“We have deformable objects, we have piles of clutter, some of which we may have seen before, some we haven’t. We have to manipulate them anyway,” Berenson said. “We can’t wait for someone to perfectly model the environment, and give us all the parameters and tell us where everything is, and provide a CAD model of every object. That’s great in a factory, but it won’t work in somebody’s home.

“You will never have a perfect model of how this rope or cloth will behave. We have to be able to manipulate despite that uncertainty,” he continued. “For example, we’re able to put a placemat on a table in a particular position and avoid obstacles. We can do those types of tasks without knowing most of the parameters of the deformable object, like its stiffness or the friction values.”

Berenson believes the challenges involved in picking up deformable objects such as cables, clothing, or even muscle tissue can be overcome by representing the object and task in terms of distance constraints and formulating control and planning methods based on this representation. Enabling robots in this way could allow medical robots to perform tedious tasks in surgery or make hospital beds, and in home service, allow robots to handle clothes and prepare food.
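
As a toy version of that distance-constraint idea, the sketch below checks whether a commanded placement of two grippers would require a rope of fixed length to stretch beyond its limit. The point names, positions, and rope length are hypothetical; Berenson's actual formulation is considerably richer.

```python
import numpy as np

def violates_distance_constraints(points, constraints):
    """Check a set of pairwise maximum-distance constraints.

    points:      dict mapping a point name to its 3-D position
    constraints: list of (name_a, name_b, max_distance) tuples, e.g. the two
                 grasped ends of a rope cannot be farther apart than the
                 rope is long.
    Returns the list of violated constraints (empty means the commanded
    motion does not over-stretch the object).
    """
    violated = []
    for a, b, max_d in constraints:
        if np.linalg.norm(np.asarray(points[a]) - np.asarray(points[b])) > max_d:
            violated.append((a, b, max_d))
    return violated

# Hypothetical check: two grippers hold the ends of a 0.5 m rope.
grippers = {"left": (0.0, 0.0, 1.0), "right": (0.6, 0.0, 1.0)}
print(violates_distance_constraints(grippers, [("left", "right", 0.5)]))
# -> [('left', 'right', 0.5)]  the commanded poses would over-stretch the rope
```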

“We’re really excited about this work because we believe it will push the frontier on what robots can do with very limited information, which is essential for getting robots to work in people’s homes or in natural environments,” Berenson said.

The ARM Lab is also working on algorithms for shape completion. This is particularly advantageous when you have a cluttered environment like a pile of clothing or other objects that need to be sorted.

“If you have a laser scanner and you scan something, you only see the front part of it. You have no idea what’s behind that or how far the object extends,” Berenson said. “We’ve been working on algorithms that allow us to basically fill in the part of the object that we don’t see.”

His team is building on a large body of work already done by other researchers on deep neural networks for 3-D reconstruction. Through machine learning, the algorithm has learned to look at a partial scan of an object and infer the parts of the shape it cannot see, based on thousands of previously scanned objects. It turns out many household objects are very similar, so Berenson said they can get a good prediction for household objects.
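
A minimal sketch of the kind of model typically used for this sort of shape completion follows: a small 3-D convolutional encoder-decoder that maps a partial occupancy grid to a completed one. The architecture, grid size, and random placeholder data are assumptions for illustration, not the ARM Lab's network; PyTorch is used here only as a convenient example framework.

```python
import torch
import torch.nn as nn

class ShapeCompletionNet(nn.Module):
    """Tiny 3-D encoder-decoder: partial voxel grid in, completed grid out."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),   # 16 -> 32
        )

    def forward(self, partial):
        return self.decoder(self.encoder(partial))  # logits of voxel occupancy

# Training-step sketch: partial scans as input, full scans as supervision.
model = ShapeCompletionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

partial = torch.rand(4, 1, 32, 32, 32).round()   # placeholder partial scans
complete = torch.rand(4, 1, 32, 32, 32).round()  # placeholder ground-truth shapes
loss = loss_fn(model(partial), complete)
loss.backward()
optimizer.step()
print(float(loss))
```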

The research team is also using sophisticated robotic technology to test and verify their motion planning and manipulation algorithms. They are able to grip and manipulate everyday items of different shapes, weight, and fragility.

Tanya M. Anandan is contributing editor for the Robotic Industries Association (RIA) and Robotics Online. RIA is a not-for-profit trade association dedicated to improving the regional, national, and global competitiveness of the North American manufacturing and service sectors through robotics and related automation. This article originally appeared on the RIA website. The RIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, Control Engineering, CFE Media, cvavra@cfemedia.com.

Original content can be found at www.robotics.org.

