Researchers developed Text2Robot, a platform that uses AI to design functional robots from simple text descriptions.
Developed by engineers at Duke University, Text2Robot is a computational design platform that enables users to create robot concepts based on text descriptions of form and function. The system will be demonstrated at the IEEE International Conference on Robotics and Automation (ICRA 2025) in Atlanta this May. In 2024, it was recognized in the innovation category at the Virtual Creatures Competition at the Artificial Life conference in Copenhagen, Denmark.

Text2Robot uses AI techniques to generate physical robot designs from user-provided text descriptions. It begins with a text-to-3D generative model, which creates a 3D physical design of the robot’s body based on the user’s description. The initial body design is then adapted into a functional robot model by incorporating practical manufacturing constraints, such as the placement of electronic components and the design and positioning of joints. Evolutionary algorithms and reinforcement learning are used to optimize the robot’s structure, motion and control systems to support task execution.
For example, if a user simply types a short description such as “a frog robot that tracks my speed on command” or “an energy-efficient walking robot that looks like a dog,” Text2Robot generates a manufacturable robot design that resembles the specific request within minutes and has it walking in a simulation within an hour. In less than a day, a user can 3D-print, assemble and watch their robot come to life.
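The Duke release does not include code, but the pipeline it describes can be sketched at a high level. The Python outline below is illustrative only: the function names (generate_mesh_from_text, apply_manufacturing_constraints, simulate_walking_speed) and the joint-parameter representation are assumptions standing in for the team’s actual text-to-3D model, manufacturing-constraint step and reinforcement-learning controller.

```python
import random
from dataclasses import dataclass, field

# Illustrative sketch only: placeholder functions stand in for the actual
# text-to-3D generative model, manufacturing-constraint checks and
# reinforcement-learning controller described in the article.

@dataclass
class RobotDesign:
    prompt: str
    # Hypothetical parameterization: joint range settings (degrees) for a quadruped.
    joint_params: list = field(default_factory=lambda: [random.uniform(-30, 30) for _ in range(8)])

def generate_mesh_from_text(prompt: str) -> RobotDesign:
    """Stand-in for the text-to-3D generative model that produces the body shape."""
    return RobotDesign(prompt=prompt)

def apply_manufacturing_constraints(design: RobotDesign) -> RobotDesign:
    """Stand-in for the step that places electronics and clamps joints to printable limits."""
    design.joint_params = [max(-45.0, min(45.0, p)) for p in design.joint_params]
    return design

def simulate_walking_speed(design: RobotDesign) -> float:
    """Stand-in for physics simulation with a learned controller; returns a fitness score."""
    # Dummy fitness: rewards moderate, symmetric joint ranges (placeholder only).
    return -sum(abs(abs(p) - 20.0) for p in design.joint_params)

def mutate(design: RobotDesign) -> RobotDesign:
    """Perturb a candidate design, then re-apply the manufacturing constraints."""
    child = RobotDesign(prompt=design.prompt,
                        joint_params=[p + random.gauss(0, 3) for p in design.joint_params])
    return apply_manufacturing_constraints(child)

def evolve(prompt: str, population_size: int = 16, generations: int = 50) -> RobotDesign:
    """Simple evolutionary loop over candidate designs, keeping the fastest walkers."""
    population = [apply_manufacturing_constraints(generate_mesh_from_text(prompt))
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=simulate_walking_speed, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=simulate_walking_speed)

if __name__ == "__main__":
    best = evolve("an energy-efficient walking robot that looks like a dog")
    print(best.joint_params)
```

In the actual system, the fitness evaluation would be a physics simulation driven by a reinforcement-learning controller rather than the toy score used here, and the evolved parameters would feed back into a printable 3D model.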
“This rapid prototyping capability opens up new possibilities for robot design and manufacturing, making it accessible to anyone with a computer, a 3D printer and an idea,” said Zachary Charlick, co-first author of the paper and an undergraduate student in the Chen lab.

Text2Robot may contribute to advancements in how people design and interact with robots. Potential applications include educational tools for children and creative projects involving motion-enabled sculptures. In residential settings, the system could support the design of robots tailored for specific tasks or layouts. In outdoor or emergency scenarios, it may help create robot models suited for diverse environmental conditions.
The framework currently focuses on four-legged robots, with future research aimed at expanding its capabilities to include additional robot types and incorporating automated assembly to simplify the design and production process.

“This is just the beginning,” said Jiaxun Liu, co-first author of the paper and a second-year Ph.D. student in Chen’s laboratory. “Our goal is to empower robots to not only understand and respond to human needs through their intelligent ‘brain,’ but also adapt their physical form and functionality to best meet those needs, offering a seamless integration of intelligence and physical capability.”
Currently, the robots are limited to tasks such as walking in response to speed commands and navigating uneven surfaces. Researchers plan to add sensors and other components to enable functions such as climbing stairs and avoiding moving obstacles.
Edited by Puja Mitra, WTWH Media, for Control Engineering, from a Duke University Pratt School of Engineering news release.