Next generation: Robots that see
Autonomous robotic boom
Nearly every autonomous mobile robot requires sophisticated imaging capabilities, from obstacle avoidance to visual simultaneous localization and mapping. In the next decade, the number of vision systems used by autonomous robots is expected to eclipse the number used by fixed-base robot arms.
A growing trend is the adoption of 3D vision technology, which can help robots perceive even more about their environment. From its roots in academic research labs, 3D imaging technology has made great strides as a result of advancements in sensors, lighting, and, most importantly, embedded processing. Today, 3D vision is emerging in a variety of applications, from vision-guided robotic bin picking to high-precision metrology and mobile robotics. The latest generation of processors can handle the immense data sets and sophisticated algorithms required to extract depth information and quickly make decisions.
Robotic stereo vision
Mobile robots use depth information to measure the size and distance of obstacles for accurate path planning and obstacle avoidance. Stereo vision systems can provide a rich set of 3D information for navigation applications and perform well even in changing light conditions. Stereo vision uses two or more cameras, offset from one another, viewing the same scene. By comparing the images, the disparity between corresponding points can be calculated and converted into depth, providing accurate 3D information.
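For a rectified stereo pair, the conversion from disparity to depth reduces to Z = f · B / d, where f is the focal length in pixels, B is the baseline between the cameras, and d is the disparity in pixels. The sketch below illustrates this relationship with hypothetical camera parameters; it is not tied to any particular stereo system.

```python
# Minimal sketch of depth from disparity for a rectified stereo pair.
# The focal length, baseline, and disparity values below are hypothetical.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Convert a pixel disparity to metric depth: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# A 700 px focal length, 0.12 m baseline, and 35 px disparity
# correspond to a depth of roughly 2.4 m.
print(depth_from_disparity(35, 700, 0.12))
```

Note that depth resolution degrades with distance: because depth varies as 1/d, a one-pixel disparity error matters far more for distant objects than for nearby ones, which is one reason baseline and resolution are chosen with the robot's operating range in mind.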
While the increased performance of embedded processors has enabled algorithms for uses such as 3D vision in robotics, there remains a range of untapped applications that require additional performance. For example, in the medical industry, robotic surgery and laser control systems are becoming tightly integrated with image guidance technology. For these types of high-performance vision applications, field programmable gate arrays (FPGAs) manage the image preprocessing or use the image information as feedback in a high-speed control application.
FPGAs are well suited for highly deterministic and parallel image processing algorithms in addition to tightly synchronizing the processing results with a motion or robotic system. This technology is put to practice, for example, during laser eye surgeries where slight movements in the patient’s eyes are detected by the camera and used as feedback to auto-focus the system at a high rate. Additionally, FPGAs can support applications such as surveillance and automotive by performing high-speed feature tracking and particle analysis.
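The feedback loop described above can be illustrated with a simple sketch: locate a bright feature in each frame, measure its offset from the image center, and emit a proportional correction signal. This is purely illustrative Python, not an FPGA implementation, and the `centroid`/`correction` helpers and the gain value are assumptions for the example; on an FPGA the per-pixel accumulation would run in parallel as pixels stream in.

```python
# Illustrative sketch of vision-based feedback: track a bright spot's
# intensity-weighted centroid and steer it toward the frame center.
# All names and values here are hypothetical, for illustration only.

def centroid(frame):
    """Intensity-weighted centroid (x, y) of a 2D grayscale frame."""
    total = sx = sy = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    if total == 0:
        return None  # no signal in the frame
    return sx / total, sy / total

def correction(frame, gain=0.5):
    """Proportional correction driving the tracked spot toward center."""
    c = centroid(frame)
    if c is None:
        return 0.0, 0.0
    h, w = len(frame), len(frame[0])
    cx, cy = (w - 1) / 2, (h - 1) / 2
    return gain * (cx - c[0]), gain * (cy - c[1])

frame = [[0, 0, 0],
         [0, 0, 9],   # bright spot to the right of center
         [0, 0, 0]]
print(correction(frame))  # negative x term steers back toward center
```

Because each pixel's contribution to the sums is independent, this kind of accumulation maps naturally onto the deterministic, pipelined parallelism that makes FPGAs attractive for closing control loops at camera frame rates.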
Due to rapidly advancing technologies in processing, software, and imaging hardware, cameras are everywhere. Machines and robots in the industrial and consumer industries are becoming increasingly intelligent with the integration of vision technology. The accelerated adoption of vision capabilities into numerous devices also means that many system designers are working with image processing and embedded vision technologies for the first time, which can be a daunting task.
Valuable resources are available to system designers and others simply interested in vision technology. The Embedded Vision Alliance (EVA), a partnership of leading technology suppliers with expertise in embedded vision technology, is one such resource. The EVA empowers system designers to incorporate embedded vision technology into their designs through a collection of complimentary resources, including technical articles, discussion forums, online seminars, and face-to-face events.
- Carlton Heard is National Instruments vision hardware and software product manager. Edited by Mark T. Hoske, content manager, CFE Media, Control Engineering, firstname.lastname@example.org.