The future of embedded vision in manufacturing
Embedded vision serves many different functions, and in manufacturing its future could take several different paths.
Although embedded vision systems only recently became practical through the miniaturization of cameras and processors, compact vision and imaging holds enormous commercial potential.
From the automotive sector to robotics and consumer electronics, embedded vision is already being applied across a wide range of functions. As a result, the technology could mature along several different paths, but a few directions are clearly expected to take shape.
Embedded vision in 3-D perception and deep learning
The next frontier for embedded vision is the combination of 3-D perception and deep learning. These two technologies, both made practical by embedded vision hardware, are only beginning to converge, but together they will one day support vision applications that are currently out of reach.
For embedded vision to drive the growth of 3-D perception and deep learning, processors will need to keep gaining computing power without sacrificing energy efficiency or raising costs. At the same time, the productivity of deep learning software development must improve, and deep learning capabilities must continue to advance, before these techniques can be deployed effectively in computer vision applications.
3-D perception and deep learning applications
3-D vision and deep learning already exist, but they have not yet reached their full potential. As they mature, applications such as object detection, classification and semantic segmentation, each with a number of commercial uses, will become practical.
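To give a rough sense of what such an application looks like on an embedded device, the short Python sketch below runs a pretrained image classifier on a single camera frame using ONNX Runtime, one of several possible inference engines. The model file name, input size and random test frame are placeholders, and a real deployment would add camera capture, preprocessing and application logic.

import numpy as np
import onnxruntime as ort  # one possible lightweight inference runtime

# Load a pretrained classification model; "mobilenet_v2.onnx" is a placeholder file name.
session = ort.InferenceSession("mobilenet_v2.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# A single camera frame, assumed already resized and normalized to 1x3x224x224 float32.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference and take the highest-scoring class for downstream logic.
scores = session.run(None, {input_name: frame})[0]
predicted_class = int(np.argmax(scores))
print("predicted class index:", predicted_class)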
Embedded vision, using 3-D perception and deep learning, will lead to highly flexible and intelligent robots in factory settings. These robots could instantly detect new parts and learn how to handle them, theoretically allowing them to perform entirely new tasks with little to no instruction.
These technologies also contribute to visual simultaneous localization and mapping (SLAM) capabilities, which are an essential part of autonomous operations in cars, trucks, robots, drones and more.
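As a concrete illustration of the kind of building block visual SLAM relies on, the Python sketch below uses OpenCV to estimate camera motion between two consecutive frames. It covers only the visual-odometry front end, not mapping, loop closure or scale recovery, and the camera intrinsics shown are placeholder values rather than a real calibration.

import cv2
import numpy as np

# Placeholder camera intrinsics; a real system uses calibrated values.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(prev_gray, curr_gray):
    """Estimate camera rotation and (unit-scale) translation between two grayscale frames."""
    # Detect and describe features in both frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Match descriptors between the frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover the relative camera motion from the matched points.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

In practice, consecutive frames would come from a camera stream (for example via cv2.VideoCapture), and a full SLAM system would chain these pose estimates together while building and correcting a map.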
While embedded vision is still relatively new, the future is bright. Deep learning and 3-D perception will be important technological advances, but they are far from the only ways embedded vision will transform entire industries.
This article originally appeared on the AIA website. The AIA is a part of the Association for Advancing Automation (A3). A3 is a CFE Media content partner. Edited by Chris Vavra, production editor, Control Engineering, cvavra@cfemedia.com.
Original content can be found at www.visiononline.org.