Machine vision applications integrating with the cloud

Advances in cloud computing have improved quality of service (QoS) for machine vision to the point where it is viable for industrial applications.

By Winn Hardin, AIA March 16, 2018

Machine vision has always been in the business of big data, acquiring and processing countless gigabytes of images and then extracting information to make a decision about a given object or task. Gigabytes of data per minute quickly turn into terabytes, or even petabytes, in applications like remote sensing and web inspection.

As the dataflows increase in size, they also multiply in number, prompting many industries to look for off-site computing and storage solutions. Enter the cloud. But can cloud computing respond fast enough for machine vision applications? Is the quality of service (QoS) sufficient? As the reach of machine vision extends beyond the factory floor, the answer, increasingly, is yes, even for industrial applications.

Life in the cloud

"Without having some form of cloud or Internet of Things (IoT) integration, it becomes challenging to collect and manage large amounts of image data in a very disciplined manner," said Darcy Bachert, founder and CEO of Prolucid Technologies. "In the last five years, there have been significant advancements in core cloud technologies that can make all of this happen."

Bachert cited companies like Google, Microsoft, and Amazon Web Services that have developed technologies for massive scale with storage and analytics, all while keeping the information secure. "One of IBM’s protocols, MQTT, is specifically designed to interface with low-power distributed devices in order to implement QoS and assure that any type of data transmission is guaranteed," he said.
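MQTT originated at IBM and is now an OASIS standard; its three QoS levels (0 "at most once," 1 "at least once," 2 "exactly once") are what Bachert is referring to. The following is a toy, standard-library-only simulation of what QoS 1 delivery means over an unreliable link; it is not a real MQTT client, and the class and function names are invented for illustration.

```python
import random

# Toy model of MQTT QoS 1 ("at least once") delivery over a lossy link.
# Real MQTT clients (e.g., Eclipse Paho) handle this via PUBACK packets;
# everything here is simulated.

class LossyLink:
    """Simulates a network that drops a fraction of messages."""
    def __init__(self, drop_rate, seed=42):
        self.rng = random.Random(seed)
        self.drop_rate = drop_rate

    def send(self, message, inbox):
        if self.rng.random() >= self.drop_rate:
            inbox.append(message)
            return True   # delivery acknowledged (PUBACK in real MQTT)
        return False      # message lost, no acknowledgment

def publish(link, inbox, payload, qos, max_retries=10):
    """QoS 0: fire and forget. QoS 1: retransmit until acknowledged."""
    if qos == 0:
        link.send(payload, inbox)
        return
    for _ in range(max_retries):   # QoS 1: retry until acked
        if link.send(payload, inbox):
            return
    raise RuntimeError("delivery failed after retries")
```

Note the trade-off the simulation makes visible: QoS 1 guarantees delivery but can produce duplicates if an acknowledgment is lost, which is why real MQTT adds a second handshake at QoS 2 for exactly-once semantics.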

Beyond huge storage and computational power, public cloud providers are also offering machine learning and deep learning services. One example is TensorFlow, a framework for deep learning research and application development commonly used in machine vision. Deep learning is showing potential in everything from advancing disease detection to handling greater product variation on the production line.

These open source tools, along with advancements in imaging and image-based models, mean "rather than having to hire a team of PhDs and data scientists, you can now train these models against emerging data sets in a much simpler, easier way to drive value," Bachert said.

Bachert estimated half of the machine vision projects his company develops have a cloud component. Among the biggest implementers of vision and imaging in the cloud is the medical device industry. Prolucid is working with a customer using an ultrasound-based imaging device to acquire images and provide classified values such as demographic factors and general location.

"This provides researchers with enough information to give context to the ultrasonic images so they can be used for clinical research in diagnosis or biopsies," Bachert said.

To protect a patient’s privacy when collecting and analyzing data from medical imaging devices, Prolucid employs several security strategies. One procedure is to "de-identify" or eliminate personal information such as first and last name, date of birth, and postal code.
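A minimal sketch of the field-level de-identification Prolucid describes, dropping direct identifiers from an image-metadata record before it leaves the device. The field names are hypothetical; production pipelines follow formal rules such as the HIPAA Safe Harbor list and also scrub identifiers embedded in image headers (e.g., DICOM tags).

```python
# Hypothetical de-identification sketch: remove direct identifiers
# (name, date of birth, postal code) while keeping clinically useful
# context fields such as an age band and general location.

DIRECT_IDENTIFIERS = {"first_name", "last_name", "date_of_birth", "postal_code"}

def deidentify(record):
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "first_name": "Ada", "last_name": "Lovelace",
    "date_of_birth": "1990-01-01", "postal_code": "M5V 2T6",
    "scan_id": "US-0042", "age_band": "25-34", "region": "Ontario",
}
clean = deidentify(record)
```

Only the generalized fields (`scan_id`, `age_band`, `region`) survive, which matches the article's point: enough context for research, no personal identity.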

Also, Prolucid has a policy to secure data in transit and at rest, detect data breaches at the data center and the device level, alert the customer, apply a fix, identify other vulnerabilities, and recover data in the event of a catastrophic breach.

From the cloud to the ground

In a manufacturing environment, machine vision in the cloud generates some concern over Internet bandwidth and latency issues, which can slow down inspection processes and result in data loss and possibly safety issues for equipment and workers. "With a machine learning application, you still have the real-time inspection process," Bachert said. "What changes is how you tackle it."

For example, in a defect classification application, the manufacturer would use the cloud to collect a classified validation data set and develop a machine learning model. The model is then taken down from the cloud and deployed into a real-time process at the edge of the network, near the source of the data, i.e., at the manufacturing location.

"Because of this, we don’t have to worry about latency," Bachert said. "In every system we design, the real-time process component needs to be able to operate with and without the cloud being connected."
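The cloud-train, edge-deploy pattern described above can be sketched as follows. The "model" here is deliberately trivial, just a defect-score threshold serialized to JSON; the point is the structure (training offline in the cloud, inference running locally with no network dependency), not Prolucid's actual stack.

```python
import json
import statistics

# Hypothetical sketch of cloud training and edge deployment. The model
# is a single defect-score threshold fit "in the cloud," exported as
# JSON, and evaluated at the edge even if the cloud link is down.

def train_in_cloud(good_scores, bad_scores):
    """Fit a threshold halfway between the two class means."""
    threshold = (statistics.mean(good_scores) + statistics.mean(bad_scores)) / 2
    return json.dumps({"version": 1, "threshold": threshold})

def classify_at_edge(model_blob, score):
    """Runs locally in the real-time inspection loop; no network needed."""
    model = json.loads(model_blob)
    return "defect" if score > model["threshold"] else "ok"

blob = train_in_cloud(good_scores=[0.1, 0.2, 0.15], bad_scores=[0.8, 0.9, 0.85])
```

Because the edge side only reads a static artifact, the inspection loop keeps running during an outage, and a retrained model is deployed by shipping a new blob.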

This hybrid approach of cloud and edge computing represents potential growth for machine vision integrators. "In the next 12 to 18 months, we expect that some of our clients will be doing analysis in the cloud, where the cycle times are higher and they don’t need an instantaneous response, just in case there’s any lag in the network," said Cyth Systems CEO Andy Long.

Long attributes the increased interest among manufacturers to the successful use of cloud computing in other areas. "Ten years ago, no one would have foreseen a self-driving car, but now there is a huge awareness about the data that is gathered and processed in the cloud to operate these vehicles," he said. "We have conversations with clients who say, ‘We don’t know what we want to do yet, but our executive team is telling us that we have to find a way to invest in this disruptive technology.’"

The deep reach of IoT

As manufacturers look to automate more of their processes, cloud-based machine vision systems will play a big role in production. "We are doing a lot of assembly verification projects for our customers, where the goal is to provide a system that doesn’t require any programming and instead uses AI and a cloud-based processor to do all the work," Long said. "The people who used to manually inspect parts are now responsible for training the system how to identify a good part or a bad part. It doesn’t require any machine vision knowledge per se."

Using the cloud to simplify machine vision implementation also gives manufacturers unprecedented freedom to take the technology for a test drive. "The speed of analysis on the front end is quicker than all of the programming you would have to do in a traditional machine vision system," Long said. "You can now experiment a lot more quickly and frequently to decide whether the technology solves a certain problem."

Even if manufacturers are reluctant to analyze their imaging data over the Internet, they’re using the cloud in other ways – most notably in remote monitoring. Omron Microscan Systems, Inc., offers an interface called CloudLink, which allows users to monitor live machine vision inspections via the Web. Meanwhile, with its Machine Vision Cloud product, ImpactVision Technologies can remotely monitor a customer’s vision system performance, change inspection criteria, and perform maintenance.

The Triniti light controller from Gardasoft Vision is another example of how IoT is reaching not only into every corner of the factory but also into every corner of the machine vision system itself. The web-enabled Triniti lighting controller provides intelligence and precision control of lighting systems and operations, including fixed and variable data about the lighting properties, model information, usage information, and optical and electrical characteristics. GenICam and GigE Vision standards compliance allows easy integration with other system components, as well as facilitating download of part number recipes from factory host computers.

"Parameters like highest operating temperature, duty cycle, and hours of usage become important to perform proper maintenance or repurpose the light," said John Merva, Gardasoft’s vice president, North America. "Triniti allows users to easily access information to make the best decisions possible about their lighting, and the overall performance of the machine vision system itself."

Beyond medical and manufacturing applications

In the digital age, libraries and academic institutions may be the last bastions of the printed word, but even they know the importance of going paperless. i2s manufactures several types of book scanners to preserve historical publications while allowing numerous users to access the digital assets through the cloud.

The company’s CopiBook series uses an area scan camera that can scan an A2-size page (420 x 594 mm) in 0.3 seconds, compared with 4 seconds for a line scan camera, increasing overall productivity of the scanning process by more than 30 percent.
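The "more than 30 percent overall" figure makes sense once per-page handling time is accounted for: the scan itself gets roughly 13 times faster, but page turning and alignment dominate the cycle. A quick estimate, assuming a hypothetical 10 seconds of handling per page (only the 0.3 s and 4 s scan times come from the article):

```python
# Illustrative throughput estimate. HANDLING_S is an assumption;
# the scan times are from the article.
HANDLING_S = 10.0

def pages_per_hour(scan_s):
    return 3600 / (HANDLING_S + scan_s)

line_scan = pages_per_hour(4.0)    # ~257 pages/hour
area_scan = pages_per_hour(0.3)    # ~350 pages/hour
gain = (area_scan - line_scan) / line_scan   # ~0.36, i.e., just over 30 percent
```

Under this assumption the overall gain lands around 36 percent, consistent with the article's claim even though the scan step alone is an order of magnitude faster.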

File sizes vary depending on the length of the book and the type of content within, but with i2s’s ability to place the digitized books in the cloud, the sky is the limit. For example, a 500-page book could generate between 200 and 500 MB of data, but i2s’s customers often need entire collections scanned.

"Standard collections can have 10,000 books, which is 2 to 5 TB for a project as a whole," said i2s CEO Xavier Datin. "We also have some customers with 500,000 books, which takes the project up to 250 TB."
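Datin's figures check out with simple arithmetic, using decimal units (1 TB = 1,000,000 MB) and the 200-500 MB per-book range quoted above:

```python
# Back-of-the-envelope check of the storage figures quoted in the article.
MB_PER_TB = 1_000_000

def collection_tb(books, mb_per_book):
    """Total collection size in terabytes (decimal units)."""
    return books * mb_per_book / MB_PER_TB

low   = collection_tb(10_000, 200)    # 2.0 TB
high  = collection_tb(10_000, 500)    # 5.0 TB
large = collection_tb(500_000, 500)   # 250.0 TB
```

A 10,000-book collection spans 2-5 TB, and a 500,000-book collection at the top of the per-book range reaches the 250 TB Datin cites.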

As the IoT delivers its promise of connecting data, devices, and people, machine vision is finding new ways to utilize data analysis, deep learning, and all that cloud computing has to offer. 

Winn Hardin is contributing editor for AIA. This article originally appeared in Vision Online. AIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, cvavra@cfemedia.com.

Original content can be found at www.visiononline.org.