Vision and Discrete Sensors

Discrete sensors measure presence, proximity, distance, vibration, direction, motion and other parameters. Discrete sensors often are named by function or technology, such as proximity sensor or laser sensor. Machine vision technologies use sensors and software to see, collect measurements, analyze inputs and record data or make decisions based on those measurements.

Vision and Discrete Sensors Content

Machine vision, automation streamline logistics and warehousing operations

Machine vision and automation advancements are improving logistics and warehousing operations by taking advantage of developments with autonomous mobile robots (AMRs), deep learning and more.

Machine Vision Insights

  • Due to staffing shortages and major supply chain issues, manufacturers are placing greater emphasis on automation to meet surging demand. Machine vision and sensing technologies are a major part of this trend.
  • Autonomous mobile robots (AMRs) and automated guided vehicles (AGVs) use machine vision and sensing technologies to navigate freely and are a key part of logistics automation.
  • Machine learning (ML) and deep learning (DL) in embedded platforms are being used to solve applications that were once very challenging, as well as to glean new insights from the sensing and machine vision advancements being developed on the manufacturing floor.

Worldwide, 131 billion parcels were shipped in 2020, according to the 2021 Pitney Bowes Parcel Shipping Index. By 2026, that number is expected to more than double, accelerated by a global pandemic and growing e-commerce industry. With this sharp increase in retail purchases made online, the need to automate logistics, warehouse, and shipping processes has become a top priority.1

Package measurement, quality inspection, barcode reading, optical character recognition/optical character verification (OCR/OCV), and material handling optimization, which many companies currently carry out manually, are key stages of the shipping industry value chain that lend themselves to automation.

“Logistics, warehousing, and shipping organizations are struggling to operate faster. But speeding things up means accuracy and precision are imperative because there’s no time to deal with errors. And then there are the staffing issues,” said Mark Wheeler, director of supply chain solutions, Zebra Technologies. “When you put those three things together, what you get is a market that’s very open to trying new things by combining existing and new technologies in innovative ways.”

Much of this innovation centers around machine vision.

Vision-guided robotics

In a warehouse or distribution center, pallet loads typically mark the beginning and end of the warehousing process. Upon entry to a facility, pallet loads are either depalletized into individual cases or stored as full pallets. Depalletizing applications have transitioned from using mostly manual labor to relying on vision-guided robotics. Machine vision accelerates this process by localizing the next package to pick while the robot is placing the previous load on the conveyor.

“Most packages arrive at, and leave from, warehouses as pallet loads,” said Garrett Place, business development, robotics perception, ifm efector, inc. “Their journey through the modern warehouse is at the heart of most machine vision applications in logistics.”

Ben Carey, senior manager, logistics vision products, Cognex Corporation, agreed. “Machine vision applications in logistics span four areas: gauging, inspection, guidance, and identification. Each of these areas is present from the inbound receiving processes through sorting all the way to outbound check points.”

Autonomous mobile robots (AMRs)

Ask a machine vision solution developer about the best way to bring repeatability to a use case, and they will likely say something about limiting the number of variables. After all, variables create edge cases. But most warehousing and logistics operations move packages that can be any color, size, shape, and material. This degree of variability makes technology selection — and solution creation — extremely difficult.

“The Amazon Pick challenge in years past is a perfect example of this,” Place said, “and a primary reason most machine vision use cases in logistics are multicamera and multimodal. One camera and one technology are just not enough to manage the variability in these types of applications.”

John Leonard, Zivid product marketing manager, concurred. “The major applications are depalletization and palletization of boxes entering and leaving a facility. In between these in/out operations are mostly piece-picking operations and order picking to fulfill orders,” he explains. “These are accomplished using different methods, which vary from place to place.”

These methods include autonomous mobile robots (AMRs) guided by onboard 3D vision. AMRs can, for instance, travel autonomously to walls of bins to find and select items. Robots can also pick items fed by a conveyor. Other mobile robots may carry items to vision stations so that the type and amount of goods can be inspected.

Zebra's FlexShelf Guide, which provides flexible configurations for bin sizing and spacing, expands the types of items that can be picked using AMRs. Courtesy: Zebra Technologies/A3

Automatic guided vehicles (AGVs)

Alternatively, for full pallet load storage, many warehouses deploy automatic guided vehicles (AGVs) to pick and store pallets for retrieval. During travel, AGVs rely on machine vision for pallet pose and obstacle detection. Machine vision code reading tracks pallet and caseloads throughout the process.4

When full pallets are ready to leave a facility, AGVs manage the movement while robotic arms convert caseloads to full pallets. These ready-to-ship pallets are then weighed and measured before entering the truck, making pallet dimensioning another strong use case for machine vision in logistics.

“The industry has undergone a shift, moving from assessing shipping fees strictly by weight to charging by dimensional weight, making accurate dimensional measurement more critical than ever,” said Daniel Howe, regional development manager – Americas, LMI Technologies. “Smart 3D sensors are a key driver for greater automation for processes in packaging and logistics, including volume dimensioning, sizing, sorting, and surface defect detection.”
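Dimensional-weight billing can be sketched with a short calculation. The divisor used here (5000 cm³/kg) is a common volumetric divisor chosen for illustration only; actual divisors vary by carrier and service, and are not specified in this article.

```python
# Illustrative sketch of dimensional ("volumetric") weight billing.
# The 5000 cm^3/kg divisor is an assumption for this example; carriers
# publish their own divisors per service.

def dimensional_weight_kg(length_cm: float, width_cm: float, height_cm: float,
                          divisor: float = 5000.0) -> float:
    """Return dimensional weight in kg from box dimensions in cm."""
    return (length_cm * width_cm * height_cm) / divisor

def billable_weight_kg(actual_kg: float, length_cm: float, width_cm: float,
                       height_cm: float) -> float:
    """Carriers typically bill the greater of actual and dimensional weight."""
    return max(actual_kg, dimensional_weight_kg(length_cm, width_cm, height_cm))

# A 40 x 30 x 30 cm box weighing 2 kg:
# dimensional weight = 36000 / 5000 = 7.2 kg, so billing uses 7.2 kg.
print(billable_weight_kg(2.0, 40, 30, 30))  # 7.2
```

This is why a light but bulky parcel can cost more to ship than a small, dense one, and why automated dimensioning directly affects revenue accuracy.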

Many AMRs and AGVs rely on the ifm efector O3R platform for robotic perception. It consists of compact camera heads (VGA cameras and time-of-flight sensors) and a vision processing unit (VPU) with NVIDIA Jetson TX2 for the evaluation of the data. Up to six camera heads can be connected to the Linux-based device, including sensors from other companies.3

High demand for increased speed, throughput

While there are many challenges in logistics and warehousing applications, the demand for greater speed and increased throughput is constant. For example, items wrapped in transparent poly bags are difficult to image because of how they reflect light. Other piece-picking operations may require color as part of the item detection process, which may necessitate 3D vision that supports color information in the image.

Calibration is a big challenge with all 3D cameras. They are engineered to work in the range of micrometers, and the knocks, temperature fluctuations, and vibrations common in industrial settings can easily affect their calibration, and thus their accuracy, according to Leonard.

“Some cameras, such as Zivid 3D cameras, are specifically designed and built to operate in industrial settings, are rated to IP65 and have automatic calibration features,” Leonard said. “This means if the temperature changes by, say, 5 degrees due to a large roller door being opened and closed, a very common occurrence in a logistics warehouse, then the camera adjusts for this to remain perfectly calibrated.”

Box volume dimensioning and void filling

LMI has developed the ultrawide field of view (FOV) Gocator 2490 sensor, which is specifically designed to provide fast and accurate parcel dimension measurement for shipping. Another application measures boxes to provide an accurate volumetric measurement for determining dimensional weight. Boxes may be traveling on a conveyor at speeds of 2 m/s. A single wide field of view Gocator 2490 smart sensor can scan and measure complete box dimensions (W x H x D) over a 1 m x 1 m scan area at a rate of 800 Hz and provide resolution of 2.5 mm in all three dimensions (X, Y, Z), according to Howe.
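A quick back-of-the-envelope check shows why these figures hang together: a conveyor moving at 2 m/s scanned at 800 Hz yields one profile every 2.5 mm along the direction of travel, matching the quoted resolution.

```python
# Sanity check on the stated scan numbers: conveyor speed divided by scan
# rate gives the spacing between successive profiles along the direction
# of travel.
conveyor_speed_mm_s = 2000.0   # 2 m/s
scan_rate_hz = 800.0
profile_spacing_mm = conveyor_speed_mm_s / scan_rate_hz
print(profile_spacing_mm)  # 2.5 mm, consistent with the stated resolution
```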

“Competing camera-based systems typically offer just 3-to-5-millimeter resolution in the X, Y, and Z axes. However, each of our sensors varies in measurement range and resolution, so it is essential to pick the correct one for your application,” Howe said. “The Gocator 2490 has a high enough resolution to measure not only the dimensions of a variety of parcel sizes but even detect subtle defects in the packaging. This in-line inspection functionality allows a pass/fail decision to be triggered if a package with a defect is detected.”

The Gocator 2490 has also opened up opportunities to solve more advanced packaging applications like void filling, which involves scanning an open package with items in it and determining how much packaging material is required to fill the empty space. For this application, a dual camera configuration helps avoid occlusion within the box or tote.
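The void-fill calculation can be sketched as integrating the empty space between the box rim and the scanned contents. This is a hypothetical illustration assuming the sensor delivers a height map of the open box interior on a regular grid; the function names and grid resolution are illustrative, not taken from any specific sensor SDK.

```python
import numpy as np

# Hypothetical sketch of void-fill estimation from a 3D scan, assuming the
# sensor returns a height map (mm) of the box contents sampled on a regular
# grid. Names and values here are illustrative assumptions.

def void_volume_cm3(height_map_mm: np.ndarray, box_depth_mm: float,
                    pixel_area_mm2: float) -> float:
    """Integrate the empty space between the box rim and the item surface.

    height_map_mm: height of contents above the box floor at each grid cell.
    box_depth_mm: interior depth of the box (floor to rim).
    pixel_area_mm2: ground-plane area covered by one grid cell.
    """
    gap_mm = np.clip(box_depth_mm - height_map_mm, 0.0, None)
    return float(gap_mm.sum() * pixel_area_mm2 / 1000.0)  # mm^3 -> cm^3

# Example: a 10 x 10 grid of 1 cm^2 cells over a 200 mm deep box whose
# contents sit 150 mm high everywhere leaves a 50 mm gap in each cell:
hm = np.full((10, 10), 150.0)
print(void_volume_cm3(hm, 200.0, 100.0))  # 500.0 cm^3 of fill needed
```

The dual-camera configuration mentioned above would feed a merged, occlusion-free height map into a calculation of this kind.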

Deep learning on the edge

Challenges for machine vision in logistics arise when complexity multiplies in an application, for example, trying to detect different types of objects of varying dimensions in random orientations on a high-speed conveyor. Traditional rules-based machine vision for detection/inspection struggles in these situations.

However, easy-to-use machine learning (ML) and deep learning (DL) in embedded platforms are emerging to solve previously challenging applications. For example, Cognex recently launched In-Sight 2800 with edge learning that is easy to set up with no programming required. The In-Sight 2800 gives fast and accurate classification of everything from boxes to totes to poly bags and runs entirely onboard the smart camera, according to Carey.

“Technologies such as edge learning on the In-Sight 2800 increase package detection rates, leading to less manual rework and enabling better order accuracy through more advanced material handling automation,” Carey said. “Our customers benefit from increased processing speed with less manual interaction, allowing these companies to manage fluctuating demand without changing headcount, which continues to be a challenge in today’s labor-constrained environment.”

Democratizing machine vision

Most of the technologies being deployed in the modern warehouse, including 2D and 3D cameras and increased compute power, are iterations of previously known approaches. What is somewhat new is the utilization of all these technologies in multicamera, multimodal strategies with large processing capability, in combination with ML, to manage the application.

“We used to see single vendor solutions in the warehouse,” Place explains. “We now see a combination of vendors and technologies, each with their own strengths, deployed in unison to solve the challenge. This approach will continue to unlock use cases previously untouched by machine vision. Think of it as a democratizing of machine vision in warehousing and logistics.”

It’s difficult to put a finger on a single technology advance that is unlocking new use cases for machine vision in warehousing and logistics. Of course, cameras are providing better, more repeatable data and compute is faster, but nothing has changed the game. The biggest advance is in how easy the components are to use in a multi-technology approach to solving problems in the warehouse.

“Logistics is moving toward robotics as a primary method to manage the massive growth in the industry,” concludes Place.  “Robotics is an integration problem. Machine vision, with all of its complexities, is moving from a single camera focus to one that reduces friction on the integration of all of the components required for the modern warehouse. This approach will take us to the next step in this journey.”

– This originally appeared on the Association for Advancing Automation’s (A3) website. A3 is a CFE Media and Technology content partner. Edited by Chris Vavra, web content manager, Control Engineering, CFE Media and Technology.


  1. LMI Technologies – LMI Technologies: 3D Smart Sensors for …,
  2. Zebra Technologies Corporation – Zebra Technologies …,
  3. ifm efector O3R Democratizes Robotic Perception – Robotics …,
  4. Automated high-speed pallet storage and data capture. (n.d.). www.cognex.com. Retrieved April 20, 2022, from
  5. Zebra adds to fulfillment stripes with Fetch Robotics AMRs …,

Vision and Discrete Sensors FAQ

  • What is a discrete sensor?

    A discrete sensor is a type of sensor that is designed to detect the presence or absence of a specific condition or physical characteristic. Discrete sensors typically have two states: an "on" or "active" state when the sensor is detecting the condition or characteristic it is designed for, and an "off" or "inactive" state when it is not. Discrete sensors detect presence, proximity, distance, vibration, direction, motion and other conditions.

  • What is the function of machine vision?

    The function of machine vision is to provide a way for machines to "see" and interpret the visual world around them, much like the human eye does, though often more quickly and with greater attention to detail. Machine vision systems use cameras, image processing software and other sensors to capture and analyze visual data, and then use that information to make decisions and control other devices or processes.

    One of the key functions of machine vision is to inspect and analyze parts or products for quality and defects. This can include tasks such as identifying the presence or absence of certain features, measuring dimensions or positions or detecting defects, such as cracks or contaminants. Machine vision systems can also be used for tasks such as automated inspection, guidance and navigation and for monitoring and controlling industrial processes.

    Machine vision systems are widely used in manufacturing, inspection and quality control, as well as in robotics, transportation and security. They increasingly are being used in other fields such as healthcare, agriculture and entertainment.

  • What are the main types of discrete sensors?

    Examples of discrete sensors include:

    1. Limit switches, which detect the physical presence or absence of an object by sensing when it comes into contact with a switch.
    2. Proximity sensors, which detect the presence of an object by sensing its proximity to the sensor.
    3. Photoelectric sensors, which detect the presence of an object by sensing its interruption of a light beam.
    4. Reed switches, which detect the presence of a magnetic field.

    Discrete sensors are used in a wide range of applications, such as industrial automation, robotics and process control. They are simple, robust and reliable, and are often used to detect the presence or absence of objects, or to sense when a machine or process has reached a specific point or condition.

  • What is machine vision in simple terms?

    In simple terms, machine vision allows machines to "see" and understand the world around them, similar to humans, often with greater detail and speed. This technology is used in a wide range of applications, including quality control in manufacturing, navigation for autonomous vehicles, and security systems.

    Machine vision typically involves the use of algorithms to identify and track objects, recognize patterns, and make decisions based on the visual data. The technology is constantly evolving, and new developments in fields such as artificial intelligence and computer vision are leading to even more advanced applications of machine vision.

Some FAQ content was compiled with the assistance of ChatGPT. Due to the limitations of AI tools, all content was edited and reviewed by our content team.

Related Resources