Filter out the noise: strategies to avoid excessive data and spikes

New software and input/output (I/O) devices can help users eliminate useless data through effective diagnostics, error localization and filter designs to extract accurate figures.
By Andy Garrido, Beckhoff Automation October 17, 2018

Spikes in vibration, temperature, electrical consumption or other measurements could indicate mechanical and electrical flaws. EtherCAT I/O terminals with built-in signal processing can reduce noise during data collection. Courtesy: Beckhoff Automation

Advances in control technology, networking capabilities and the rise of cloud and edge computing have all made it easier to collect and disseminate high-value data. The data can be sent to dashboards across the enterprise to create more efficient, insightful systems. However, inaccurate data can negatively affect analyses, creating misleading results and potentially prompting poor decisions by plant personnel. Tracking down the causes and eliminating them is difficult without the correct tools.

Source of the error?

With data acquisition systems, the key challenge could be an excess of data that hinders decision-making, a mechanical issue in the system, a fieldbus error, or a random spike that is an annoyance rather than a reason for corrective action. The situation is similar to feedback at a concert, where noise cuts through the music and makes songs difficult to understand, much less enjoy. It is impossible to fix this feedback without understanding whether it's the result of problems with an instrument, the sound system or the cabling: in other words, the signal sender, the signal receiver, or something in between.

Using available filters in some input/output (I/O) terminals, diagnostic tools, and custom filters designed for data acquisition systems can help an engineer determine where the error lies. This is the first step to reducing or eliminating useless data, which will lower networking overhead, reduce data storage requirements, and lead to more actionable analysis.

How to improve diagnostics and error localization

If data show regular spikes in vibration, temperature, electrical consumption or other measurements, this could indicate mechanical and electrical flaws, such as a malfunctioning fan, or installation errors, such as loose or incorrect connections on the I/O rack or field sensors. Identifying issues with the signal sender—the "instrument"—does not have to be complicated or time-consuming. Installing EtherCAT I/O terminals with built-in signal processing can eliminate a significant amount of noise during data collection.

High-end analog terminals provide flexible configurations, and they offer finite impulse response (FIR) filters up to the 39th order and infinite impulse response (IIR) filters up to the sixth order, which are easily configurable with PC-based control software. Even basic analog I/O terminals offer some useful filtering options.
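The principle behind these terminal-level filters is easy to sketch. The pure-Python moving average below is a simple FIR filter whose coefficients all equal 1/N; it is an illustration of how FIR filtering smears out an errant spike, not a reproduction of any vendor's firmware, and the signal values are invented for the example.

```python
def fir_filter(samples, coefficients):
    """Apply a causal FIR filter: y[n] = sum_k h[k] * x[n-k] (zero initial state)."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, h in enumerate(coefficients):
            if n - k >= 0:
                acc += h * samples[n - k]
        out.append(acc)
    return out

# A 5-tap moving average (all taps equal) spreads a single-sample spike
taps = [1.0 / 5] * 5
raw = [2.0] * 100
raw[50] = 12.0               # one errant spike on an otherwise steady signal
smoothed = fir_filter(raw, taps)
# The 10-unit spike is distributed over 5 samples, so the filtered
# signal peaks at 2 + 10/5 = 4 instead of 12.
```

A 39th-order FIR filter works the same way, just with 40 carefully chosen coefficients instead of five equal ones.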

If problems persist even with I/O-level filters, a practical next step would be to use advanced diagnostic tools in PC-based automation software to pinpoint errors in the network or the field devices. EtherCAT networking technology is especially capable of rapid, automatic error localization. The fieldbus transmits cyclical diagnostic messages to the EtherCAT master, and LEDs also can display the status of devices, network connections, I/O and the power supply.

For more accurate data collection, it’s important to track down the causes of all issues, even if they do not overstrain the network’s self-healing capacity. These could include damaged cables, electromagnetic compatibility (EMC) influences, or defective push-in connectors. The cyclic redundancy check (CRC) of the EtherCAT protocol also can detect bit faults during data transfer.
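In an EtherCAT network the CRC is computed per frame in hardware, but the detection principle can be shown with Python's standard-library CRC-32: a single bit flipped in transit, as an EMC disturbance might cause, yields a different checksum at the receiver. The frame contents below are arbitrary stand-in data.

```python
import zlib

frame = bytes(range(64))             # stand-in for cyclic process data
crc_sent = zlib.crc32(frame)         # checksum appended by the sender

# Simulate an EMC-induced bit fault on the wire: flip one bit in byte 10
corrupted = bytearray(frame)
corrupted[10] ^= 0x04

crc_received = zlib.crc32(bytes(corrupted))
bit_fault_detected = (crc_received != crc_sent)   # receiver flags the frame
```

Because a CRC always detects single-bit errors, the receiver can discard the corrupted frame and the diagnostics can log on which segment the fault occurred.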

Using diagnostic tools to test a new system, including the configuration of I/O hardware, can prevent issues during commissioning and production. The built-in network topology recognition available through EtherCAT checks if individual devices match the saved configuration. These error localization tools are just as beneficial when issues arise in legacy systems. By targeting the "instruments" causing the spikes, engineers can improve data acquisition and system reliability. 
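The configuration check amounts to comparing the saved device list against what the network scan actually finds, slot by slot. The sketch below is a simplified stand-in for logic an EtherCAT master performs internally during topology recognition; the terminal names are example values, and a real scan identifies devices by vendor and product IDs rather than strings.

```python
# Saved configuration vs. devices actually discovered on the network.
saved_config = ["EK1100", "EL3002", "EL2004", "EL9011"]
scanned      = ["EK1100", "EL3002", "EL2008", "EL9011"]

def topology_mismatches(expected, found):
    """Return (position, expected, found) for every slot that differs."""
    errors = []
    for i in range(max(len(expected), len(found))):
        exp = expected[i] if i < len(expected) else None
        got = found[i] if i < len(found) else None
        if exp != got:
            errors.append((i, exp, got))
    return errors

mismatches = topology_mismatches(saved_config, scanned)
# → [(2, 'EL2004', 'EL2008')]: the wrong output terminal is in slot 2
```

The same comparison catches a swapped terminal in a legacy system just as well as a miswired new installation.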

How to eliminate noise with oversampling and custom filters

When gathered data offer unusable results, use diagnostic tools to track down issues in the field and active filter designs to eliminate errant signals and spikes. Courtesy: Beckhoff Automation

Mechanical and electrical issues in the field are not the only causes of bad data; some spikes and noise occur randomly as quirks of essentially normal operation. In addition, engineers may want to eliminate some signals that are constant but useless, such as a fan that creates resonance due to vibration. In the same way that a sound technician at a concert can identify and filter frequencies causing feedback using a sound mixing console, a controls engineer can use tools located in the control software to isolate and filter out frequencies that distort data.

Oversampling technology helps engineers identify exactly where these issues are occurring by examining signals in shorter intervals. This method can also identify issues in the field other methods cannot pinpoint because of lower resolution and precision. It is easiest to examine the results of oversampling on a graphical display, which shows the spike on the signal wave and helps users select the best coefficients for a digital filter.
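Why the shorter intervals matter can be shown with a toy example: a transient that lasts a fraction of the controller's cycle time can fall entirely between base-rate samples, while a 10x oversampled series still catches it. The cycle time, oversampling factor and signal shape below are invented for illustration.

```python
def signal_at(t):
    """Process value with a 0.2 ms transient spike between PLC cycle boundaries."""
    return 5.0 if 0.00055 <= t < 0.00075 else 1.0

cycle = 0.001            # 1 ms PLC cycle time
oversampling = 10        # 10 sub-samples per cycle -> 100 microsecond resolution

base_rate  = [signal_at(k * cycle) for k in range(10)]
oversampled = [signal_at(k * cycle / oversampling) for k in range(100)]

# The base-rate samples land on either side of the spike and never see it;
# the oversampled series catches it at t = 0.6 ms and 0.7 ms.
```

Plotted on a graphical display, the oversampled series reveals both the spike's amplitude and its duration, which is exactly the information needed to choose filter coefficients.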

To filter out noise, some automation software provides selectable filter designs, such as Chebyshev, Butterworth, and Inverse-Chebyshev. A Chebyshev design allows a sharper frequency cutoff, but at the expense of ripple in the pass band, which grows more pronounced as the filter order increases. Butterworth offers a maximally flat pass band and better time-domain behavior, but it does not support as sharp a cutoff as a Chebyshev filter of the same order. Inverse-Chebyshev (also called Chebyshev Type II) keeps the pass band flat and confines the ripple to the stop band, while still providing a sharp cutoff.

Beyond the filter designs, the actual filter types in any software should include low pass, high pass, band pass, and band stop. Supplied as inputs to digital filter function blocks in the programmable logic controller (PLC), the filter coefficients can eliminate useless signals without requiring fundamental changes to the software or hardware.
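A band-stop filter is exactly what removes a constant fan resonance without touching the rest of the signal. The sketch below uses the textbook second-order (biquad) band-stop coefficient formulas; it is a generic illustration, not any vendor's function block, and the 50 Hz resonance, 5 Hz process signal and sample rate are assumed values.

```python
import math

def notch_coefficients(f0, fs, q):
    """Textbook biquad band-stop coefficients, normalized so a0 = 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]        # b0, b1, b2
    a = [-2 * math.cos(w0) / a0, (1 - alpha) / a0]      # a1, a2
    return b, a

def biquad(samples, b, a):
    """y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]"""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

fs = 1000.0                                   # sample rate in Hz (illustrative)
b, a = notch_coefficients(f0=50.0, fs=fs, q=5.0)

resonance = [math.sin(2 * math.pi * 50.0 * n / fs) for n in range(2000)]
process   = [math.sin(2 * math.pi * 5.0 * n / fs) for n in range(2000)]

# After the startup transient, the 50 Hz fan resonance is driven to near
# zero while the 5 Hz process signal passes through almost untouched.
removed = max(abs(v) for v in biquad(resonance, b, a)[1500:])
kept    = max(abs(v) for v in biquad(process,   b, a)[1500:])
```

Once coefficients like these are computed offline, feeding them to a PLC filter function block changes the filtering behavior without touching the rest of the program.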

Better data fundamentals

Custom filter design becomes necessary in applications that need to gather large amounts of data in microseconds, such as semiconductor manufacturing and research labs. Most manufacturing applications only require communication rates in the millisecond range, so these filters might prove excessive. However, the oversampling principle can still provide important information to track down and resolve mechanical and electrical issues.

Quite often, eliminating problems at the hardware level will resolve most issues with inefficient data acquisition. Therefore, identifying mechanical problems via the control system and using its built-in filters or leveraging EtherCAT I/O terminals are the best places to start.

Even if a custom filter becomes necessary, fixing I/O and field-level problems first will help engineers fine-tune the system to avoid awkward data acquisition situations in the future. This will ensure the controller makes informed decisions and adjusts machines based on the most accurate and useful data available.

Andy Garrido, I/O product marketing, Beckhoff Automation. Edited by Mark T. Hoske, content manager, Control Engineering, CFE Media.

KEYWORDS: Data acquisition, data quality

Data acquisition: diagnostics and error localization

Data filtering can help data quality

Data tuning eliminates problems.


Have you had challenges tracking down the causes and eliminating data quality gremlins?