System Error Budgets, Accuracy, Resolution

AT A GLANCE

Accuracy, resolution

Error budget

Sources of error tables

Tutorial, examples, graphs

Any system design requires development and analysis of system error budgets. System engineers must determine the necessary levels of accuracy for system elements, including field sensors, actuators, signal conditioning modules (SCMs), and controlling units (PCs and PLCs). In addition, error budget considerations need to include software algorithm integrity and operating system compatibility, or the degree of software ‘openness.’

For example, the accuracy and resolution of software algorithm calculations must be compatible with measurement accuracy. When analyzing system accuracy needs, resolution requires attention as it relates to overall accuracy. The distinction between accuracy and resolution is often misinterpreted when determining system needs; while the two are related, significant differences exist.

Accuracy is the measurement device’s degree of absolute correctness, whereas resolution is the smallest number that can be displayed or recorded by the measurement device. For example, a reading device may measure 1 volt to within a tight accuracy tolerance yet display only three digits, or display six digits while guaranteeing far looser accuracy; a specified accuracy and a digit count are two different statements about an instrument.

For the following examples, the digital display quantizing error (the ±½-count uncertainty in the last displayed digit) is ignored.

Is it accurate? It depends

Suppose a voltage source is known to be exactly 5.643 volts. Now imagine using a digital voltmeter that is 100% accurate but has only three display digits, defined as ‘3-digit resolution.’ The reading would be 5.64 volts. Is the reading accurate? There was an accurate source and an accurate voltmeter, yet the reading does not represent the actual voltage value. Some may say that our 100% accurate voltmeter gave us a reading error of 3 millivolts, or 0.0532%. The reading could be considered in error unless only a 3-digit reading is desired. In cases where source and instrument accuracy are 100%, the resolution of the reading instrument and the acceptance of the observer determine what constitutes ‘accuracy.’
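For readers who want to experiment, here is a minimal Python sketch of that calculation; the displayed() helper is hypothetical, standing in for an ideal digit-limited display:

import math

def displayed(value: float, digits: int) -> float:
    # Round to `digits` significant figures, as an ideal
    # digit-limited display would.
    exp = math.floor(math.log10(abs(value)))
    factor = 10 ** (digits - 1 - exp)
    return round(value * factor) / factor

source = 5.643                     # exactly known source, volts
reading = displayed(source, 3)     # 3-digit display shows 5.64
error = (source - reading) / source * 100
print(f"reading {reading} V, error {error:.4f}%")   # ~0.0532%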

Now assume the same 100% accurate source of 5.643 volts, but in this case the 3-digit display digital voltmeter has a specified accuracy of less than 100%. The reading now contains both the instrument’s accuracy error and the 3-digit resolution limit; once again, the resolution and the observer determine what constitutes ‘accuracy.’

Next, consider measuring the precise 5.643-volt source using a 5-digit display digital voltmeter with a specified (less than perfect) accuracy. The display now has enough resolution to show every digit of the source value, yet the reading may still differ from 5.643 volts by as much as the instrument’s rated accuracy.

Taking the same measurement using a 6-digit display digital voltmeter with the same specified accuracy adds another display digit but no additional accuracy; the extra digit merely resolves values the instrument cannot guarantee.

Clearly, these examples show the relation between accuracy and resolution and how each situation has to be evaluated based on the system requirements and the observer’s acceptance of ‘error.’
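The interaction between a rated accuracy and a digit count can be seen in a short sketch; the ±0.1% instrument accuracy here is an assumed illustrative value, not a figure from the article:

import math

def displayed(value: float, digits: int) -> float:
    # Round to `digits` significant figures, like a digit-limited display.
    exp = math.floor(math.log10(abs(value)))
    factor = 10 ** (digits - 1 - exp)
    return round(value * factor) / factor

source = 5.643
acc = 0.001   # assumed ±0.1% instrument accuracy, illustrative only
for digits in (3, 5, 6):
    lo = displayed(source * (1 - acc), digits)
    hi = displayed(source * (1 + acc), digits)
    print(f"{digits}-digit display: reading lies between {lo} and {hi} V")

Note that the extra digits of the 6-digit meter spread across the same ±0.1% uncertainty band; they add resolution, not accuracy.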

Typically, sensor signals are conditioned with signal conditioning modules (SCMs), selected (multiplexed), and then converted into a usable number for either analytical process control or observation.

In the graphic ‘Typical instrumentation signal flow,’ assume the sensor, SCM, and converter elements each contribute an individual accuracy error, E1, E2, and E3 (the ‘Accuracy calculation table’ below uses ±0.25%, ±0.05%, and ±0.03%). To combine individual element errors, RMS (root-mean-square) calculations are often used as opposed to the worst-case maximum and minimum. RMS error is defined as the square root of the sum of each error squared, √(E1² + E2² + E3²).
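The ‘Accuracy calculation table’ below can be reproduced with a few lines of Python; the ±0.25%, ±0.05%, and ±0.03% element errors are the values implied by the table’s correction factors:

import math

errors = [0.0025, 0.0005, 0.0003]             # E1, E2, E3
max_corr = math.prod(1 + e for e in errors)   # 1.00330215
min_corr = math.prod(1 - e for e in errors)   # 0.99670215
rms = math.sqrt(sum(e * e for e in errors))   # 0.0025671 -> ±0.25671%
print(f"max {max_corr:.8f}  min {min_corr:.8f}  rms ±{rms * 100:.6f}%")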

Number values from the converter function are presented to the observer on a display unit that has its own error and resolution specifications. In addition, the analytical processing function imports this numerical value for use in complex mathematical operations, which may be based on empirical models that achieve results through computational shortcuts and thus carry additional accuracy and resolution specifications. A detailed error budget must, therefore, contain numerous factors to correctly determine ‘system accuracy.’

Resolution isn’t accuracy

Suppose your project manager drops by, makes small talk about how the control system project is going, then asks if you and your team have any needs. Before leaving, he comments that while reviewing project purchases he noticed that the analog-to-digital converter (ADC) module for the main controller is an 8-channel differential input unit with 16-bit resolution, which he thought would give one part in (2^16 − 1) accuracy, something on the order of 0.0015%. He says he also noticed that all the SCMs you purchased were specified at only 0.03%. Politely, he asks you to explain. You begin your explanation by pictorially representing one possible ADC system, as illustrated in the ‘Typical instrumentation signal flow’ graphic. ADCs are advertised as having ‘n’ bit resolution, you mention, which often is misunderstood to mean accuracy.
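The manager’s arithmetic is easy to check; this fragment simply evaluates one part in (2^16 − 1), which is a resolution figure, not an accuracy figure:

n = 16
resolution = 1 / (2**n - 1)
print(f"{n}-bit resolution: one part in {2**n - 1} = {resolution * 100:.4f}%")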

Specifications do need to be closely examined, however, to determine the unit’s accuracy. The ‘Conceptual ADC system’ graphic depicts a typical scheme used to convert an analog signal into a digital representation for computer manipulation or display. In this representation, semiconductor switches select analog input signals, which are captured (sampled for a small slice of time) and held in a sample-and-hold amplifier (SHA) function block. The SHA function may also contain a programmable gain stage to selectively scale each analog input. Once a signal slice is captured, an n-bit counter begins counting.

Counter contents are converted to an analog voltage using switched resistors or current sources. When this analog signal equals the held SHA signal, counting halts and the counter contents are made available as a digital representation of the sampled analog input value. This process can sample analog inputs at blinding speeds, in the 10-MHz range, to provide digital representations of time-varying analog inputs; it also has numerous sources of error that collectively degrade true accuracy, which is not determined solely by the n-bit resolution specification.
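A minimal sketch of this counting conversion follows; the 10 V reference, 14-bit width, and 5.643 V input are illustrative assumptions, not values from the article:

def counting_adc(v_in: float, v_ref: float, n_bits: int) -> int:
    # Ramp an n-bit counter; a DAC built from switched resistors or
    # current sources converts the count to a voltage, and counting
    # halts when that voltage reaches the held sample.
    levels = 2**n_bits - 1
    for count in range(levels + 1):
        if v_ref * count / levels >= v_in:
            return count
    return levels

code = counting_adc(5.643, 10.0, 14)          # illustrative values
print(code, f"{code * 10.0 / (2**14 - 1):.5f} V")

Even with ideal components, the returned code only approximates the input to within one count, which is the quantizing error discussed below.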

Here are five errors associated with using a typical ADC scheme as illustrated:

Sampling speed. From the Nyquist sampling theorem, if the analog signal changes rapidly, the ADC must sample at more than twice the highest frequency present in the input; many applications use a sampling rate at least 10 times that frequency. Sampling at less than twice the signal frequency causes aliasing and results in inaccurate readings (see the sketch following this list). Most ADC sampling speeds are adequate for slowly varying process control signals.

Input multiplexer errors. The input multiplexer circuit may have OpAmp buffers on each input line that introduce errors such as voltage offset, current bias, and nonlinearity. In addition (and more common) are the two major multiplexer errors: (a) crosstalk between channels, that is, the signal from an ‘on’ channel leaks current into the ‘off’ channels, and (b) signal reduction through voltage division, caused by the finite non-zero resistance of the semiconductor switches and the finite input impedance of the circuitry that follows.

Sample and hold amplifier (SHA). This function is an OpAmp-based circuit with capacitive components, designed to switch, buffer, and hold the sampled analog voltage value. Consequently, it exhibits linearity, gain, power supply rejection ratio (PSRR), voltage offset, charge injection, and input bias current errors.

Converter. The counter, comparator, and converter circuitry contributes such errors as overall linearity, quantizing error (defined as uncertainty in the least significant bit [LSB], typically ±½ LSB), and gain and zero (offset) errors.

Temperature. All analog circuit functions within the ADC unit are subject to temperature errors; therefore, an overall temperature error specification is assigned to an ADC.
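As a quick illustration of the sampling-speed item above, the following fragment computes the alias frequency for an undersampled signal; the 60 Hz and 50 Hz values are assumed examples, not figures from the article:

# A 60 Hz signal sampled at 50 Hz (well under the required 120 Hz)
# folds down to a false low frequency.
f_signal, f_sample = 60.0, 50.0
f_alias = abs(f_signal - round(f_signal / f_sample) * f_sample)
print(f"{f_signal} Hz sampled at {f_sample} Hz reads as {f_alias} Hz")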

Data for the ‘Typical 14-bit ADC errors’ table were taken from various analog-to-digital converter manufacturers. This isn’t a complete ADC error analysis; the values shown illustrate that internal errors collectively contribute to the definition of an ADC’s overall accuracy.

An algebraic sum of these errors suggests an overall error many times larger than the ADC resolution, defined as approximately 1/(2^n − 1). To determine actual ADC accuracy, the manufacturer’s ADC specifications should always be carefully examined, along with specifications for sensors and SCMs.
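Using the ppm values from the ‘Typical 14-bit ADC errors’ table below, a short calculation shows how far the combined error sits from the 14-bit resolution; the RMS combination is included for comparison with the earlier discussion:

import math

ppm = [0.01, 0.003,             # multiplexer
       100, 100, 10, 80, 0.08,  # sample and hold amplifier
       60, 122, 60, 300, 300]   # converter
algebraic = sum(ppm)                       # ~1132 ppm, ~0.113%
rms = math.sqrt(sum(e * e for e in ppm))   # ~478 ppm
resolution = 1e6 / (2**14 - 1)             # ~61 ppm
print(f"algebraic {algebraic:.0f} ppm  RMS {rms:.0f} ppm  "
      f"resolution {resolution:.0f} ppm")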

For more information, visit www.dataforth.com.

Accuracy calculation table

Method                       Correction factor   Percent error
Max: (1+E1)(1+E2)(1+E3)      ×1.00330215         +0.330215037%
Min: (1−E1)(1−E2)(1−E3)      ×0.99670215         −0.329785037%
RMS Max: (1 + RMS error)     ×1.00256710         +0.256709953%
RMS Min: (1 − RMS error)     ×0.99743290         −0.256709953%

Source: Control Engineering with information from Dataforth

Typical 14-bit ADC errors: temperature change 30 °C; errors in ppm of full scale

Function                          Cause of error         Typical (ppm)
Multiplexer                       Cross leakage          0.01
                                  Switch resistance      0.003
Sample and hold amplifier (SHA)   Nonlinearity           100
                                  Gain                   100
                                  PSRR                   10
                                  Voltage offset         80
                                  Bias input current     0.08
Converter                         PSRR                   60
                                  Nonlinearity           122
                                  Quantizing             60
                                  Gain vs. temperature   300
                                  Zero vs. temperature   300

Source: Control Engineering with information from Dataforth