System Error Budgets, Accuracy, Resolution

  • Accuracy, resolution

  • Error budget

  • Sources of error tables

  • Tutorial, examples, graphs

Any system design requires development and analysis of system error budgets. System engineers must determine the necessary levels of accuracy for system elements, including field sensors, actuators, signal conditioning modules (SCMs), and controlling units (PCs and PLCs). In addition, error budget considerations need to include software algorithm integrity and operating system compatibility, or the degree of software 'openness.'

For example, the accuracy and resolution of software algorithm calculations must be compatible with measurement accuracy. When analyzing system accuracy needs, the topic of resolution requires attention as it relates to overall accuracy. Often, the distinction between accuracy and resolution is misunderstood when determining system needs. While accuracy and resolution are related, significant differences exist.

Accuracy is the measurement device's degree of absolute correctness, whereas resolution is the smallest increment that can be displayed or recorded by the measurement device. A reading device with a tightly specified accuracy may still be limited by its resolution, and vice versa; how closely a 1-volt source can actually be measured is a question of accuracy, while the number of digits available to display that measurement is a question of resolution. For the following examples, the digital display quantizing error (the uncertainty in the last displayed digit) is neglected.

Is it accurate? It depends

Suppose a voltage source is known to be exactly 5.643 volts. Now imagine using a digital voltmeter that is 100% accurate but has only three display digits, defined here as '3-digit resolution.' The reading would be 5.64 volts. Is the reading accurate? There was an accurate source and an accurate voltmeter, yet the reading does not represent the actual voltage value. Some may say that the 100% accurate voltmeter gave a reading error of 3 millivolts, or 0.0532%. The reading could be considered in error unless only a 3-digit reading is desired. In cases where source and instrument accuracy are 100%, the resolution of the reading instrument and the acceptance of the observer determine what constitutes 'accuracy.'
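As a minimal sketch (Python), using the 5.643-volt source and 3-digit display from this example, the reading error introduced purely by display resolution can be computed directly:

# Reading error caused purely by display resolution, assuming a perfectly
# accurate 5.643 V source and a perfectly accurate 3-digit voltmeter.
true_voltage = 5.643  # volts, known exactly in this example

def displayed_reading(value, digits=3):
    """Round a value to the given number of significant display digits."""
    return float(f"{value:.{digits}g}")

reading = displayed_reading(true_voltage)                  # 5.64 V on a 3-digit display
error_mv = (true_voltage - reading) * 1000                 # ~3 mV
error_pct = 100 * (true_voltage - reading) / true_voltage  # ~0.0532%
print(f"Displayed: {reading} V, error: {error_mv:.1f} mV ({error_pct:.4f}%)")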

Now assume a 100% accurate source of 5.643 volts, but in this case the 3-digit display digital voltmeter has a less-than-perfect accuracy specification of its own. Once again, the resolution of the reading instrument and the acceptance of the observer determine what constitutes 'accuracy.'

Next, consider measuring the precise 5.643-volt source using a 5-digit display digital voltmeter with its own specified (non-ideal) accuracy. Here the display offers more digits than the source value requires, and the instrument's accuracy specification, not its resolution, limits how much the reading can be trusted.


Taking the same measurement using a 6-digit display digital voltmeter, again with a non-ideal specified accuracy, only adds display digits; it does not improve the accuracy of the reading.

Clearly, these examples show the relation between accuracy and resolution and how each situation has to be evaluated based on the system requirements and the observer's acceptance of 'error.'

Typically, sensor signals are conditioned with signal conditioning modules (SCMs), selected, and then converted into a usable number, either for analytical process control or observation.

In the graphic 'Typical instrumentation signal flow,' assume the sensors, SCMs, and converter function contribute errors E1, E2, and E3, respectively. When combining such errors, RMS (root-mean-square) calculations are often used as opposed to the worst-case maximum and minimum. RMS error is defined as the square root of the sum of each error squared, √[(E1)² + (E2)² + (E3)²].
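As a minimal sketch (Python), assuming illustrative fractional errors of E1 = 0.25%, E2 = 0.03%, and E3 = 0.05% (values consistent with the accuracy calculation table later in this article), the worst-case and RMS combinations compare as follows:

import math

# Illustrative error terms (as fractions): sensor, SCM, and converter.
E1, E2, E3 = 0.0025, 0.0003, 0.0005

# Worst case: every element errs in the same direction.
worst_max = (1 + E1) * (1 + E2) * (1 + E3)   # ~1.00330215
worst_min = (1 - E1) * (1 - E2) * (1 - E3)   # ~0.99670215

# RMS: square root of the sum of the squared errors.
rms_error = math.sqrt(E1**2 + E2**2 + E3**2)  # ~0.0025671 (0.2567%)

print(f"Worst-case max: {worst_max:.8f} (+{(worst_max - 1) * 100:.4f}%)")
print(f"Worst-case min: {worst_min:.8f} ({(worst_min - 1) * 100:.4f}%)")
print(f"RMS error:      {rms_error * 100:.4f}%")

Note that the RMS figure (about 0.26%) is tighter than the worst-case spread (about ±0.33%), which is why RMS combination is often quoted when the individual errors are statistically independent.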

Number values from the converter function are presented to the observer on a display unit that has its own error and resolution specifications. In addition, the analytical processing function imports this numerical value for use in a complex mathematical operation, which may be based on an empirical model that achieves results using computational shortcuts, which in turn carry additional accuracy and resolution specifications. A detailed error budget must, therefore, contain numerous factors to correctly determine 'system accuracy.'


Resolution isn't accuracy

Suppose your project manager drops by, makes small talk about how the control system project is going, then asks if you and your team have any needs. Before leaving, he comments that while reviewing project purchases he noticed that the analog-to-digital converter (ADC) module for the main controller is an 8-channel differential input unit with 16-bit resolution, which he thought would give one part in (2¹⁶ − 1) accuracy, something on the order of 0.0015%. He says he also noticed that all the SCMs you purchased were specified at only 0.03%. Politely, he asks you to explain. You begin your explanation by pictorially representing one possible ADC system, as illustrated in the 'Typical instrumentation signal flow' graphic. ADCs are advertised as having 'n' bit resolution, you mention, which often is misunderstood to mean accuracy.

Specifications do need to be closely examined, however, to determine the unit's accuracy. The 'Conceptual ADC system' graphic depicts a typical scheme used to convert an analog signal to a digital representation for computer manipulation or display. In this representation, semiconductor switches select analog input signals, which are captured (sampled for a small slice of time) and held in a sample-and-hold amplifier (SHA) function block. This SHA function may also contain a programmable gain stage to selectively scale each analog input. Once a signal slice is captured, an n-bit counter begins counting.

Counter contents are converted to an analog voltage using switched resistors or current sources. When this analog signal equals the held SHA signal, counting halts and the counter contents are made available as a digital representation of the sampled analog input value. This process can sample analog inputs at blinding speeds, in the 10-MHz range, to provide digital representations of time-varying analog inputs; it also has numerous sources of error that collectively degrade true accuracy, which is not determined solely by the n-bit resolution specification.
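Here is a minimal sketch (Python) of the counting-converter idea just described; the bit width, reference voltage, and sampled value are illustrative assumptions, not figures from any particular device:

def counting_adc(sampled_voltage, n_bits=14, v_ref=10.0):
    """Model of the counter/comparator conversion described above.

    The counter increments until the reconstructed voltage reaches the held
    SHA voltage; the final count is the digital representation.
    """
    lsb = v_ref / (2**n_bits - 1)   # voltage represented by one count
    count = 0
    while count * lsb < sampled_voltage and count < 2**n_bits - 1:
        count += 1                  # counter runs until the comparator trips
    return count, count * lsb       # digital code and reconstructed voltage

code, reconstructed = counting_adc(5.643)
print(f"Code: {code}, reconstructed: {reconstructed:.4f} V "
      f"(quantizing error {abs(reconstructed - 5.643) * 1000:.2f} mV)")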

Here are five errors associated with using a typical ADC scheme as illustrated:

  1. Sampling speed. From Nyquist sampling theory, if the analog signal changes rapidly, then the ADC must sample at least twice as fast as the highest frequency present in the changing input. Many applications use a sampling rate at least 10 times the highest frequency present in the input signal. Sampling at less than twice the signal frequency causes aliasing and results in inaccurate readings (see the sketch after this list). Most ADC sampling speeds are adequate for slowly varying process control signals.

  2. Input multiplexer errors. The input multiplexer circuit may have OpAmp buffers on each input line that can introduce errors such as voltage offset, current bias, and nonlinearity. In addition (and more common) are two major multiplexer errors: (a) cross talk between channels, that is, signal from an 'on' channel leaks current into the 'off' channels, and (b) signal reduction through voltage division caused by the finite non-zero resistance of the semiconductor switches and the finite input impedance of the circuitry that follows.

  3. Sample and hold amplifier (SHA). This function is an OpAmp-based circuit with capacitive components all designed to switch, buffer, and hold the sampled analog voltage value. Consequently, there will be linearity, gain, power supply rejection ratio (PSRR), voltage offset, charge injection, and input bias current errors.

  4. Converter. In the counter, comparator, and ADC circuit there are such errors as overall linearity and quantizing error (defined as uncertainty in the least significant bit [LSB], typically ±½ LSB).

  5. Temperature. All analog circuit functions within the ADC unit are subject to temperature errors; therefore, an overall temperature error specification is assigned to an ADC.
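To illustrate the sampling-speed point in item 1, here is a minimal sketch (Python); the 60-Hz signal and the two sampling rates are illustrative assumptions:

import math

def apparent_frequency(f_signal, f_sample):
    """Frequency an ideal sampler appears to record for a pure sine input.

    If f_sample is below twice f_signal, the result is an alias rather than
    the true signal frequency.
    """
    # Fold the signal frequency into the 0 to f_sample/2 band.
    f = math.fmod(f_signal, f_sample)
    return min(f, f_sample - f)

# A 60 Hz signal sampled fast enough (10x) versus too slowly.
print(apparent_frequency(60.0, 600.0))  # 60.0 -> reading is faithful
print(apparent_frequency(60.0, 70.0))   # 10.0 -> 60 Hz aliases to 10 Hz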

Data for 'Typical 14-bit ADC errors...' table were taken from various analog-to-digital converter manufacturers. This isn't a complete ADC error analysis; values shown illustrate that internal errors collectively contribute to the definition of an ADC's overall accuracy.

An algebraic sum model of these errors suggests an overall error considerably larger than the ADC resolution, defined as approximately 1/(2ⁿ − 1). To determine actual ADC accuracy, the manufacturer's ADC specifications should always be carefully examined, along with specifications for sensors and SCMs.
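As a minimal sketch (Python) of the resolution figure quoted above and of the one-part-in-(2¹⁶ − 1) number the project manager cited earlier:

def resolution_percent(n_bits):
    """Ideal resolution of an n-bit converter, 1/(2^n - 1), as a percentage."""
    return 100.0 / (2**n_bits - 1)

print(f"14-bit resolution: {resolution_percent(14):.4f}%")  # ~0.0061%
print(f"16-bit resolution: {resolution_percent(16):.4f}%")  # ~0.0015%

Neither figure is the accuracy of the converter; accuracy is set by the combined internal errors summarized in the ADC error table.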


Accuracy calculation table

Assumed element errors: E1 = 0.25%, E2 = 0.03%, E3 = 0.05%

Max: (1+E1)*(1+E2)*(1+E3) = 1.00330215 (+0.330215%)

Min: (1-E1)*(1-E2)*(1-E3) = 0.99670215 (-0.329785%)

RMS Error: √[(E1)² + (E2)² + (E3)²] = 0.256710%

RMS Max: (1+RMS Error) = 1.00256710 (+0.256710%)

RMS Min: (1-RMS Error) = 0.99743290 (-0.256710%)

Source: Control Engineering with information from Dataforth

Typical 14-bit ADC errors (temperature change 30 °C). Causes of error include: cross leakage (0.01 ppm); switch resistance; sample and hold amplifier (S&H) voltage offset and bias input current; gain vs. temperature; and zero vs. temperature.
Source: Control Engineering with information from Dataforth
