
Analog to digital: Counting the bits
Dear Control Engineering: I was reading a release about an analog-to-digital converter that says it’s 12-bit. What does that mean, exactly? Is 12-bit better than eight-bit?
Bit counts describe how finely an analog measurement gets divided when it is converted to digital. Let’s say you’re trying to measure the diameter of a coin with a ruler. You put the ruler on the coin and notice that it’s slightly more than 11/16 in. The actual diameter is somewhere between 11/16 and 3/4 in. You can eyeball the measurement and interpolate in your own mind. Machines aren’t as good at that as you are.
Getting a machine to make that measurement requires converting analog to digital. And since digital deals with high and low (on or off, 1 or 0, etc.), you have to break the reading into discrete segments. For a machine, an analog measurement might be given as a voltage, such as 0 to 10 V. To digitize the measurement, you can use a comparator that turns from off to on when the voltage reaches 5 V. Using that, you have just created a one-bit A-to-D converter. If you apply this to your ruler, let’s say anything below 1/2 in. is <5 V. Anything over 1/2 in. is >5 V. Unfortunately, this isn’t very precise. But if you’re clever, you realize that if you add a second comparator, you can effectively double the number of marks on the ruler. You now have a two-bit device, which gives you marks at quarters. Adding another comparator makes a three-bit device and gives you eighths. Every time you add another bit, you get twice as many divisions. So your ruler marked in sixteenths is equivalent to four bits.
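The doubling described above can be sketched in a few lines of Python. `quantize` is a hypothetical helper invented for illustration, not any real converter’s interface; it shows how each added bit doubles the number of divisions a 0 to 10 V span is cut into.

```python
def quantize(voltage, bits, full_scale=10.0):
    """Map an analog voltage onto one of 2**bits discrete codes.

    A sketch of ideal quantization; real converters add offset,
    gain, and linearity errors on top of this.
    """
    levels = 2 ** bits
    step = full_scale / levels          # width of one division
    code = min(int(voltage / step), levels - 1)
    return code, step

# One bit splits 0-10 V at 5 V, just like the single comparator:
print(quantize(6.2, 1))   # (1, 5.0) -- reading is in the upper half
# Four bits mimic a ruler marked in sixteenths:
print(quantize(6.2, 4))   # (9, 0.625)
```

Note that the step size halves each time the bit count goes up by one, which is the “twice as many marks on the ruler” effect.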
If you keep extending the math, a 12-bit converter gives you 4,096 units. So relating back to the A-to-D converter you mentioned initially, this means that whatever range of measurement you’re dealing with is divided into 4,096 individual units. If you spread that over one inch, each increment is 0.00024414 in. That’s pretty precise and certainly capable of giving you reliable readings to three decimal places. The same applies regardless of what you’re measuring: pressure, temperature, size, flow, level, or weight. Whatever the application, the total span of the range is divided the same way. So for a given range, 12-bit conversion with 4,096 units allows you to be more precise than eight-bit with only 256 units.
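The arithmetic is simple enough to check directly. The sketch below (the `resolution` function is my own naming, not a standard one) reproduces the one-inch figures:

```python
# Resolution of an ideal converter over a given span: span / 2**bits.
def resolution(span, bits):
    return span / 2 ** bits

print(resolution(1.0, 12))   # 0.000244140625 in. per count
print(resolution(1.0, 8))    # 0.00390625 in. per count, 16x coarser
```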
Digital communication methods also pay attention to bit counts. The earliest was the telegraph, which is a one-bit device: dot or dash. Getting something faster, such as the teletype, required a higher bit count to allow each character to have its own code. Teletypes used a six-bit code, which allows for 64 different characters. This was fine for a while, but moving to ASCII required 128 codes, or seven bits. Early word processors moved to eight bits to provide 256 characters.
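The character counts follow the same power-of-two rule as the ruler divisions, which a couple of lines can confirm:

```python
# Characters representable by an n-bit code: 2**n.
for bits in (6, 7, 8):
    print(bits, "bits ->", 2 ** bits, "characters")

# ASCII fits in seven bits: every character code is below 128.
assert all(ord(c) < 2 ** 7 for c in "Control Engineering")
```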
Digital sound reproduction also uses bit counts. A standard audio CD uses 16-bit reproduction. That means that there are 65,536 increments running at 44.1 kHz, so if you are trying to digitize the waveform of one second of music, you have a mosaic that’s 65,536 by 44,100 squares. Poorer-quality sound reproduction is only 12-bit, and you would probably be able to hear the difference. Some audiophiles consider 16-bit to be too crude and insist that 24-bit (with about 16.8 million increments) running at 96 kHz is necessary for really accurate sound.
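To make the “mosaic” concrete, here is a sketch (simplified, ignoring dithering and real CD encoding) of one second of a 440 Hz tone sampled the way a CD does it: 44,100 samples, each rounded to one of 65,536 levels.

```python
import math

RATE, BITS = 44_100, 16
LEVELS = 2 ** BITS            # 65,536 amplitude increments

# One second of a 440 Hz sine, shifted to 0..LEVELS-1 and rounded
# onto the 16-bit grid: 44,100 columns by 65,536 rows.
samples = [
    round((math.sin(2 * math.pi * 440 * n / RATE) + 1) / 2 * (LEVELS - 1))
    for n in range(RATE)
]
print(len(samples))           # 44100 grid columns
```

Dropping BITS to 12 shrinks the grid to 4,096 rows, which coarsens the quantization enough to be audible.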
Ultimately, if you’re trying to determine what bit count you need for a specific application, you have to ask how precise the measurement has to be. If high precision over a wide range is necessary, say for robotics or a coordinate measuring machine, 12-bit may not cut it. On the other hand, if you’re trying to measure pressure between 0 and 100 psi, and ±5 psi is close enough, even eight-bit resolution is overkill.
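A back-of-envelope check of that advice, treating the tolerance as the largest acceptable step size (`bits_needed` is my own helper, not a standard formula name):

```python
import math

# Smallest bit count whose step over the span is <= the tolerance.
def bits_needed(span, tolerance):
    return math.ceil(math.log2(span / tolerance))

print(bits_needed(100, 5))       # 5 bits for +/-5 psi over 0-100 psi
print(bits_needed(1.0, 0.001))   # 10 bits for 0.001 in. over one inch
```

Five bits cover the pressure example, which is why eight-bit, with its 0.39 psi steps, is overkill there.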
Peter Welander, pwelander(at)cfemedia.com
Control Engineering