Question of the Week 2007


February 27, 2007

QUESTION: Can I automate operation of a high vacuum system?

The pumping system recommended for general use consists of a mechanical foreline pump that holds back atmospheric pressure and a turbomolecular main pump that scavenges the final 0.1% of the gas in the vacuum chamber. A bypass valve allows isolating the main pump while evacuating the vacuum chamber with the mechanical pump.

Essentially any vacuum system can easily be automated. While the details vary depending on the application and the vacuum-pumping technology chosen, there are some general principles.

Vacuum systems generally divide into three regions: 'above the table' vacuum chamber, 'below the table' high vacuum or main pump, and foreline or rough vacuum pump. The vacuum chamber is the business end of the application. For vacuum coating, for example, the vacuum chamber is a bell jar with the evaporation heaters, fixtures, coating measurement equipment, and so forth inside. For an electron microscope, it's the sample chamber, electron optics, etc. Being an air-tight vessel designed to withstand 14.7 psi pressure, it typically has thick walls and often weighs hundreds or thousands of pounds. Since actual people often have to reach inside to set things up, vacuum system designers often mount vacuum chambers on support structures that hold them at a convenient work height.

'Above the table' and 'below the table' are jargon terms that arose because most system designs locate the vacuum pumping equipment conveniently in a cabinet built into the vacuum-chamber support structure. That put the cabinet's top at a perfect height for a benchtop—the 'table.' A large main vacuum valve, which necessarily must be a gate valve or butterfly valve to allow adequate pumping throughput, makes it possible to separate the vacuum chamber from the pumping system. So, really 'above the table' means above the main valve and 'below the table' means downstream of the main valve.

Sensors in a vacuum control system are vacuum gauges, and they come in a range of types. Some, such as the Penning cold-cathode ionization gauge, are useful only at high vacuum levels. Others, such as thermocouple gauges, are useful only at the rough vacuum levels present in forelines. Since different vacuum pump types also work at different vacuum levels, it's appropriate to match the vacuum gauge type with the vacuum pump evacuating the space it senses.

Vacuum levels are measured by absolute pressure in units of Torr (named for the Italian physicist Torricelli), each equal to approximately 133 Pascals. So, 10 Torr equals an absolute pressure of 1,330 Pascals (approximately). In the vacuum world, however, 1 Torr is a very high pressure! Oil diffusion pumps shouldn't be operated at over 10^-3 Torr. Vacuum workers invented more jargon to handle this fact by dropping all but the exponent. A vacuum engineer, therefore, might refer to the 'seven scale,' meaning a vacuum between 3x10^-7 Torr and 3x10^-6 Torr.

So, in designing a vacuum system, you generally start with the chamber pressure needed for the application. High vacuum applications generally need vacuums better than 10^-5 Torr, meaning five scale or 'higher.' Particle beams, for example, are happy on the six scale and are ecstatic if you can maintain seven-scale vacuum. If you can get to the eight scale, you're getting into ultra-high vacuum (UHV) territory. Oil diffusion pumps (augmented with cryogenic 'cold traps') aren't much good above seven scale, though. UHV applications pretty much require turbomolecular pumps or something even more exotic.
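The 'scale' shorthand is regular enough to put in a one-line function. The formula below is just my reading of the convention described above, not any standard:

```python
import math

def vacuum_scale(pressure_torr):
    # 'Seven scale' spans 3x10^-7 to 3x10^-6 Torr, so scale n spans
    # 3x10^-n up to 3x10^-(n-1). Solve for n from the pressure.
    return math.ceil(math.log10(3.0 / pressure_torr))

print(vacuum_scale(1e-6))   # 7: between 3e-7 and 3e-6 Torr
print(vacuum_scale(1e-4))   # 5
```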

Actually, turbomolecular pumps aren't that exotic anymore. In fact, they've become the first choice for most applications because they can operate at almost any vacuum level from rough to UHV, and are pretty much bulletproof. About the only thing you can do to hurt a turbopump is to drop a screw into it while it's running! For that reason, I recommend turbopumps for any automated system. The ideal pumping system for most applications is a turbopump backed by a mechanical forepump and closed off from the vacuum chamber by a gate valve.

A single stage turbopump is little more than a set of fan blades that rotate at tens of thousands of rpm. At high vacuum levels, gas doesn't flow as a fluid. Gas molecules collide more often with the vacuum walls than with each other. A gas molecule moving through the turbopump in the flow direction can slip between turbopump blades. When moving the other way, however, they're almost sure to hit a blade and bounce back toward the 'right' direction. Stack several turbopump stages together with additional stationary blades in between to prevent swirling, and you've got a really effective pump at high vacuum.

The turbopump will also work like an ordinary fan at high pressure (foreline vacuum or worse), but, like a fan, it can't maintain much of a pressure difference. Too much gas leaks back. So, for high vacuum (or better) work, always use a mechanical backing pump. The mechanical pump is a positive displacement pump that can hold back atmospheric pressure, leaving the turbopump to scavenge out those pesky gas molecules that make up the last thousandth of a Torr. Since turbopumps can stand atmospheric pressure, even though they don't pump it effectively, they can be running whenever the pumping system is active. Generally, I recommend slaving them along with the forepump to a single main 'on' switch.

The main valve allows you to close off the vacuum pumping system when you need to. If, for example, you need to work on the pumping system, having a main valve lets you do so without having to put air in the vacuum chamber as well. Just close it before shutting down the pumps, then do your work, and establish high vacuum below the table before opening it again. It can cut the time needed by a factor of, well, several.

Using a gate valve gives you a throttle as well. At high vacuum, the pumping system throughput is proportional to the inlet area. A gate valve allows you to very nicely control the inlet area. If, for example, your pumping system can get you to the eight scale, but you really need to work in, say, a neon atmosphere on the five scale, you can bleed in a little neon gas and control the pressure by throttling with the gate valve. In fact, you could even automate it with a PID loop!
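For the curious, here is a toy Python sketch of what that PID (here just PI) throttling loop might look like. The plant model, a constant gas bleed with pumping speed proportional to valve opening, and every number in it are invented for illustration; gains for a real chamber come from tuning against the real chamber.

```python
def simulate_throttle(setpoint=1e-5, leak=2e-5, pump_gain=4e-5,
                      kp=5e4, ki=2.5e4, dt=0.01, seconds=50.0):
    pressure = 1e-4              # starting chamber pressure, Torr (hypothetical)
    valve = 0.0                  # gate-valve opening, 0 closed to 1 full open
    prev_error = pressure - setpoint
    for _ in range(int(seconds / dt)):
        error = pressure - setpoint
        # velocity-form PI: nudge the valve rather than set it outright
        valve += kp * (error - prev_error) + ki * error * dt
        valve = min(1.0, max(0.0, valve))
        prev_error = error
        # toy plant: constant gas bleed in, pumping proportional to opening
        pressure += (leak - pump_gain * valve) * dt
        pressure = max(pressure, 0.0)
    return pressure, valve

p, v = simulate_throttle()
# p settles near the 1e-5 Torr setpoint; v near leak/pump_gain = 0.5
```

The velocity form is a deliberate choice here: because the controller adjusts the valve incrementally, clamping the opening to its physical limits doesn't wind up an integrator.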

Fast-acting pneumatically controlled gate valves also make good safety valves. If and when seals fail or glass ports crack, having an automatic system to slam the gate valve closed can prevent that high volume of atmospheric gas from flowing through your pumps, which are not really designed for it, anyway. Since gas tends to adsorb onto vacuum system walls and form bubbles in mechanical pump oil, it's best to keep all vacuum equipment under vacuum as much as possible. Of course, if the system contains cold traps or molecular sieves, high pressure will rapidly contaminate them to uselessness. They'll have to be cleaned and degassed before they can go back into service. The same considerations motivate vacuum engineers to add a bypass line that allows pumping the vacuum chamber down to foreline pressure with the main pump and any cold traps or molecular sieves isolated to prevent contamination. Good practice calls for the main and foreline valves to always be closed before opening the bypass.

So, the basic job of your vacuum control system is to sequence operations so that nothing happens at a pressure higher than recommended. Basic pumping system control consists of all or nothing decisions based on vacuum levels. With this simple turbopump-based system, for example, there are 6 inputs and 5 outputs. The 6 inputs are:

1. Penning high-vacuum gauge (H) reading below setpoint;
2. Thermocouple gauge (T) reading below setpoint;
3. Main valve (M) open/closed;
4. Foreline valve (F) open/closed;
5. Bypass valve (B) open/closed;
6. Pumps on/off (P).

The 5 outputs are:
1. Activate/deactivate pumps (p);
2. Open/close main valve (m);
3. Open/close bypass valve (b);
4. Open/close foreline valve (f);
5. Start application process (o).

Note that I've listed separate signals for, say, opening valves (activation) and signaling that they're open (response). It is certainly possible to (and many engineers do) use the activation signal as the response signal. This gives me the willies because I don't trust any device to operate just because it's being told to. I like to see a separate sensor that independently confirms the action has taken place. For example, a switch mounted on a gate-valve actuator piston can positively confirm that the valve really did close, rather than jamming part way open. On the other hand, I've not added signals to turn the gauge controllers on or off. Both the Penning and thermocouple gauges can survive being left on at atmospheric pressure for short periods of time, though not indefinitely. Other types of gauges, however, cannot and must be interlocked to de-energize at excessive pressure.

Because all of these sense inputs and actuation outputs are binary signals, you can analyze the control system using Boolean algebra, and implement it using logic gates. Gas flows provide timers for sequencing. There is a Boolean equation for every output and every action. For example, to initially pump the system down from atmospheric pressure:

b = (~M)*(~F)*(~T)*P
f = P*(~B)

If I've done my logic correctly, selecting the pump-down sequence will initially turn all of the pumps on (p = 1); close the main valve because the thermocouple gauge is not below its setpoint (T = 0); open the bypass valve (b = 1) because the main and foreline valves are closed (M = F = 0), the thermocouple reads high (T = 0), and the pumps are on; and close the foreline valve for the same reason it closes the main valve.

As the mechanical pump does its job, the thermocouple gauge reading will drop, eventually reaching below its setpoint, at which time the bypass valve closes, which allows the foreline valve to open. Gas floods in from the turbopump, which is still at atmospheric pressure, and prevents the main valve from opening. The bypass valve can't re-open once the foreline valve is open.

As more time passes, the mechanical pump draws the foreline pressure back down to the thermocouple gauge setpoint, at which point conditions are right for the main valve to open. Similar Boolean equations need to be written for other operations, such as returning parts of the system to atmosphere for maintenance.

A final equation (o = H) allows operations in the vacuum chamber to commence only when the high vacuum gauge reads below its setpoint.
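If you implement the controller in software instead of gate hardware, the Boolean equations translate almost directly. In this Python sketch, the b, f, and o equations are the ones given above; the main-valve equation m is my own plausible guess, since it isn't spelled out here:

```python
def pump_down_logic(H, T, M, F, B, P):
    # Inputs follow the article's naming: H and T are gauge-below-setpoint
    # flags, M/F/B are sensed valve states, P is pumps-on.
    b = (not M) and (not F) and (not T) and P   # open bypass valve
    f = P and (not B)                            # open foreline valve
    m = P and T and F and (not B)                # open main valve (my guess)
    o = H                                        # permit process operations
    return {"bypass": b, "foreline": f, "main": m, "process": o}
```

Walking it through the pump-down sequence: with everything closed and the pumps on, the bypass opens; once the bypass reads open, the foreline valve is held closed; once the thermocouple drops below its setpoint, the bypass closes and the foreline valve can open.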

For more information about vacuum system control, visit the Control Engineering Website and type 'vacuum' into the search box.

C.G. Masi, Control Engineering senior editor

February 20, 2007

QUESTION: Why do people say Linux isn't a real-time operating system?

To quote Linux for Embedded and Real-Time Applications, by Doug Abbott:

'General purpose operating systems [such as Linux] are tuned to maximize average throughput even at the expense of latency while real-time operating systems attempt to minimize, and place an upper bound on, latency, sometimes at the expense of average throughput.'

According to Abbott, there are several reasons why control engineers do not consider standard Linux suitable for real-time use:

Coarse-grained synchronization means that kernel system calls are not preemptible. Once a process enters the kernel, it can't be preempted until it's ready to exit the kernel. If an event occurs while the kernel is executing, tough luck! The waiting process can't be scheduled until the currently executing process exits the kernel. Some kernel calls can hold off preemption for tens of milliseconds.

Paging —the process of swapping pages in and out of virtual memory—is, for all practical purposes, unbounded. There is no way to know how long it will take to get a page off a disk drive, and so one can't place an upper bound on the time a process may be delayed due to a page fault.

Fairness in scheduling means that the conventional Linux scheduler does its best to be fair to all processes. It may give the processor to a low priority process that has been waiting a long time, even though a higher priority process is ready to run. Try explaining to your insurance company that your braking system didn't react quickly enough because the CD changer had already been waiting 1.5 seconds to get processor time. Linux was just trying to be fair!

Request reordering is a way Linux tries to make efficient use of hardware by reordering I/O requests from multiple processes.

Batching does not refer to what a married man does when his wife and kids go to Grandma's for the weekend. It's a way Linux has of making more efficient use of resources by scheduling similar processes (such as clearing pages associated with different processes out of virtual memory) together in a batch.

These five characteristics often cause the processor to apparently lock up for short periods of time when doing compute or I/O intensive tasks. Of course, as the old project management rule points out: 'How long a minute is depends on which side of the bathroom door you're standing on.'

In a desktop environment, having the mouse stop responding for a few seconds, then jump rapidly is just an annoyance that users learn to ignore. In a real time environment, such as deploying airbags or responding to throttle inputs, it is unacceptable, or even catastrophic.

In fairness (there's that word again), it is important to point out that standard Linux shares these characteristics more or less with other desktop operating systems, such as the various flavors of Microsoft Windows and Mac OS X. It is, of course, theoretically possible to modify these OSs to change these characteristics. Since Windows and Mac OS are proprietary operating systems, this change can only be made by their vendors (Microsoft and Apple). Linux, however, is an open-source operating system, so anyone with the requisite skills can modify the kernel to overcome the software's real-time deficiencies. A number of software developers, such as MontaVista and LynuxWorks, have made the effort to market Linux distributions with kernels modified for real-time applications.

It is important to remember, however, that no multitasking system can work in real time in the same sense that single-task engines can. For example, in the early 1980s, I was building system controllers out of logic gates and timers that operated on strict Boolean logic lines. The hardware sat there putting out the result of a Boolean expression based on the logic states of a handful of sensor inputs. (Is the turbopump speed above such-and-such a value?) As soon as all the inputs hit the appropriate states, the output shifted to, say, open a gate valve. No time delays (except for propagation delays in the hardware). No uncertainties. Now, that's real time!

Whenever the controller is multitasking, it has to notice that something's trying to get its attention, check its priority compared to the priorities of other tasks and interrupts, then schedule the task in its little queue. In a sense, that forever takes it out of the realm of real time. As a practical matter, though, it is possible to manage the process so that you're guaranteed a response within a certain time window. In that sense, it is possible for Linux to be 'real time.'
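You can watch this behavior on any desktop OS with a few lines of Python: ask for a 1 ms sleep over and over, and record how late each wakeup is. This is only an illustration of scheduling jitter, not a rigorous benchmark.

```python
import time

def wakeup_jitter(samples=200, interval=0.001):
    # request a short sleep repeatedly and measure the overshoot on each wakeup
    overshoots = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(interval)
        elapsed = time.perf_counter() - start
        overshoots.append(elapsed - interval)
    return max(overshoots), sum(overshoots) / samples

worst, average = wakeup_jitter()
# on a loaded desktop system, 'worst' can be many times 'interval' --
# exactly the unbounded latency a hard real-time system cannot tolerate
```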

For more information about real time control systems, visit the Control Engineering website.

See also, 'Not your Father's RTOS,' by Hank Hogan in the March 2007 issue of Control Engineering.

Source: Abbott, D., Linux for Embedded and Real-Time Applications, 2nd Edition, Elsevier-Newnes, ISBN 978-0-7506-7932-9, 2006.

C.G. Masi, Control Engineering senior editor

February 13, 2007

QUESTION: How do I safely monitor the output of a 25 kV dc power supply?

Measuring high voltages—those well in excess of what transistor-based digital meters can withstand—is a common problem. It becomes especially difficult, however, when voltages begin to exceed 10 kV because these voltages can drive excessive power through most electronic components. Until you learn the trick, you get flames instead of measurements. A 1 MΩ resistor placed across your 25 kV supply, for example, will dissipate 625 W. The trick is to spread the power dissipation over a large number of resistors.

Transistor circuits are happiest when working with only a few volts. Junction transistors can get into trouble at voltages as low as 10 V (although many units can go higher). Field effect transistors (FETs) extend this range about another order of magnitude, but I haven't yet heard of a single-element solid-state device that could make 25 kV comfortably. Higher voltages require series stacks of reverse-biased junctions with grading resistors to control the voltage drop across each junction.

I'll come back to grading resistors later.

Transistorized voltage measuring equipment intended to measure a few hundred volts and beyond uses precision voltage dividers to bring the level down to something that a solid-state meter circuit can survive. To measure, say, 1,000 volts with 1% precision, start with a 999 kΩ, 1% resistor and a 1 kΩ, 1% resistor in series across the voltage to be measured.

Be sure to put the 1 kΩ resistor on the low-voltage end.

Connect your measurement circuit across the 1 kΩ resistor. The voltage it sees will be 1/1,000 of the supply voltage or 1 V.

The current through the divider will be 1,000 V / 1 MΩ = 1 mA. The 1 kΩ resistor will only have to dissipate a measly 1 mW. The 999 kΩ resistor, however, will have to deal with just under a watt, so a much heftier unit is needed.

This analysis points out the problem that higher voltages present. Jacking the supply voltage by another factor of ten—to 10 kV—would increase the power dissipated in the same resistor to 100 W because the power dissipated in a resistor increases with the square of the voltage.

At 100 W, we're getting into the realm of pricey zero-temperature-coefficient precision power resistors, heat sinks, and forced-air cooling. At 25 kV, we're starting to look at things that will warm your feet nicely on a cold day, thank you very much.

Of course, burning all that power just to measure the output voltage puts a bit of a strain on the power supply. Better to increase the resistances by, say, a factor of 1,000, which drops the power dissipation by that much. (Power varies inversely with the resistance.) You could do that with a chain of ten 100 MΩ resistors in series. That drops the power drawn from the 25 kV supply to 625 mW and the dissipation in each resistor to 62.5 mW. Precision resistors for such an application are readily available.
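You can check that arithmetic in a couple of lines. This Python snippet just replays the numbers from the paragraph above:

```python
V = 25_000.0                 # supply voltage, volts
R_total = 10 * 100e6         # ten 100-megohm resistors in series, ohms
I = V / R_total              # divider current, amps
P_total = V * I              # total power drawn from the supply, watts
P_each = P_total / 10        # dissipation per resistor, watts

print(I)        # 2.5e-05  (25 microamps)
print(P_total)  # 0.625    (625 mW)
print(P_each)   # 0.0625   (62.5 mW)
```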

This brings us back to 'grading resistors' as promised. In high-voltage applications, it is important to control not only the voltage between point A and point B, but the gradient as well. Above, I talked about a series of reverse-biased junctions needed to create high-voltage semiconductor devices. Think about a chain of, say, 100 semiconductor diode junctions, each able to stand off 10 V comfortably. The chain theoretically can stand off 1,000 V, but diode junctions—especially reverse biased ones—exhibit notoriously large variability. While all the diodes may carry the same rating, the one with the lowest reverse leakage current will take up more than its share of the voltage drop, perhaps enough to 'punch through' its junction. Should that happen, its resistance would suddenly drop to a low value, shifting the strain to all the others. The diode with the next lowest reverse current would then be in jeopardy. Each time one went, the others would come under increasing strain until the whole chain failed.

To prevent this phenomenon, high-voltage engineers add 'grading resistors' to any structure intended to stand off high voltage. Grading resistors bleed a small amount of current (usually on the order of the expected reverse current) to stabilize stacks of stand-off components, such as reverse-biased diodes, capacitors, and stacked insulators. Each stand-off element has a grading resistor connected across it. When any element tries to take up too much of the voltage drop, the current through its grading resistor limits the voltage drop it can maintain.

When you have a grading resistor chain, of course, the way to monitor the supply voltage is to measure the drop across the grading resistor at the chain's grounded end, and multiply by the number of resistors in the chain. I've used this technique to measure voltages as high as 1.5 MV.
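That monitoring rule is one line of code: with N equal resistors in the chain, the supply voltage is N times the drop measured at the grounded end. The 100-resistor chain and 250 V reading below are hypothetical numbers, purely for illustration.

```python
def supply_voltage(v_bottom_resistor, n_resistors):
    # equal grading resistors divide the supply voltage evenly, so the
    # drop across the grounded-end resistor scales up by the chain length
    return v_bottom_resistor * n_resistors

# e.g., a hypothetical 100-resistor chain with 250 V across the bottom one
print(supply_voltage(250.0, 100))   # 25000.0
```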

For more information about measurement technology, visit the Control Engineering Website and type 'voltage measurements' into the search box.

C.G. Masi, Control Engineering senior editor

February 6, 2007

QUESTION: How can I convert optical fiber to wireless?

Use wireless networking to report measurement results gathered in a mobile platform back to a stationary base station.

This question refers to measurements taken on a vehicle moving down a long track with measurement data sent back via Modbus over optical fiber. The system suffers frequent fiber mechanical failures resulting in delays and loss of data.

It is a very good idea to convert any mobile measurement system to wireless when the distance over which the sensing 'vehicle' moves is more than a couple of meters, or when rotations of more than 360° are involved. Tethered systems (where data signals pass over a copper or fiber cable) undergoing repetitive movements just ache to spontaneously disintegrate. Slip rings required to keep rotating systems from winding their cables up are just about the noisiest things known. The advent of cheap, reliable wireless data communications solved both problems. About the only things that I know of that can mess up wireless data communications are metal walls and water. Radio waves at useful frequencies still have trouble getting through both.

There are various form factors available, from short range WiFi networks to satellite-based systems that can carry signals around the globe. Most users, however, can work quite nicely with WiFi and, being based on a consumer-oriented standard, the equipment can be inexpensive.

To create a wireless measurement system, you need some intelligence on the mobile platform carrying the sensors. That intelligence collects data from the sensors and packages it for transmission over a wireless network. Any computing engine capable of carrying a data-acquisition module will do. Many small-footprint embedded computers, such as those available from Kontron in PC/104 format, have analog inputs to connect to sensors and virtually all have PCMCIA or USB ports to connect to data acquisition modules from, say, National Instruments. I have also seen systems based on PDAs. I expect smartphones, like the Palm Treo 650, that combine PDA and cellphone, would work amazingly well—although they would probably eat up free minutes like my dog eats pepperoni pizza.

Vendors, such as Phoenix Contact, provide the required transceivers (often referred to as 'radio sets' or just 'radios') to make the wireless connection between the moving vehicle and the stationary 'base station.' A transceiver includes a radio transmitter and a radio receiver to provide two-way communication. The radio set has intelligence to provide secure coded transmissions. The base station includes a complementary radio set that communicates with a host computer, which stores and processes the data.

The biggest problem is making the choices. A possible starting point is the Phoenix Contact Website, which includes some nice tutorials that may provide more help.

For more information about wireless measurement technology, visit the Control Engineering Website and type 'wireless LAN' into the search box.

C.G. Masi, Control Engineering senior editor

January 30, 2007

QUESTION: How can I verify operation of a rotating nozzle in a sealed tank?

The microphone output should be white noise amplitude modulated at the nozzle rotation frequency. Source: Control Engineering.

Full-wave rectifying the microphone output provides a time-varying dc level mixed with the white noise. Source: Control Engineering.

Filtering the rectifier output recovers the modulation envelope riding on top of a dc level. Source: Control Engineering.

Running the modulation envelope function through an FFT algorithm produces a spectrum that shows a peak at the nozzle-rotation frequency. Finally, a computer algorithm returns a '1' if there is any frequency bin above a preset threshold (red) and a '0' otherwise. Source: Control Engineering.

This question comes from a pharmaceutical company that uses rotating tank-cleaning nozzles in stainless steel tanks of various sizes. Nozzles are driven by reaction force of the liquid spraying out. In some cases, bearings fail and nozzles fail to turn. When this happens, cleaning effectiveness is drastically reduced. The questioner wants a device to detect the pulsations of a unit that is rotating properly.

The first problem is to pick up the sound. The second is to automatically do something useful with it.

A quick-and-dirty solution to the first problem is:

1. Obtain a small microphone;

2. Tape the microphone outside the tank wall facing outward (away from tank);

3. Connect the microphone to an amplifier and listen for noise as the jet strikes the wall inside the tank as it rotates;

4. Sound should modulate at the nozzle-rotation frequency.

The side of the tank will transmit acoustic signals originating in the tank very effectively to the microphone shell. The whole microphone will move, including the diaphragm. Motion of the vibrating diaphragm against the (relatively) still air is aerodynamically equivalent to moving air vibrating against a still diaphragm. Ergo, you should get really good 'sound' pickup.

There are, of course, contact microphones made especially for picking up audio-range vibrations from solid objects. They have a rod or plunger that directly contacts a pickup (such as a piezoelectric crystal) that converts rod movements into electrical signals.

You could also attach a microelectromechanical (MEMS) accelerometer having sufficient sensitivity and bandwidth to the outside of the tank. The accelerometer will register motion of the tank wall as it vibrates. The amplitude will be the actual tank wall displacement multiplied by the square of the sound frequency (not the nozzle-rotation frequency).

Another method is to put an accelerometer or contact microphone in touch with the pipe supporting the nozzle inside the tank. No matter how nicely the bearings are made or how precisely the nozzle is balanced, you're pretty much guaranteed to hear a hum at the nozzle rotation frequency and/or harmonics. You likely will be able to 'hear' it even where the pipe exits outside the tank. It's important to contact the structure holding the nozzle mechanically. You're unlikely to get anything through the fluid.

Still another method using a MEMS accelerometer assumes that you don't have the washer permanently mounted in the tank, but bring it in through a port when you want to clean the tank. Mount the little accelerometer right on the washer nozzle and report its signal wirelessly. The reason for the wireless connection is to let the nozzle carrying the accelerometer rotate freely without any nasty slip rings. The wireless receiver's antenna would have to be inside the tank to receive a signal, but you should be able to combine it with the pipe or tube carrying the tank-cleaning fluid. When the nozzle rotates, the accelerometer will register the centrifugal force. When it stops, the centrifugal force stops. This requires an accelerometer capable of registering constant loads (in other words a bandwidth from dc to a few hundred Hertz).

The second problem involves data acquisition and analysis. Basically, you need to pick out whatever part of the signal tells you the nozzle is rotating. How to do that depends on which method you use to get the signal.

If you're using the last method (MEMS accelerometer riding on the nozzle), your task is pretty simple. Run the accelerometer output through a dc amplifier to boost its level to 1-10 V. Then, wash it through a low-pass filter with a cutoff frequency of a few Hertz. This will remove most of the noise. If the resulting dc level is above a certain threshold (above the noise but below the normal operating level), you have rotation. If not, then not.

If you are using any of the other pickup methods, there's a lot more work to do. You have an ac output signal that you need to process. Start with an instrumentation amplifier to boost the transducer (microphone or accelerometer) output to between one and 10 V peak audio level.

The audio will be a whooshing sound, as shown in the microphone signal plot, which is basically white noise that has been amplitude modulated at the nozzle rotation frequency. So, you need a detector just like they use for AM radio! It's just a half or full wave rectifier followed by a low-pass filter. The rectifier converts the modulation envelope into a low-frequency ac signal plus a dc average value, and adds that onto the white noise as shown in the rectified signal plot. The low-pass filter removes most of the white noise, leaving a 1-10 V dc output signal that varies to follow the sound amplitude as shown in the envelope function plot.
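Here is a Python sketch of that detector chain operating on synthetic data: white noise amplitude-modulated at a made-up 2 Hz rotation rate, full-wave rectified, then run through a one-pole low-pass filter. Every number in it is invented for illustration.

```python
import math
import random

fs = 2000                  # sample rate, Hz (hypothetical)
f_rot = 2.0                # nozzle rotation frequency, Hz (hypothetical)
n = 4 * fs                 # four seconds of data

# white noise amplitude-modulated at the rotation frequency
random.seed(1)
signal = [(1.0 + 0.8 * math.sin(2 * math.pi * f_rot * i / fs))
          * random.gauss(0.0, 1.0) for i in range(n)]

# full-wave rectifier: take the absolute value
rectified = [abs(x) for x in signal]

# one-pole low-pass filter with roughly a 10 Hz cutoff
alpha = 1.0 / (1.0 + fs / (2 * math.pi * 10.0))
envelope = []
y = rectified[0]
for x in rectified:
    y += alpha * (x - y)
    envelope.append(y)

# once the filter settles, 'envelope' rises and falls at f_rot,
# riding on a dc level, just as the plots describe
```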

At this point, you can send the remaining signal into your data acquisition card, which samples the output voltage so many times per second. This sampling rate will vary depending mainly on your data acquisition card, but can usually be set to a few thousand samples per second. Ideally, you'd like the sampling rate to be 3-5 times the rotation frequency, but under no circumstances let it fall below twice the rotation frequency. If it goes below that limit, you get what digital signal processing mavens call 'aliasing,' and you just don't want to go there!

The data acquisition card puts out a stream of numbers. Each number represents the average dc level in one of the sample intervals. These numbers go right to the host computer's internal bus. Typically, the data acquisition software will collect these numbers in the computer's memory for later processing.

Now, you've got only one task left. You need to determine the signal strength at the rotation frequency. What I like to do is wash the signal through a fast Fourier transform (FFT) algorithm, which takes the set of numbers and looks for patterns that represent waves at various frequencies, then gives you a plot of amplitude as a function of frequency—a spectrum.

The spectrum should show low-level noise at most frequencies with a peak at the rotation frequency and smaller peaks at the rotation frequency's harmonics. Hopefully, the peaks will be much higher than the noise level.

Finally, you need a program that looks for those peaks. It's a simple threshold algorithm that returns a '1' if it sees a signal bigger than the threshold at any frequency, and a '0' if it doesn't.
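The last two steps can be sketched in Python as well: a single-frequency discrete Fourier transform standing in for a full FFT library, followed by the threshold test. The synthetic envelope signals below are hypothetical.

```python
import cmath
import math

def dft_magnitude(samples, fs, freq):
    # naive single-frequency DFT; a real system would use an FFT routine
    n = len(samples)
    s = sum(x * cmath.exp(-2j * math.pi * freq * i / fs)
            for i, x in enumerate(samples))
    return abs(s) / n

def rotation_detected(envelope, fs, f_rot, threshold):
    # strip the dc level, then threshold the amplitude at f_rot
    mean = sum(envelope) / len(envelope)
    ac = [x - mean for x in envelope]
    return 1 if dft_magnitude(ac, fs, f_rot) > threshold else 0

fs = 200
t = [i / fs for i in range(2 * fs)]     # two seconds of envelope samples
spinning = [1.0 + 0.5 * math.sin(2 * math.pi * 2.0 * x) for x in t]
stalled = [1.0] * len(t)

print(rotation_detected(spinning, fs, 2.0, threshold=0.1))   # 1
print(rotation_detected(stalled, fs, 2.0, threshold=0.1))    # 0
```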

In the end, you are left with a TTL-level (transistor transistor logic) electrical signal that is high when the nozzle rotates and low when it doesn't.

For more information about vibration measurement, visit the Control Engineering website.

See 'Basics of Signal Conditioning' in the Resource Center (free registration required).

C.G. Masi, Control Engineering senior editor

January 23, 2007

QUESTION: How do I measure transformer turns ratio (TTR)?

Figure 1: Use a dual-channel oscilloscope and a clean ac power source to measure effective transformer turns ratio.

First, let's differentiate between the actual turns ratio and effective turns ratio. A transformer's actual turns ratio is the number of turns in the secondary winding divided by the number of turns in the primary winding. It is important because it is the primary determinant of the transformer's output voltage (for a given input voltage).

A simple single-phase transformer consists of two coils of wire wrapped around an 'iron' core. I put quotes around the word 'iron' because transformer cores are almost never made of pure iron, but that is not germane to this description.

These three components each have their own jobs. One coil (the primary) generates a magnetic field from electrical current coursing through it. The core contains the magnetic field, enhancing it and channeling it through the second coil (called, not surprisingly, the secondary). The secondary generates an electromotive force (EMF) in each of its turns based on the magnetic field's rate of change. The EMF times the number of secondary-coil turns is the transformer's output voltage.

In short, the more primary turns, the more magnetic field; the more magnetic field, the faster the field changes; the faster it changes, the more EMF generated in each secondary turn; and the more turns in the secondary, the more voltage across the terminals. When one runs the numbers, one finds that the output voltage is (again ideally) proportional to the turns ratio multiplied by the input voltage.
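Running those numbers reduces to one line: ideally, v_out = v_in x (N_secondary / N_primary). A trivial sketch (function and parameter names are illustrative):

```python
def ideal_output_voltage(v_in, n_primary, n_secondary):
    """Ideal transformer: output voltage scales with the turns ratio,
    v_out = v_in * (n_secondary / n_primary)."""
    return v_in * (n_secondary / n_primary)
```

For example, 120 V ac into a 500-turn primary with a 25-turn secondary gives 6 V out, ignoring the losses discussed next.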

Of course, things are not usually ideal. Resistance in the primary circuit artificially reduces the primary current. Imperfect containment of the magnetic field by the core reduces the field reaching the secondary. Finally, resistance of the secondary winding drops the voltage seen at the output terminals. The effective turns ratio is, therefore, the ideal turns ratio reduced by these effects.

The only way to determine the actual turns ratio is to cut the transformer open and count the turns. That destroys the transformer, making the result academic.

Figure 1 depicts a setup to measure effective turns ratio. It uses a dual-channel oscilloscope to measure the primary and secondary voltages. It makes no difference whether you use the peak, average, or rms voltages in the calculation, as long as you use the same measure on both channels.

Use an oscilloscope because o-scope inputs have extremely high impedance—typically well over 10 megohms. High impedance is ideal for voltage measuring instruments because it minimizes measurement errors due to instrument loading. Measure the transformer input voltage across the primary terminals so that the only voltage drops that affect the ratio measurement are the internal losses included in the effective turns ratio definition. Make the measurement without any load on the transformer secondary to eliminate voltage drop across the secondary's internal resistance due to load currents.

Assuming you are using a digital oscilloscope made in the last 20 years, you can set the scope up to measure peak-to-peak voltage on each channel automatically, and likely the instrument will do the math for you as well (the function would be channel B voltage divided by channel A voltage). The result of that division is the transformer's effective turns ratio.
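The scope's math function amounts to a single division. A trivial sketch, assuming peak-to-peak readings from the two channels (channel A on the primary, channel B on the secondary, as in Figure 1):

```python
def effective_turns_ratio(vpp_secondary, vpp_primary):
    """Effective turns ratio from the two scope channels:
    channel B (secondary) peak-to-peak voltage divided by
    channel A (primary) peak-to-peak voltage."""
    return vpp_secondary / vpp_primary
```

A transformer showing 24 V peak-to-peak on the unloaded secondary with 120 V peak-to-peak across the primary has an effective turns ratio of 0.2.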

If you have to do this often enough, you can purchase specialized instruments that measure transformer turns ratio automatically. Their advantage is easier setup. Their disadvantage is reduced flexibility: the digital scope can do lots of other measurements as well.

For additional information, visit these websites:

For information about TTR test equipment visit these websites:

C.G. Masi, Control Engineering Senior Editor,

January 16, 2007

QUESTION: What units are customary for U.S. engineers?

It is tempting to blame the French for the present confusion on what units U.S. engineers should use, but Thomas Jefferson deserves some of the blame as well. Early measurement systems were based on fractions (such as a 1/4 lb of hamburger or 1/8-in. drill). As early as 1585, Simon Stevin suggested using decimal parts for weights and measures. In 1670, Gabriel Mouton conceived what eventually became the 'metric' system. Thomas Jefferson proposed a U.S. metric system that was adopted for money, but not weights and measures, when the new country founded the U.S. Mint in 1792.

Since the U.S. had to start from scratch with its monetary system, anyway, there was approximately zero cost to make decimal denominations for the new currency. Changing weights and measures from the British system (eventually known as British Engineering Units) to a decimal-based system would mean replacing all the measurement equipment in the country as well as all of the nuts, bolts, tools, etc. That seemed an awful waste.

The French had no such compunction. Retooling an entire country meant nothing to revolutionary politicians. Thus, in 1795, the metric system became a practical reality.

As international trade gained importance in various economies, harmonizing weights and measures among countries gained importance as well. Again, if a country committed to changing weights and measures, the metric system was the selection.

Americans, however, having mastered fifth-grade math and having a local economy as big as the rest of the world combined, saw metrication as a lot of cost for little gain. Even when, in 1866, the metric system became legal in the U.S., most citizens responded with a resounding, 'Who cares?'

There are no 'customary' units in the U.S., but, like virtually all technically advanced nations in the world, the U.S. has officially adopted ISO standard units. These are almost identical to the MKS (meter-kilogram-second) standard most of us learned in Physics 101. In the U.S., however, there is no compulsion to use any specific set of units unless you are in a highly regulated industry.

Many companies have internal policies regarding when to use what units, and those policies tend to be similar for companies within any given industry. Virtually all are gravitating toward ISO standards.

For example, the aerospace industry has been slow to adopt ISO in mission-critical situations for safety reasons. Pilots, especially, often have to make life-or-death decisions rapidly under pressure, and anything that might lead to confusion is, to put it simply, dangerous. The aviation sector has been using a dual-units system for decades, but is moving inexorably to ISO.

Similarly, automotive manufacturers have used SAE (Society of Automotive Engineers) fasteners in places that hobbyists and home mechanics are likely to operate, while employing ISO standards for internal fasteners. As they redesign components, such as engine blocks, they have been introducing ISO standard measurements where earlier designs used SAE.

Thus, your first guide to what units to use is your company's policy. If you're involved in setting company policy, follow your industry's consensus. Finally, when in doubt, use ISO units.

For more information about measurement standards, visit the Control Engineering website at .

C.G. Masi, Control Engineering Senior Editor,

See also:

Have something you always wanted to know, but didn't know who to ask? Email your control-system questions to with 'Questions' in the subject line, and your question and full contact information in the body of the email. If we use it, we'll send you an 'Engineer and proud of it!' pocket protector.

Source: 'A chronology of the metric system,' U.S. Metric Association,

January 9, 2007

QUESTION: What is the difference between an instrumentation amplifier and an op-amp?

An instrumentation amplifier incorporates three op-amps in a resistor feedback network that provides variable gain by changing the single resistor Rgain. SOURCE: Wikipedia

Control engineers and data acquisition system designers use both instrumentation amplifiers (in-amps) and operational amplifiers (op-amps) for signal conditioning applications as well as other dc-amplifier and audio-frequency-amplifier applications.

In-amps and op-amps are both monolithic semiconductor integrated circuits intended for a wide range of dc-coupled amplifier applications. Op-amps are more adaptable than in-amps, but are considerably harder to design with.

Operational amplifiers predate even transistors. The earliest ones were built using vacuum tubes and used to create analog computers. The idea was that with the right dc amplifier characteristics, one could build a circuit whose output was some mathematical function of one or more inputs, and that that function could be almost any single-valued function.

For example, circuits could be built to sum the voltages appearing at a number of inputs. Other circuits could form the logarithm of an input, or the inverse logarithm. Combining two circuits to form logarithms of two separate inputs, feeding those outputs to a third amplifier that would form the sum, then to a fourth that would form the inverse logarithm would produce a circuit that would multiply the two input voltages.

In other words, these amplifiers could be used to perform mathematical operations on the input voltages. This, then, is the origin of the name operational amplifier , which was shortened through use to 'op-amp.' The function the amplifier performs is called the transfer function .
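The log/sum/antilog multiplier described above is easy to verify numerically. In this idealized sketch, each math call stands in for one of the four amplifier stages (positive inputs only, since real log amps have a limited input range):

```python
import math

def analog_multiply(v1, v2):
    """Model of the four-amplifier multiplier: two log stages,
    a summing stage, and an antilog stage.
    exp(ln v1 + ln v2) == v1 * v2 for positive inputs."""
    log1 = math.log(v1)      # first log amplifier
    log2 = math.log(v2)      # second log amplifier
    summed = log1 + log2     # summing amplifier
    return math.exp(summed)  # antilog (inverse-log) amplifier
```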

While very few people build analog computers anymore, op-amps' flexibility has made them a primary tool for conditioning signals in data acquisition and control systems. Whatever you want for an output, if you can express it in terms of a mathematical function, you can probably build it with op-amps.

The defining characteristics of an op-amp are extremely high gain (typically in the tens or hundreds of thousands, or 80-100 dB), a high-impedance differential input, and a low-impedance single-ended output. Balanced op-amps can produce both positive and negative output voltages. Others can provide only positive outputs.

Amplifiers with such high gains are highly unstable. For example, a 0.1 V input to an op-amp with a gain of 100 dB would ideally produce a 10,000 V output! Of course, real op-amps generally cannot put out 10,000 V. They're generally limited to 5 or 10 V maximum, so an op-amp running open loop generally runs to the 'rail.' That is, it saturates at the maximum value corresponding to the sign of the input voltage. For example, a 741 op-amp, which is a balanced device that typically has a gain of 500,000 (114 dB), that sees a -0.01 V input will drive to the negative rail, or -10 V.
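Rail saturation is easy to model: amplify, then clip. The defaults below follow the article's 741 figures (gain of 500,000, +/-10 V rails); the function itself is an illustrative sketch:

```python
def open_loop_output(v_in, gain=500_000, rail=10.0):
    """Open-loop op-amp model: the ideal output is gain * v_in,
    but the device saturates at the supply rails."""
    ideal = gain * v_in
    return max(-rail, min(rail, ideal))  # clip to +/- rail
```

A -0.01 V input would ideally give -5,000 V, so the output pins to -10 V; only microvolt-scale inputs stay within the rails.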

Op-amp designers do their magic through closed-loop feedback. To create linear transfer functions (such as linear gain of, say, 5), they use resistor feedback networks. Semiconductor devices, such as diodes, create non-linear transfer functions, such as logarithms.
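As a concrete case of a resistor network setting a linear gain, the standard non-inverting stage gives G = 1 + Rf/Rg. This particular topology is my illustrative choice; the article says only that resistor feedback networks set linear gains:

```python
def noninverting_gain(r_feedback, r_ground):
    """Closed-loop gain of the standard non-inverting op-amp stage:
    G = 1 + Rf/Rg.  The huge open-loop gain makes the closed-loop
    gain depend almost entirely on the resistor ratio."""
    return 1.0 + r_feedback / r_ground
```

For instance, a 40 kOhm feedback resistor over a 10 kOhm ground-leg resistor gives the 'linear gain of, say, 5' mentioned above.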

A particularly useful op-amp circuit consists of three op-amps configured as shown to produce a linear amplifier with variable gain. Changing one resistor (Rgain) changes the gain for the whole amplifier.

This circuit is the basic in-amp circuit. Certain semiconductor device manufacturers, such as Analog Devices, have very kindly integrated the entire in-amp circuit, and those are the monolithic in-amp semiconductors in use today.

Generally, all of the op-amps and resistors are built into the IC except for Rgain. Some devices provide two terminals across which the user puts their own gain resistor; other types provide internal resistors that the user selects by jumpering between device terminals. Available gains for most devices run from unity to 1,000, although devices with gains up to 10,000 are available.
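For the classic three-op-amp topology with the output difference stage set to unity gain, the overall gain works out to G = 1 + 2R/Rgain, where R is the matched pair of internal feedback resistors. A sketch under those assumptions (the resistor values in the example are illustrative, not from any particular device):

```python
def in_amp_gain(r_internal, r_gain):
    """Gain of the classic three-op-amp instrumentation amplifier,
    assuming a unity-gain difference stage: G = 1 + 2*R/Rgain.
    r_internal: each of the matched input-stage feedback resistors.
    r_gain: the single user-supplied gain-setting resistor."""
    return 1.0 + 2.0 * r_internal / r_gain
```

Shrinking Rgain raises the gain; leaving the Rgain terminals open gives unity gain, which matches the unity-to-1,000 range quoted above.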

Obviously, since an in-amp is just a special case of an op-amp circuit, there's nothing you can do with an in-amp that you can't do with an op-amp, but it will require a lot more components, design effort, and construction effort. Control engineers needing to build special signal-conditioning modules with linear transfer functions find life to be easier if they start with an in-amp. If, however, you need a non-linear transfer function, you're forced to use op-amps.

For more information about signal conditioning, visit the Control Engineering website at .

—C.G. Masi, Control Engineering Senior Editor,

SOURCE: Kitchin, C. and Counts, L., A Designer's Guide to Instrumentation Amplifiers, 3rd Edition, Analog Devices, Inc., 2006. This book is available through the Analog Devices website at .


January 2, 2007

QUESTION: What are the 5 basic steps that must be used in the design and analysis of a control system?

1. Postulate a control system and state the system specification to be satisfied.
2. Generate a functional block diagram and obtain a mathematical representation of the system.
3. Analyze the system using any of the analytical or graphical methods applicable to the problem.
4. Check the performance (speed, accuracy, stability, or other criterion) to see if the specifications are met.
5. Optimize the system parameters so that the specifications stated in step 1 are satisfied.
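The five steps can be walked through on a toy problem: a first-order plant under proportional control with unity feedback. Everything here (the plant, the controller, and the numbers) is an illustrative assumption, not from the cited text:

```python
def closed_loop_step_response(k_p, tau, t_end, dt=0.001):
    """Steps 2-4 sketched for a toy system: plant 1/(tau*s + 1)
    under proportional gain k_p with unity feedback.  Returns the
    output at time t_end for a unit-step setpoint, computed by
    forward-Euler integration of tau*y' + y = u."""
    y, t = 0.0, 0.0
    while t < t_end:
        error = 1.0 - y          # unit-step setpoint minus output
        u = k_p * error          # proportional controller
        y += dt * (u - y) / tau  # Euler step of the plant dynamics
        t += dt
    return y
```

Checking performance (step 4), the steady-state output approaches k_p / (1 + k_p), so the spec check might be 'steady-state error under 10%'; optimizing (step 5) then means raising k_p until that holds without violating other criteria such as actuator limits.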

Source: 'Introduction to Control Systems, Third Edition,' by D K Anand and R B Zmood, Butterworth-Heinemann, 1995.
