Tech Tips November 2006
November 28, 2006
TECH TIP OF THE WEEK:
Use the right feedback device for the application
Closed-loop systems use feedback signals for stabilization, speed, and position information. A variety of devices provide this data, such as the analog tachometer, optical encoder, Hall sensor, and resolver. The table in Figure 1 summarizes their various features and indicates the most appropriate uses for each.
Tachometers resemble miniature motors, but the similarity ceases there. The tachometer is used not as a power-delivering device, but as a signal-providing device. As the tach armature rotates, a voltage is developed at the terminals. The faster the shaft turns, the larger the voltage magnitude (i.e., the output signal is directly proportional to speed). The output voltage polarity depends on the rotation direction. Such analog, or dc, tachometers provide the simplest, most direct method for sensing speed information, which they can deliver directly to a meter for visual speed readings, or to a drive for velocity feedback.
For example, consider a lead screw assembly that must move a load at a constant speed. Say that the motor must rotate the lead screw at 3,600 rpm. With a tachometer voltage constant of 2.5 volts/krpm, the voltage at the tachometer terminals should be:
3,600 rpm x 2.5 V/krpm = 9 V.
If the voltage reading is, indeed, 9 V, then the motor/load is rotating at 3,600 rpm. The servo drive will try to maintain this voltage to assure the desired speed.
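The voltage-to-speed relationship can be sketched in code (a minimal illustration; the function names are ours, with the 2.5 V/krpm constant taken from the example above):

```python
def tach_voltage(speed_rpm, kv=2.5):
    """Expected tachometer output voltage.

    kv is the voltage constant in volts per 1,000 rpm (V/krpm).
    """
    return speed_rpm / 1000.0 * kv

def speed_from_voltage(volts, kv=2.5):
    """Invert the relationship: the drive infers speed from tach voltage."""
    return volts / kv * 1000.0
```

For the lead-screw example, `tach_voltage(3600)` returns 9.0 V, matching the hand calculation.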
Voltage constant, also called voltage gradient or sensitivity, is the output voltage a tachometer generates when operated at 1,000 rpm. The voltage constant may also be expressed in volts per radian per second (V/rad/s).
Ripple, also called voltage ripple or tachometer ripple, appears as an ac signal superimposed upon the average, or dc, level.
Linearity is a measure of how closely the tach follows its ideal voltage-vs.-speed characteristic, which would be a perfectly straight line. Design and manufacturing tolerances alter this straight line. The maximum difference between the actual and theoretical curves is the linearity error.
A digital tachometer, often called an optical encoder or simply an encoder, is a mechanical-to-electrical conversion device. As the encoder's shaft rotates, it produces an output signal proportional to the angle through which the shaft turns. The output signal may be a train of square waves or sinusoidal waves, or may be a code providing absolute position. Encoders fall into two basic classes: absolute and incremental.
Absolute encoders provide a specific code for each shaft position throughout 360 degrees. Absolute encoders may employ either contact or non-contact means to sense position. The contact scheme incorporates a brush assembly to make direct electrical contact with the electrically conductive paths on a coded disk. The non-contact scheme uses photoelectric detection to sense position. The number of tracks on the coded disk may be increased until the desired resolution or accuracy is achieved. Since position information is coded directly on the disk assembly, the system need not return to a 'home' position after recovering from a power failure.
Incremental encoders provide either pulses or a sinusoidal output signal as they move. The controller derives distance information by counting pulses as the shaft rotates from a home position. Opaque lines printed on a disk interrupt a light beam. A photodetector receives the beam and puts out a pulse train. Typically, an incremental encoder system puts out two signals whose phase relationship informs the controller of the rotation direction.
Line count, the number of pulses per revolution, determines the positional resolution.
Output signal from the photosensor can be either a sine wave or a square wave.
Number of channels can be one or two. The two-channel version provides the phase relationship used to determine motion direction, as mentioned above. Some systems also provide a zero 'index' pulse to indicate crossing the home position.
In a typical incremental encoder application, an input signal loads a counter with the position the load must be moved to. As the motor accelerates, the encoder generates pulses at an increasing rate until the shaft reaches a constant run speed. During the run period, the pulses appear at a constant rate that directly relates to motor speed. Meanwhile, a counter counts encoder pulses. At a predetermined count, the controller tells the motor to slow down to avoid overshooting the desired end position. When the counter is within 1 or 2 pulses of the desired position, the controller tells the motor to stop.
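The counting logic in that sequence can be sketched as a small decision function (a hypothetical illustration; the function name and threshold values are ours, not from the original):

```python
def motion_command(pulses_counted, target, decel_window=200, stop_window=2):
    """Return the drive command for a point-to-point move, based on how
    many encoder pulses have been counted toward the target position."""
    remaining = target - pulses_counted
    if remaining <= stop_window:
        return "stop"        # within 1 or 2 pulses of the desired position
    if remaining <= decel_window:
        return "decelerate"  # predetermined count reached: slow the motor
    return "run"             # accelerate to, or hold, the constant run speed
```

A controller polling this function during a 10,000-pulse move would command "run" for most of the travel, "decelerate" inside the last 200 pulses, and "stop" within the final 2.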
Hall sensors are solid-state devices that sense magnetic fields. In some motors, a 4-pole magnet wheel is attached to the rear of the motor shaft. As this magnetized wheel passes by the Hall sensor, the Hall output changes state. In other motors, the actual rotor magnets are used.
Some brushless motors employ three Hall sensors to provide electronic commutation information. The sensors provide three square wave signals phased 120 degrees apart so the controller knows which electrical winding to energize at any given time. Since the sensors' outputs look like series of square waves, the controller can use the time between pulses to derive speed information. Today, many encoders come with tracks that mimic Hall signals.
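The commutation logic can be sketched as a lookup table (illustrative only; the specific Hall-state-to-winding mapping and the helper names are assumptions, since the actual mapping depends on the motor's wiring and pole count):

```python
# One common six-step commutation convention for three Hall sensors
# phased 120 electrical degrees apart. Each tuple is the (H1, H2, H3)
# sensor state; each value names the winding pair to energize
# (first phase driven high, second phase driven low).
HALL_TO_WINDINGS = {
    (1, 0, 1): ("A", "B"),
    (1, 0, 0): ("A", "C"),
    (1, 1, 0): ("B", "C"),
    (0, 1, 0): ("B", "A"),
    (0, 1, 1): ("C", "A"),
    (0, 0, 1): ("C", "B"),
}

def speed_rpm(seconds_between_edges, states_per_rev=12):
    """Derive speed from the time between Hall state changes.

    A 4-pole motor sees 6 Hall states per electrical cycle and 2
    electrical cycles per mechanical revolution, so 12 state changes
    per revolution (adjust states_per_rev for other pole counts).
    """
    return 60.0 / (seconds_between_edges * states_per_rev)
```

With 12 state changes per revolution, 5 ms between edges corresponds to 1,000 rpm.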
Like tachometers, resolvers look similar to small motors. One end has terminal wires, and the other end has a mounting flange and a shaft. Internally, a signal winding revolves inside a fixed stator. As the winding moves, the output signal changes in direct proportion to the angle through which the rotor turns.
The simplest resolver contains one input winding and two output windings located 90 degrees apart. A reference signal is applied to the input winding and couples to a secondary winding on the rotor. As the rotor turns, the secondary signal changes magnitude according to rotation angle. This signal then couples back to two secondary windings on the stator, which provide sine and cosine output signals. The controller produces a signal representing the angle through which the rotor has moved, and another proportional to speed. The output signal goes through one sine wave as the rotor goes through 360 mechanical degrees. If the output signal went through four sine waves per rotor revolution, it would be called a 4-speed resolver. Another version, called a 'synchro,' has three windings located 120 degrees apart on the stator.
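The angle recovery described above can be sketched as follows (a simplified model; the function names are ours, and real resolver-to-digital converters demodulate the carrier before this step):

```python
import math

def resolver_outputs(theta, ref_amplitude=1.0):
    """Demodulated sine and cosine channel amplitudes for rotor angle theta
    (radians), for a single-speed resolver."""
    return ref_amplitude * math.sin(theta), ref_amplitude * math.cos(theta)

def decode_angle(sin_out, cos_out):
    """Recover the rotor angle, 0 to 2*pi, from the two channel outputs."""
    return math.atan2(sin_out, cos_out) % (2.0 * math.pi)
```

Because the decode uses the ratio of the two channels, it is insensitive to the common amplitude of the reference signal.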
As the table shows, each feedback device has its own characteristics, parameters, operating range, and advantages. It is up to the engineer to choose the most appropriate feedback device for the application and design a control package to make best use of it.
—John Mazurkiewicz, Baldor Electric Co.
This article is an edited adaptation of a white paper provided as a handout during educational sessions presented by Baldor Electric entitled 'recommending, sizing and applying servo/motion products' and 'servo motor repair.' For more information about these sessions, or to register for one or both of them, visit
November 21, 2006
TECH TIP OF THE WEEK:
Match drive and motor moments of inertia for best motion-control results
Actuators have to develop forces to overcome both dissipative forces (friction) and inertial forces. That is, to move anything from position A to position B, you have to first apply force to accelerate the thing from a state of rest to some state of motion, then apply (usually smaller) force to maintain it in the accelerated-motion state against friction, and finally apply a force to decelerate it back to rest.
When sizing electric motors to drive moving components, engineers tend to concentrate on overcoming friction while giving less heed to the ramp-up and ramp-down portions of the motion. In fact, for many applications, overcoming friction is less important than inertial considerations. If you've put in good bearings and used high-efficiency mechanical components, friction forces are likely to be much smaller than inertial forces. There are lots of things you can do to reduce friction, but your ability to control inertial forces is usually much more limited.
Most motion control systems rely on rotating electric motors to drive the motion, even if the final motion is linear, so it is much more natural to use rotational dynamics to analyze the motion. This is unfortunate because most engineers, even mechanical engineers, are more comfortable with linear dynamics than rotational dynamics.
Moment of inertia is the rotational equivalent of mass. Just as mass quantifies an object's tendency to resist changes in its translational velocity, moment of inertia quantifies its resistance to changes in rotational speed. In general, the moment of inertia of any component increases directly with its mass and with the square of the distance that mass is from the rotation axis. That is, if you double the mass for the same size and shape, you double the moment of inertia, but if you keep the mass constant while doubling the size, the moment of inertia goes up by a factor of four. So, in general the formula for moment of inertia is
I = Kmr²,
where I is the moment of inertia, m is the mass, and r is the object's radius. The additional parameter K is a numeric value that depends on how the mass is distributed. When most of the mass is concentrated near the axis, such as for a solid ball, K is low. If most of the mass is far from the axis, such as for a mass on the end of an arm, K is much larger.
Electric motor armatures are generally cylindrical with more or less uniform density, so K is approximately 0.5. For a lever that pivots about one end, K is about 0.3. Values for other shapes are given in most undergraduate physics textbooks and can be found by searching the Internet. Of course, you should be able to get the moment of inertia for the armature of any specific electric motor from its manufacturer.
Understanding the moment of inertia that the load presents is a little more complicated because loads come in all shapes and sizes, and perform complicated motions. Generally, start with the rotational equivalent of Newton's second law:
M = Iα,
where M is the moment or torque driving the rotation, I is the moment of inertia, and α is the angular acceleration.
Suppose, for example, that we want to accelerate a 100 gm weight to a linear velocity of 10 cm/sec in 0.1 sec, using the mechanism shown in Figure 1. If the lever arm length is 20 cm, then the angular acceleration is 10 cm/sec ÷ 0.1 sec ÷ 20 cm = 5 radians/sec².
The force applied to the mass would have to be (from Newton's second law) 100 gm × 10 cm/sec ÷ 0.1 sec = 10,000 dynes. Applying that force through the 20 cm lever arm gives a moment of 10,000 dynes × 20 cm = 2×10⁵ dyne-cm. Solving the rotational version of Newton's second law for the moment of inertia gives 40,000 gm-cm².
The motor's maximum rotational speed will be (in radians per second) 10 cm/sec ÷ 20 cm = 0.5 radians/sec. To convert from rad/sec to rpm, multiply by 9.55 to get just under 5 rpm.
Very few electric motors run that slowly. It's a pretty typical output speed for a gearmotor, however. Suppose we find a gearmotor whose armature runs at 2,500 rpm while the output shaft runs at 5 rpm. That means the gear head has a ratio of 500:1.
Motor designers' rule of thumb is that the armature moment of inertia should match the load moment of inertia, after accounting for the gear ratio. The way to do this is to divide the load moment of inertia by the square of the gear ratio to get the appropriate armature moment of inertia, which in this case turns out to be 0.16 gm-cm2.
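The worked example above can be collected into a short calculation (a sketch in the article's CGS units; the variable names are ours):

```python
# Inertia-matching worked example, CGS units.
m = 100.0   # mass, gm
v = 10.0    # target linear velocity, cm/sec
t = 0.1     # acceleration time, sec
r = 20.0    # lever arm length, cm

alpha = v / t / r        # angular acceleration: 5 rad/sec^2
force = m * v / t        # Newton's second law: 10,000 dynes
torque = force * r       # moment at the pivot: 2e5 dyne-cm
I_load = torque / alpha  # load moment of inertia: 40,000 gm-cm^2

gear_ratio = 2500 / 5    # armature rpm / output rpm = 500
I_armature = I_load / gear_ratio**2  # matched armature inertia: 0.16 gm-cm^2
```

Dividing by the square of the gear ratio (not the ratio itself) is the key step: the gearhead multiplies torque by the ratio but divides speed by it, so reflected inertia scales with the ratio squared.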
Why match the armature moment of inertia to that of the load? If the load is too heavy, the motor won't be able to control it. If it's too light, most power will go into accelerating and decelerating the armature, rather than the load. Not only is this a waste of power, it could lead to overheating the motor.
How good does the match have to be? Not all that critical, it turns out. A mismatch by a factor of 2-3 will not cause problems, and the tolerable mismatch could be as large as 10:1, depending on the application. If the mismatch is much larger, however, serious problems may develop.
—C.G. Masi, Control Engineering Senior Editor
November 14, 2006
TECH TIP OF THE WEEK:
Use modularity for better software maintenance
While computer programs are, essentially, long strings of commands that the computer steps through in sequence, veteran software engineers know to break them into much smaller modules. Variously called modules, subroutines, function blocks or virtual instruments (VIs) depending on the programming language involved, they are relatively small groups of commands that perform very limited tasks, which the programmer can mix and match to produce much more complex machine behavior.
In his book Programmable Logic Controllers, William Bolton says: 'A function block is a program instruction unit which, when executed, yields one or more output values.… Function blocks can have standard functions, such as those of the logic gates or counters or timers, or have functions defined by the user, e.g. a block to obtain an average value of inputs.'
Programmers combine calls to simple modules to create more complex modules. In LabVIEW graphical programming, for example, the programmer starts with a palette of pre-packaged VIs that they interconnect to form their unique application program or diagram. They can then select a section of their diagram to compile as a new VI. So, a complete application becomes a VI made up of simpler VIs, which are, in turn, combinations of even simpler VIs, and so forth ad nauseam.
Some languages, such as C++, require programmers to work in modular form. A C++ program always starts with prescribed header information (such as a list of standard modules the program will call up during its execution), then moves to a main module that controls execution of the entire program, which is then followed by all of the specially written modules for the application, which the main program and other modules call as functions. Programming with such a formatted approach is called 'structured programming,' and languages that make structured programming mandatory are called structured languages.
Not all languages are structured. Matlab, for example, allows complete freedom in how you write the code. It does encourage modularity, however, and experienced Matlab programmers typically follow structured-programming methods by choice.
Structured programming has advantages that make it de rigueur for sophisticated programmers:
Readability —As you write programs, you have to read them as well. With small modules, it is easier to follow the program execution thread while reading them.
Testability —As test-industry observer Steve Scheiber is fond of pointing out: 'Every program longer than one page contains at least three bugs.' By breaking the code into small modules, you reduce the number of bugs you have to deal with. Fewer bugs reduce the problem of bugs hiding behind bugs. Small code modules are easy to test individually. They have few, readily identifiable inputs and outputs and exhibit simple behavior, so it is easy to create quick and simple tests to make sure a newly written module really does what it should do, and to look for unexpected behaviors.
Reusability —Just as having small, standard-shaped bricks makes building a big house easier, starting with small, pre-tested modules makes building a large, complex application easier. Once a module is created and tested, you can always copy it to a new application, easing the programming burden. Many application programmers maintain large libraries of modules that they've created in the past, which they paste into their new programs whenever they need that function.
Maintainability —Months or years after you've written the application, it is much easier to find your way through a modular program than some enormous list of commands, especially if the modules are functions that you re-use constantly. Also, modularity makes it easier to update programs. Instead of having to re-write the whole program edifice, all you have to do is pull one module out and insert the new one.
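To make these points concrete, here is a minimal sketch in the spirit of Bolton's averaging function block: two small, individually testable modules combined into a slightly larger one (the function names and the deadband example are our own illustration):

```python
def average_block(inputs):
    """Function block: yields the average of its input values."""
    if not inputs:
        raise ValueError("average_block needs at least one input")
    return sum(inputs) / len(inputs)

def deadband_block(value, setpoint, band):
    """Function block: True when value is within +/-band of setpoint."""
    return abs(value - setpoint) <= band

def temperature_ok(readings, setpoint, band):
    """Higher-level module built by combining the two simpler blocks."""
    return deadband_block(average_block(readings), setpoint, band)
```

Each block has few, readily identifiable inputs and outputs, so each can be tested on its own before being reused in larger programs.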
Structured programming with modular code is a discipline that has to be learned through practice. It's a case of developing new programming habits. Once you've expended the effort to make those habits automatic, the rewards are enormous.
—C.G. Masi, Control Engineering Senior Editor
W. Bolton, Programmable Logic Controllers , Fourth Edition, Copyright 2006, Elsevier Newnes, ISBN-13: 978-0-7506-8112-4
D. Abbott, Linux for Embedded and Real-Time Applications , Second Edition, Copyright 2006, Elsevier, Inc., ISBN-13: 978-0-7506-7932-9
November 7, 2006
TECH TIP OF THE WEEK:
Don't re-invent the wheel
The 'engineer disease' is not a medical condition. It's a psychological problem that forces engineers—all of whom have the condition in a fairly virulent form—to re-invent the wheel. Let's face it: we got into engineering in the first place because we liked to create clever new things. We continue to do it because it's fun.
Somebody once defined the difference between mental health and mental illness by whether your behavior improves your life or causes problems. Having fun creating clever new things gives engineers the opportunity to make a pretty good living doing something enjoyable and useful. In general, it's pretty healthy.
But, the impulse to create clever new things can be taken too far. When it causes an engineer to spend resources solving a problem that someone else has already solved quite nicely, thank you, it becomes counterproductive. That's what we call 're-inventing the wheel.'
How do you know when you're about to start re-inventing the wheel? You know when you find a product available that will do the same job as your clever new thing.
If you can buy it, don't build it!
'Aha,' you say, 'maybe my idea can do it better !'
Unless your job is to design that particular thing, which your company is going to turn into a mass-produced product, you'll never design and build something that will 'do it better,' period.
Every veteran engineer knows from experience that taking a product concept from idea to full production is a long row to hoe. Each stage is marked by trials and errors and ideas that just didn't work out very well. Those are all prototypes. At the process' end, you have a stream of units pouring out of a factory, all of which have been tested and proven to do what they're supposed to do each time and every time, and keep doing it for as long as necessary.
If you buy one of those units, you have something that has been tested and proven to do what it's supposed to do each time and every time, and to keep on doing it for as long as necessary. Not only that, but what you will pay for it is the actual cost to produce it (which includes paying profits to the investors who put up the money to make it all possible), plus the non-recurring engineering cost divided by the number of units produced.
If you, instead, succumb to temptation and build it yourself, you will end up with one of those prototypes. That is, one of those trials and errors and ideas that just didn't work out very well. By definition it has not been tested and not proven to do what it's supposed to do each time and every time, nor has it been shown to keep on doing it for as long as necessary. On top of that, you have to pay for all the non-recurring engineering costs all by your onesies. No amortization over 1.3 kazillion manufactured units; it's NRE divided by one for you!
In the end, the engineer disease causes its sufferer to wind up with an inferior solution at an extraordinarily large price.
For more advice about managing your career as a controls engineer, visit the Control Engineering website.