Tech Tips October 2006

By Control Engineering Staff March 22, 2007

October 31, 2006

TECH TIP OF THE WEEK:

Network PLCs for complex control applications

Increasing use of automation in industry has led to the need for communications and control on a plant-wide basis with programmable controllers, computers, robots, and CNC machines interconnected. Any such piece of equipment capable of communicating on a network is called a ‘node.’
Networks can take three basic forms.

With the star form, each node is directly linked to a central computer, termed the host or master; the nodes themselves are termed slaves. The host contains the memory, processing, and switching equipment that enable terminals to communicate. Access to terminals is via ‘polling’: the host asks each node in turn whether it wants to talk or listen.
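To make the polling idea concrete, here is a minimal Python sketch of a host visiting its slaves in turn. The node names and message strings are hypothetical illustrations, not any vendor’s protocol.

```python
# Minimal sketch of host polling in a star network. Node names and
# message strings are hypothetical, not any vendor's protocol.
nodes = {
    "node_A": ["temperature=72F"],        # each slave queues its messages
    "node_B": [],
    "node_C": ["valve=open", "flow=12gpm"],
}

def poll(node_id):
    """Host asks one slave whether it has anything to say."""
    queue = nodes[node_id]
    return queue.pop(0) if queue else None

# The host visits every node in turn; slaves never speak unsolicited.
for node_id in nodes:
    message = poll(node_id)
    print(f"{node_id} -> host: {message or 'nothing to report'}")
```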

With the bus or single highway type network, each node links to a single cable. Signals go from the transmitting node through its link to the cable, then through the cable to all potential receivers. The node to which the data are addressed accepts them, while all the others ignore them. Methods (i.e., protocols) have to be adopted to ensure that no more than one terminal transmits at once.

Bus systems generally employ a protocol in which a node wishing to transmit listens to see if any messages are being transmitted. If there is no transmission in progress, the node can take control of the network and transmit its message. This method is known as carrier sense multiple access (CSMA).

We could, however, end up with two stations simultaneously perceiving the network to be clear and simultaneously taking control. The result would be a ‘collision’ of their messages. If such a situation is detected, both stations cease transmitting and wait a random time before attempting to transmit again. This is known as carrier sense multiple access with collision detection (CSMA/CD).
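Here is a toy Python simulation of those CSMA/CD rules; the slot timing, station count, and back-off range are arbitrary illustration values, not parameters of any real Ethernet standard.

```python
import random

# Toy CSMA/CD simulation: two stations sense the channel, collide when
# they start transmitting in the same slot, then back off for a random
# number of slots. Slot timing and station count are arbitrary.
stations = {"A": 0, "B": 0}   # earliest slot each station may try again
sent = set()                  # stations that have gotten a message through

slot = 0
while len(sent) < len(stations):
    ready = [s for s in stations if s not in sent and stations[s] <= slot]
    if len(ready) == 1:                      # channel is clear: success
        print(f"slot {slot}: station {ready[0]} transmits OK")
        sent.add(ready[0])
    elif len(ready) > 1:                     # simultaneous start: collision
        print(f"slot {slot}: collision between {ready}; backing off")
        for s in ready:
            stations[s] = slot + 1 + random.randint(0, 3)
    slot += 1
```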

With ring network topology, a continuous cable links all the terminals in the form of a ring. Again, protocols have to be employed to enable communications from different terminals without mixing messages. Single highway and ring methods are often called peer-to-peer in that each node has equal status.

Different PLC manufacturers adopt different network systems and protocols. For example, Mitsubishi uses a network termed MELSECNET, Allen-Bradley uses Data Highway Plus, General Electric uses GENET, Texas Instruments uses TIWAY, and Siemens has four forms under the general name SINEC. Most employ peer-to-peer topologies.

The ISO/OSI model

Interconnecting several devices can present compatibility problems (e.g., they may operate at different baud rates or use different protocols). In 1979, the International Organization for Standardization (ISO) devised a standard model for open systems interconnection (OSI) to facilitate communications between different devices. The so-called ISO/OSI model divides communication links into seven layers with physical, electrical, protocol, and user standards.

Layer 1 (Physical medium) is concerned with coding and physical transmission of information. Its functions include synchronizing data transfer and transferring bits of data between nodes.

Layer 2 (Data link) defines protocols for sending and receiving information between nodes that are directly connected to each other. Its functions include assembling bits from the physical layer into blocks and transferring them, controlling the sequence of data blocks, and detecting and correcting errors.

Layer 3 (Network) defines switching that routes data between nodes.

Layer 4 (Transport) defines the protocols responsible for sending messages from one end of the network to the other. It controls message flow.

Layer 5 (Session) sets up communication between users at separate locations.

Layer 6 (Presentation) assures that information is delivered in an understandable form.

Layer 7 (Application) links user programs into the communication process and is concerned with the meaning of transmitted information.

Each layer is self-contained and deals only with the interfaces of the layers immediately above and below it, performing its tasks and transferring its results to those adjacent layers. The OSI model enables manufacturers to design products that operate in a particular layer and interface with the hardware of other manufacturers.
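One way to picture the model is as successive encapsulation: on the way down the stack, each layer wraps the data handed to it from above with its own header. This Python sketch uses made-up header strings purely for illustration.

```python
# Illustrative trip down the OSI stack: each layer wraps the payload
# from the layer above with its own (made-up) header before layer 1
# finally transmits the whole thing as bits.
layers = [
    (7, "Application"), (6, "Presentation"), (5, "Session"),
    (4, "Transport"), (3, "Network"), (2, "Data link"),
]

payload = "SETPOINT=42"
for number, name in layers:
    payload = f"[L{number}:{name}]" + payload
print(payload)
# [L2:Data link][L3:Network]...[L7:Application]SETPOINT=42
```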

SOURCE: Bolton, W., Programmable Logic Controllers, Fourth Edition, Elsevier Newnes, Burlington, MA, ISBN 0-7506-8112-8, 2006.

October 24, 2006

TECH TIP OF THE WEEK:

Get real about real-time

There has been a lot of talk about ‘real time,’ such as ‘Microsoft Windows XP is not a real-time operating system.’ It’s pretty clear that control loops have to operate in real time—having to wait 60 sec for the ABS in your car to stop a skid would be unproductive—but what constitutes ‘real time’ in the first place?

The issue revolves around latency, which is a measure of how long you have to wait for a system to react to an input, and determinism, which is whether you can trust that latency to stay within a guaranteed bound.

Think of any control-system element as a two-port black box. One port is where data go in, and the other is where data emerge. If the element is a transmission medium, such as an Ethernet hub, the data that go in should be identical to what comes out. If it’s a computing engine, such as a PLC, the output data may be wildly different from, but related to, the input data. Of course, real system elements often have more than one input and/or output port, but that’s immaterial to this discussion.

Everybody with a technical background, which should include all Control Engineering readers, knows that, when you get right down to it, there has to be some delay between the time data appear at the input and when resulting data appear at the output. That delay is the element’s latency.

The latency for a data bit stuffed into one end of a fiber-optic cable is the cable’s length times the fiber’s refractive index divided by the speed of light. That means the latency for an optical fiber is deterministic. Anytime you stick a bit in one end, you’ll get it out again after a clearly determined time interval. When talking about deterministic systems, engineers usually use the term ‘propagation delay,’ rather than latency, but they mean the same thing.
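A quick back-of-the-envelope check in Python, assuming a 100 m cable and a typical refractive index of 1.5:

```python
# Propagation delay through optical fiber: t = length x n / c.
# The 100 m length and n = 1.5 are example values, not specifications.
C = 3.0e8           # speed of light in vacuum, m/s
length_m = 100.0    # cable length, m
n = 1.5             # typical refractive index of silica fiber

delay_s = length_m * n / C
print(f"{delay_s * 1e9:.0f} ns")   # -> 500 ns, every single time
```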

The same goes for a Boolean logic network built into an FPGA. Each gate takes a certain time to go clickety clack for its part of the computation. By figuring the number of gates that need to go clickety clack, you can calculate the thing’s propagation delay.

That’s not so when you’re talking about most modern computer equipment. For most applications, being able to share computing resources among several users is more important than the time it takes for a given user to get a response.

When you press a key on your Linux workstation keyboard, the keyboard sends the CPU an interrupt saying ‘Hey, I need some service here!’ The CPU then puts the interrupt in a queue and takes care of it whenever it gets around to it.

Ethernet is non-deterministic for a similar but distinct reason. Ethernet communication is a cooperative exercise among all the DTEs (data terminal equipment) on the net. There’s a lot of:

‘I have a packet to send.’

‘Okay, send your packet.’

‘This is what I received, is it what you sent?’

‘No, something got corrupted. I’ll resend it.’

‘Whoops, somebody sent something along the cable at the same time. We’ll both have to do it over again.’

‘Okay, ready when you are.’

‘Here’s your packet.’

This sort of conversation goes on all the time for every packet of information to be sent. And, most blocks of information of useful size get broken up and sent in multiple packets. Everything eventually gets across, but how long it takes is anybody’s guess.

When someone says ‘Linux is not a real-time operating system,’ they mean it is a non-deterministic operating system. But, does anyone really care whether a given control system is deterministic or not?

I submit to you that the answer is an emphatic ‘NO!’

When your wheels start to skid on ice because you’ve pushed your brake pedal too hard, it makes no never mind whether the ABS takes control of the situation in 50 microseconds or 500. As long as you regain control before the car slides off the road, you’re happy.

Similarly, a week’s delay in implementing the provisions of the Kyoto protocol has exactly zero effect on the Earth’s climate.

A tenth of a second delay in controlling a pulse-width modulator trying to reproduce the sound of someone’s voice, however, means the system won’t reproduce anything.

Every control application has its characteristic rhythm, whether its time scale is a microsecond or 100,000 years. That rhythm is what separates real time from non-real time: ‘real time’ is when latency is much shorter than your characteristic time. Determinism only tells you whether you have a guarantee.
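In code, that criterion boils down to a simple comparison. The factor-of-ten margin in this sketch is an arbitrary choice for illustration, not an industry standard:

```python
def is_real_time(latency_s, characteristic_time_s, margin=10.0):
    """'Real time' means latency is much shorter than the application's
    characteristic time; the margin factor is an arbitrary choice."""
    return latency_s * margin < characteristic_time_s

print(is_real_time(500e-6, 0.1))   # ABS: 500 us vs. a ~100 ms skid -> True
print(is_real_time(0.1, 1e-4))     # 100 ms lag on a ~100 us PWM -> False
```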

Source: C.G. Masi, Control Engineering Senior Editor

October 17, 2006

TECH TIP OF THE WEEK:

Picking a robot controller

The following 10 questions can help when selecting a robot controller, according to Joe Campbell, Adept Technology’s VP:

  • Is the application path intensive or pick and place? Software optimized for path functions will not deliver the performance required for high-speed applications.

  • How fast is the required I/O response? While most I/O devices respond happily in milliseconds, some functions like stop-on-force or motion latching require microsecond response.

  • Where is the sensor? If the sensor is external to the controller, assure enough processing and communication bandwidth.

  • Are you safe? Assure compliance with safety standards, including ANSI/RIA R15.06.

  • Going international? If so, make sure the controller (or system) has the CE mark.

  • Is your ‘open’ architecture closed? If you plan to add third-party boards or software, make sure the vendor allows it.

  • Do you know your networks? Make sure you know exactly what hardware connections and software protocols are available.

  • Need dual robots? If using two robots in a single cell, determine if you can live with one controller or if two will be necessary.

  • Do you have enough software power? Match software power to the application.

  • It’s just another controller, isn’t it? Apply traditional control engineering guidelines in determining I/O capacity; selecting and designing the graphical user interface; and providing power isolation and backup, enclosures, and other interconnects.

Source: Gary Mintchell, ‘One (or More) Controller for Every Application,’ Control Engineering, Feb ’02, p. 61.

October 10, 2006

TECH TIP OF THE WEEK:

Controlling controllers through an intranet

Imagine you need to set up a system to, say, control a number of renewable energy devices, such as wind turbines in a wind farm. Each device has its own control loop, but they all need to be under common supervision and coordination. Setting up an intranet based on Internet technology provides a very cost-effective approach, and it works as well for almost any complex multi-level control system.

In general, you need local control of each device, then coordination of all the devices on the network (which adds up to two levels of nested control loops), and finally supervisory monitoring and data archiving.

The individual devices at the first level can be autonomously controlled by embedded computers that have no local user interface. All they need is the ability to communicate over Ethernet.

The next control level monitors the operations of the first-level control loops along with data derived from the external world—such as actual voltage, frequency and phase on the electric-power grid. Based on that information, additional processing power generates control signals that essentially modify the setpoints for the lower-level loops to keep the entire system ‘doin’ what it oughta.’

This is a networking problem, and an appropriate solution is an intranet based on Ethernet and TCP/IP, with a data server to collect the information, make decisions, and pass commands to the local nodes to modify the individual device loop-control parameters.
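As a rough illustration, here is a minimal Python sketch of the data server pushing a revised setpoint to one device node over plain TCP. The address, port number, and JSON message format are all hypothetical; a production system would add message framing, retries, and authentication.

```python
import json
import socket

def send_setpoint(node_host, setpoint_rpm, port=5020):
    """Push one loop-parameter change to a device node and wait for an
    acknowledgment. Host, port, and message format are hypothetical."""
    command = json.dumps({"cmd": "set_point", "rpm": setpoint_rpm})
    with socket.create_connection((node_host, port), timeout=2.0) as sock:
        sock.sendall(command.encode("utf-8"))
        reply = sock.recv(1024)        # block until the node acknowledges
    return reply.decode("utf-8")

# e.g., trim turbine 7 after comparing grid frequency with its loop data:
# print(send_setpoint("192.168.1.107", setpoint_rpm=1480))
```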

Finally, there should be a node devoted to collecting and archiving data. This may be the data server itself, or it might be a separate computer archiving data on a storage area network (SAN) and retrieving it on call.

Notice we haven’t said anything about HMI. The system as described doesn’t know or care about humans. It can run autonomously forever or, more likely, until it runs into a situation it can’t handle—in which case it crashes.

Humans, however, care about the system and can help save it from crashing. With the intranet paradigm, the HMI runs on one or more individual personal computers that interact with the system through the data server. Users can, for example, log on to the intranet to upload revised application software, which the server can then distribute to the device nodes. Users can also monitor operations by downloading real-time or archived data from the archive server.

This intranet solution can be surprisingly inexpensive. It requires one or at most two additional computers beyond the embedded computers controlling the local loops, and an Ethernet switch. It is inexpensive because it relies on technology that is widely used in consumer and business applications.

C.G. Masi, Control Engineering Senior Editor

October 3, 2006

TECH TIP OF THE WEEK:

Two brains are better than one

Many, if not most, embedded-system applications include real-time and non-real-time activities. Generally, mission-critical tasks, such as flight-control operation in a fly-by-wire aircraft, require real-time performance and determinism, whereas other tasks, such as painting the HMI screen, don’t. Partitioning these tasks in a multi-core single-board computer (SBC) makes a lot of sense.

SBCs and embedded systems go together like pen and paper. SBCs are available in various form factors to suit a particular application and the space available.

Most are designed to carry one microprocessor core, since most computer applications get along just fine with one brain, thank you very much. Some newer SBC motherboard offerings, however, provide sockets for two processors, making it possible to divide up computation and analysis tasks while maintaining tight coordination between them.

Most engineers think of compute-intensive applications as the motivation for using multi-core motherboards. They think in terms of parallel or pipelined processing using as many processors as possible to cut processing time.
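As a generic illustration of that motivation, this Python sketch farms independent chunks of a compute-heavy job out to every available core; the busy_sum workload is a stand-in, not real image-reconstruction code.

```python
import multiprocessing as mp

def busy_sum(chunk_start):
    """Stand-in for one independent slice of a compute-heavy job."""
    return sum(i * i for i in range(chunk_start, chunk_start + 1_000_000))

if __name__ == "__main__":
    chunks = [i * 1_000_000 for i in range(8)]      # 8 independent slices
    with mp.Pool(processes=mp.cpu_count()) as pool:
        results = pool.map(busy_sum, chunks)        # slices run in parallel
    print(sum(results))
```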

Techni-Scan Medical Systems has just such an application. They are developing an ultrasonic computed-tomography system for use in breast-cancer imaging. The embedded computer in their current prototype uses several single-core SBCs and takes a few hours to turn one scan’s worth of raw data into a 3-D image. They’re anxiously awaiting delivery of some new multiprocessor SBCs that they believe will cut that processing time to one hour immediately, and to less than one-half hour after optimization.

Cranking up computation speed is not, however, the only motivation for using a multi-core SBC. Two processors tightly linked on one motherboard can make it possible to separate tasks that are qualitatively different, to prevent them from interfering with each other.

Specifically, you can run a real-time operating system (RTOS) on one processor and a more laid-back OS, such as Microsoft Windows XP, on the other. The processor running the RTOS can concentrate on mission-critical tasks, while the non-RTOS processor handles less critical ones, such as displaying interim results of a simulation. You can then program the critical tasks in C or Python, while building your HMI with Visual Basic.
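On an ordinary Linux host (not a true RTOS, which gives much stronger guarantees), you can approximate that partitioning by pinning processes to specific cores. A minimal sketch, assuming Linux and Python 3.3 or later:

```python
import os

# Pin this process to core 0 so a time-critical loop is not preempted
# onto the core reserved for HMI and other non-critical work. Note that
# affinity alone is far weaker than what a real RTOS guarantees.
os.sched_setaffinity(0, {0})     # pid 0 means the calling process

print("control loop confined to cores:", os.sched_getaffinity(0))
# ...run the time-critical loop here; pin HMI processes to {1} elsewhere
```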

Sure, you can run HMI software under the RTOS along with your mission-critical tasks, but do you really want to? Obviously, non-real-time and real-time tasks run on vastly different time scales and have different requirements for determinism. Separating them on different processors makes both run better, simplifies programming and provides greater safety.

C.G. Masi, Control Engineering Senior Editor