Logical evolution in machine control and control systems

The logic portion of the control loop (sense, decide, actuate) is the core of any control system. How that logic has evolved ranges widely from silicon to systems.

Hardware has evolved from relays to programmable logic controllers, distributed control systems, personal computers, process automation systems, and software architectures. Software tools have expanded into unified development environments. Control systems can be embedded, centralized, or decentralized through I/O points into the thick of the processes they control, networked or stand-alone. They can come from a single vendor or multiple vendors, and can be customized or commercial off-the-shelf (COTS). In the end, users want to preserve existing assets, speed set-up, get the most for their money, and, ultimately, save their jobs.

Relays, PLCs, and PCs are all available to be part of any control system. PCs didn't replace control systems; instead, they augmented them, aiding data acquisition, configuration, and other functions. PCs also have been industrialized for use with human-machine interface software and slimmed down for use as thin clients. In the past few years, automation architectures have emerged to buffer users from the pace of Microsoft platform migration. Also, PLC-like controllers with PC guts have made their way onto plant floors. Multiple hardware alternatives have found places in various niches of today's marketplaces and applications.

Software vendors have taken great advantage of expanded hardware platforms. Proliferation of Microsoft-based tools has raised users' expectations so that "difficult-to-use" software no longer sells. It seems every piece of marketing says every software release is easy to use. Further, standards and widely adopted technologies—such as the IEC 61131 languages, OPC, XML, and Visual Basic, among others—make applications easier to develop. Unified development environments have emerged so users can develop code once, then scale and adapt it to many applications. Dashboards make information more available across the enterprise. As a result, software hardly ever stands alone anymore. Everyone has come to understand that making connections and playing nice with others expands the utility and function of control software and everything able to connect to it.

While customized code in closed, proprietary black boxes used to be the rule, now control engineers can be creative about where logic resides. It can be on a chip, embedded on a board, within a system or stand-alone, centralized or decentralized, within a cell or networked across the plant, enterprise-wide, even extending through the entire supply chain to make it more responsive and efficient. Control systems allow users to view and control via the Internet. As a result, security has become more of an issue as systems have become more open.

As control options have become more varied, so have the delivery, setup, and configuration options. Single-vendor service is often preferred for a complete control system. Multiple vendors often serve more distributed systems. Custom engineering, once popular, has given way in many cases to the use of more COTS hardware and software. In-house engineering is often saved for set-up or optimization. Many users prefer to get as far as they can with packages available for purchase, then adapt or customize the last mile.

System integrators once feared they would be less needed as more Microsoft-based software was used. Instead, system integrators find business helping end-users and OEMs fit the pieces together. System integrators also help apply systems and make them work more efficiently with existing assets.

As a result of these changes, increasingly, customers want vendors to:

  • Preserve and maximize the value of existing assets;

  • Speed set-up of new lines;

  • Make software and hardware easy to use; and

  • Deliver the most for the money invested.

There's more platform neutrality, less time for training, higher processing power, more silicon, smarter software, and greater connectedness. This promotes more functionality and richer features. In the end, though, people just want the stuff to work, make their lives easier, and help their organization make more money.

The control loop (sense, decide, actuate) remains a useful reference point for improvements. Depending on the operation, unplanned shutdowns can cost millions. If the proper measurements aren't happening when they need to be, or if the data isn't transformed into information at the time it's needed to make a decision, then there cannot be effective actuation or action.
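The sense-decide-actuate cycle described above can be sketched in miniature. The following toy Python loop is a hedged illustration only—the sensor and actuator are simulated with a crude thermal model, where a real system would read and write plant I/O, and the setpoint, deadband, and gains are arbitrary:

```python
# Toy sense-decide-actuate loop: on/off (bang-bang) temperature control.
# All values are illustrative assumptions, not a real controller design.

SETPOINT = 100.0   # desired temperature, arbitrary units
DEADBAND = 2.0     # hysteresis band to avoid rapid actuator cycling

def decide(measurement: float, heater_on: bool) -> bool:
    """Decide step: simple on/off logic with a deadband."""
    if measurement < SETPOINT - DEADBAND:
        return True          # too cold: turn heater on
    if measurement > SETPOINT + DEADBAND:
        return False         # too hot: turn heater off
    return heater_on         # inside the deadband: hold current state

def simulate(steps: int = 50) -> list[bool]:
    """Run the loop against a crude simulated plant; log actuator states."""
    temperature, heater_on = 90.0, False
    history = []
    for _ in range(steps):
        heater_on = decide(temperature, heater_on)   # sense -> decide
        temperature += 1.5 if heater_on else -1.0    # actuate (simulated)
        history.append(heater_on)
    return history
```

The deadband is the part worth noting: without hysteresis, the actuator would chatter on and off at every scan, which is exactly the kind of wear that the maintenance discussion below tries to avoid.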

Optimizing the control loop, and getting more from the existing assets that comprise systems, requires extending mean time between failures. The best plants, however, have moved past preventive maintenance to failure avoidance with predictive maintenance. Maintenance (exactly and only when needed) can be optimized according to the most convenient schedule of related resources. Part of optimizing performance and predicting when maintenance should occur requires real-time analysis of plant-floor data. Users need to identify and monitor recurring patterns that signal when, where, and what needs attention.
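One minimal way to picture the pattern monitoring described above is a rolling statistical band over a sensor signal: readings that drift far outside recent behavior get flagged for attention. The Python sketch below is an assumption-laden stand-in for real predictive-maintenance analytics—the window size, threshold, and sample data are all invented for illustration:

```python
# Sketch: flag sensor readings that fall outside a rolling statistical
# band. Window and threshold are illustrative assumptions only.
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Example: a steady vibration-like signal with one sudden spike at index 13.
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0,
          1.0, 1.02, 0.98, 5.0, 1.0]
```

Running `flag_anomalies(signal)` flags the spike. Real historian-based tools apply far richer models, but the principle is the same: establish a baseline from recent data, then watch for departures that signal when, where, and what needs attention.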

These thoughts were part of a Control Engineering keynote presentation at the Sept. 11-12, 2003, InStep Software User Group Conference in Chicago. For more on the conference, watch for "Think Again: Accessible, organized, secure," an upcoming column in October 2003 Control Engineering. Issues are generally posted by mid-month at www.controleng.com/issues.

InStep Software includes among its offerings eDNA historian software to help improve equipment reliability.

—Mark T. Hoske, Control Engineering, editor in chief, [email protected]