OPC Solves the I/O Driver Problem

By Al Chisholm, OPC Foundation, May 1, 1998

KEY WORDS

Software for control

Advanced control

Embedded control

Process control systems

Sidebars: Terms

In factory automation there are many different devices, protocols, and industrial network standards. As a result, each software vendor is responsible for connectivity from automation applications to other vendors’ hardware products. An additional complication is that these devices and protocols are always evolving. These constant updates require frequent maintenance of the software.

User benefits

To solve this problem, the original OPC (OLE for process control) Foundation task force developed the OPC Data Access Specification. The specification defines a standard interface for industrial automation that allows software applications and hardware device drivers to simply “plug and play.”

The process of developing the OPC Data Access Specification started with just a few companies on an OPC task force. The OPC Foundation now includes about 140 international member companies, which address what the original OPC task force referred to as the I/O Driver Problem. The lack of a defined standard interface affects vendors, by consuming engineering resources, and end users, by making it more difficult to mix and match devices and software components in a system.

Creating an industry standard is challenging. There is a large installed base in manufacturing that vendors are reluctant to render obsolete. However, the OPC task force was convinced that solving this I/O driver problem would benefit vendors and users without diminishing vendor competition.

Why COM?

The OPC Foundation’s first decision was to define the technological foundation. Should it be platform (operating system) neutral and/or language neutral? Most businesses were already based on Microsoft Windows, so basing the OPC specification on Microsoft standards made sense. Given this, Microsoft’s Component Object Model (COM) seemed the only logical choice. COM was:

An already established technology; the distributed component object model (DCOM) was under development. It appeared that COM/DCOM would provide a good technical basis for the OPC effort. Using COM as the plumbing would minimize the amount of software infrastructure that the OPC task force and vendors using the OPC standard would need to reinvent and reengineer.

Attractive in performance. Questions are frequently asked about the performance of OPC because of earlier performance issues with DDE (Dynamic Data Exchange) and OLE 1.0. The underlying technology, COM, is fundamentally different from and much more efficient than either of those protocols, each of which carried a high and unavoidable level of overhead. COM, by contrast, allows connections between clients and servers where the overhead is essentially zero. In recent tests, more than one million values per second were moved between an in-process OPC server and a client.

Easily extendable. With COM, it is very easy for vendors to add functionality without affecting the defined OPC interfaces. Functionality is added simply by defining additional vendor-specific interfaces. As a result, vendors may add unique features and value, or address issues such as security, while retaining compatibility with other OPC products.
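
As a rough illustration of that mechanism, the sketch below (assuming a Windows/MSVC toolchain) shows a client probing for a hypothetical vendor extension; IVendorDiagnostics and GetFirmwareRevision are invented names, not part of any OPC specification. A standard OPC client never asks for the extra interface and is therefore unaffected by it.

#include <windows.h>
#include <unknwn.h>

// Hypothetical vendor-specific extension; the GUID is a placeholder.
MIDL_INTERFACE("9f1a2b3c-0000-0000-0000-000000000001")
IVendorDiagnostics : public IUnknown
{
    virtual HRESULT STDMETHODCALLTYPE GetFirmwareRevision(ULONG *pRevision) = 0;
};

void QueryVendorExtension(IUnknown *pServer)
{
    IVendorDiagnostics *pDiag = nullptr;

    // Probe for the optional interface; the defined OPC interfaces are untouched.
    if (SUCCEEDED(pServer->QueryInterface(__uuidof(IVendorDiagnostics),
                                          reinterpret_cast<void **>(&pDiag))))
    {
        ULONG revision = 0;
        pDiag->GetFirmwareRevision(&revision);   // vendor-specific added value
        pDiag->Release();
    }
    // If the query fails, the client simply continues with the standard
    // OPC interfaces it already understands.
}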

How OPC object model works

The OPC Foundation developed an object model based on COM. The solution is analogous to NetBIOS, Winsock, or DDE. A set of APIs (application program interfaces) was created to move data in a generic way without dictating the details of the functionality provided by the underlying monitoring or control system.

Next, the Foundation defined the nature of the data exposed and how applications would refer to those data. The most flexible way to do this was to define an interface that would be very good at exposing “named data items” such as PLC42:R40001 or FIC101.PV. The easiest, most universal way to do this was to allow the user to initially pass the names of the items as strings, where the format of the strings would be entirely vendor specific (that is, “opaque” to the clients). After this initial registration, the name is resolved into a “handle” that allows more efficient reading and writing.
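
A self-contained (and deliberately simplified) C++ sketch of that name-to-handle pattern follows; the ItemServer class and its methods are invented for illustration and are not the literal OPC Data Access interfaces. The point is that the opaque item ID string is parsed only once, and every later read or write uses the cheap handle.

#include <cstdint>
#include <iostream>
#include <map>
#include <string>

using ItemHandle = std::uint32_t;   // opaque handle returned by the server

class ItemServer
{
public:
    // The item ID string ("PLC42:R40001", "FIC101.PV", ...) is opaque to the
    // client; only the server knows how to parse its own naming syntax.
    ItemHandle AddItem(const std::string &itemId)
    {
        ItemHandle h = next_++;
        items_[h] = itemId;          // a real server would resolve the address here
        values_[h] = 0.0;
        return h;
    }

    // Subsequent reads and writes use the resolved handle, so the string is
    // parsed once at registration, not on every data transfer.
    double ReadValue(ItemHandle h) const { return values_.at(h); }
    void   WriteValue(ItemHandle h, double v) { values_.at(h) = v; }

private:
    ItemHandle next_ = 1;
    std::map<ItemHandle, std::string> items_;
    std::map<ItemHandle, double> values_;
};

int main()
{
    ItemServer server;
    ItemHandle h = server.AddItem("FIC101.PV");  // one-time name resolution
    server.WriteValue(h, 42.5);
    std::cout << server.ReadValue(h) << "\n";    // fast path uses the handle
}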

Next, the task force determined the binary format in which to return data to the client. The task force chose a data type called a Variant. While this data type is ideal for returning single values, it can also be used to return arrays and structures, should these be defined in the future. A Variant can also be used to move vendor-specific data between a vendor’s servers and clients under today’s specification.
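
For a rough idea of what this looks like in code, the sketch below packs a scalar and a small array into VARIANTs using the standard Windows automation calls (VariantInit, SafeArrayCreateVector, SafeArrayPutElement, VariantClear); the function name and sample values are illustrative, and error handling is omitted.

#include <windows.h>
#include <oleauto.h>

void PackValues()
{
    // A single scalar: the VARIANT carries both a type tag and the data itself.
    VARIANT single;
    VariantInit(&single);
    V_VT(&single) = VT_R8;        // IEEE double
    V_R8(&single) = 101.7;        // e.g. the current value of FIC101.PV

    // An array of values packed into one VARIANT via a SAFEARRAY.
    VARIANT arr;
    VariantInit(&arr);
    SAFEARRAY *psa = SafeArrayCreateVector(VT_R8, 0, 3);
    for (LONG i = 0; i < 3; ++i)
    {
        double value = 100.0 + i;
        SafeArrayPutElement(psa, &i, &value);
    }
    V_VT(&arr) = VT_ARRAY | VT_R8;
    V_ARRAY(&arr) = psa;

    // ... the VARIANTs would be handed to the client here ...

    VariantClear(&single);
    VariantClear(&arr);           // also frees the SAFEARRAY the VARIANT owns
}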

After resolving naming and data type, the remaining issue was how to organize COM/OPC objects to enter names and retrieve values. To make this decision, the needs of typical applications such as human-machine interface (HMI), reporting, historical, and trending applications were examined.

One goal in defining a standard interface was to maximize performance by making the interface so effective and easy to integrate that vendors would want to use it as the primary interface, rather than support it as an add-on the way DDE is. As a result, this required focusing primarily on the C++ user.

However, the task force also wanted to allow for quick and simple applications written in Visual Basic. After considerable experimentation and debate, the task force specified a custom C++ interface as the primary data path and a somewhat simpler automation interface for use by Visual Basic. The VB interface would be created as a “wrapper” around the C++ interface, much as Microsoft’s RDO (Remote Data Objects) is a wrapper around the ODBC (Open Database Connectivity) interface.
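
The sketch below gives a compact, hypothetical picture of that layering; IOpcCustomRead and OpcAutomationWrapper are invented names, and a real automation layer would be exposed through IDispatch rather than a plain C++ class. The essential point is that the wrapper simply forwards each call to the faster custom interface underneath.

#include <windows.h>
#include <oaidl.h>

// Stand-in for a vendor's custom C++ (vtable) interface: direct, low-overhead calls.
struct IOpcCustomRead : public IUnknown
{
    virtual HRESULT STDMETHODCALLTYPE ReadItem(DWORD hItem, VARIANT *pValue) = 0;
};

// The "automation" wrapper a Visual Basic client would ultimately reach:
// each call is forwarded to the custom interface, much as RDO forwards to ODBC.
class OpcAutomationWrapper
{
public:
    explicit OpcAutomationWrapper(IOpcCustomRead *pCustom) : pCustom_(pCustom)
    {
        if (pCustom_) pCustom_->AddRef();
    }
    ~OpcAutomationWrapper()
    {
        if (pCustom_) pCustom_->Release();
    }

    HRESULT ReadItem(DWORD hItem, VARIANT *pValue)
    {
        return pCustom_ ? pCustom_->ReadItem(hItem, pValue) : E_POINTER;
    }

private:
    IOpcCustomRead *pCustom_;
};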

OPC, evolving interfaces

In developing a defined standard interface, OPC needed to address multiple clients accessing data in multiple ways. These clients might include multiple operator display windows, reports, and trend groups, which need different data at different times and at different rates. This need resulted in the invention of OPC Groups. Such groups can be created, used, and deleted on the fly by the client based on minute-to-minute or day-to-day needs. They are analogous to row sets as used by interfaces such as OLE DB.
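
A simplified sketch of the idea appears below, using invented class names rather than the actual OPC group interfaces: each consumer creates its own group with its own name, update rate, and item list, and removes it when it is no longer needed.

#include <algorithm>
#include <memory>
#include <string>
#include <vector>

// A group bundles the items one consumer cares about with its own update rate.
struct Group
{
    std::string name;
    unsigned    updateRateMs;            // how often this set should be refreshed
    std::vector<std::string> itemIds;    // the named items in the group
};

class Server
{
public:
    std::shared_ptr<Group> AddGroup(const std::string &name, unsigned updateRateMs)
    {
        auto g = std::make_shared<Group>(Group{name, updateRateMs, {}});
        groups_.push_back(g);
        return g;
    }

    void RemoveGroup(const std::shared_ptr<Group> &g)
    {
        groups_.erase(std::remove(groups_.begin(), groups_.end(), g), groups_.end());
    }

private:
    std::vector<std::shared_ptr<Group>> groups_;
};

int main()
{
    Server server;

    // An operator display needs fast updates for a handful of tags...
    auto display = server.AddGroup("OverviewDisplay", 250);
    display->itemIds = {"FIC101.PV", "TIC200.SP"};

    // ...while a shift report samples a larger set slowly, then goes away.
    auto report = server.AddGroup("ShiftReport", 60000);
    report->itemIds = {"FIC101.PV", "FIC102.PV", "LIC300.PV"};
    server.RemoveGroup(report);   // groups come and go with client needs
}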

At first, OPC focused mainly on the connection between process I/O devices and supervisory control and data acquisition (SCADA) or distributed control system (DCS) packages, the “I/O driver problem” mentioned earlier. Another priority was to make this interface as flexible and efficient as possible. As a result, the specified interface also works between SCADA or DCS engines and higher-level applications. The development of DCOM has enabled use of the OPC interfaces over a network, in part because of the way COM works.

As of this writing, the OPC Data Access 2.0 specification effort is near completion. This effort involves minor updates to the original OPC specification. Changes were intended to add functionality to the original effort without changing scope or effectiveness.

In response to numerous requests by OPC Foundation members to build on this success and expand the scope of the OPC effort, additional working groups are also active in areas such as Historical Data Access (headed by Dave Rehbein of Fisher-Rosemount), Alarm and Event Message Delivery (headed by Al Chisholm of Intellution), Security Control (headed by Neil Peterson of Fisher-Rosemount), and Batch Data Access (headed by Joe Bangs of Intellution).

As a result of the OPC standard, vendors will spend less time on marginally profitable re-engineering of device interfaces. This situation will be further improved by the availability of OPC toolkits from various vendors.

Using an OPC toolkit, for example, is like getting skilled labor on a CD. Such toolkits are designed to significantly reduce development time; independent field trials have shown driver development time dropping to as little as two days.

Toolkits

The better toolkits contain a number of powerful features that make writing OPC servers easier. (See table.)

For users, OPC is the ideal answer to manufacturers’ need for integrating disparate applications throughout their plants in an easy, cost-effective manner. Instead of being forced to invest in a multitude of custom application interfaces, users can simply “snap together” standard, OPC-compliant components.

OPC enables customers to use best-of-breed products while realizing cost savings from lower development times. System integrators, original equipment manufacturers, and plant-floor engineers will have fewer interfaces to learn and will enjoy faster debugging.

The OPC Foundation has a general membership of about 140 companies in the U.S., Japan, and Europe. All members are active in marketing, tradeshow, and promotional activities; however, only U.S. members manage the technology. Foundation members enjoy early access to specifications and free sample code via the OPC Foundation website at www.opcfoundation.org.

For more information, visit www.controleng.com/info.

Author Information

Al Chisholm, chief technical officer with Intellution Inc. (Norwood, Mass.), is also director on the OPC Board, chairman of the OPC Technical Steering Committee, chairman of the OPC Data Access Working Group, and chairman of the OPC Alarms and Events Working Group.

Terms

API: Application Program Interface.

COM: Component Object Model. A binary standard for software components (objects) that allows them to be combined to produce desired results. Originated by Microsoft (Redmond, Wash.).

DCOM: Distributed Component Object Model. A highly optimized protocol used to extend COM to networks (remote objects).

DDE: Dynamic Data Exchange. A standard convention for information exchange among various Windows software packages.

OLE: Object Linking and Embedding. Also developed by Microsoft, it defines how objects interrelate. Objects can be linked or embedded.

OPC: OLE for Process Control. It’s a communication standard based on OLE concepts.

Variant: A data type used for returning single values; it can also return arrays and structures, should these be defined in the future. Variants can also be used to move vendor-specific data between a vendor’s servers and clients under today’s specification.

Source: Control Engineering