Real-time databases for process control

As process control systems continue to evolve, their software applications become more complex, and often require timely access to, and processing of, massive amounts of data.

By Control Engineering Staff August 14, 2003

It is often assumed that a fast enough data-processing engine, created specifically for a system and tightly integrated with its code, will meet real-time requirements. However, the control application’s input data must often be correlated, merged, or compared across data objects and across time for filtering or analysis. The data must also be shared by concurrent tasks that have different functions, timing requirements, and degrees of importance. Real-time database systems (RTDBSs) are increasingly used to meet these demands. While traditional databases have long served as back-end repositories for control systems, RTDBSs differ in that they are integrated within the systems’ actual real-time processes.

In a typical industrial control system incorporating real-time database management, each device detects the values of some attributes of the real world and makes them available to the database. In turn, the database provides the information needed by various system transactions to perform their functions. Input data comes from field devices (sensors, transmitters, switches, etc.) via the controller’s data-acquisition interfaces, from supervisory control systems (PCs, DCSs, PLCs) via external controller links, and from other controllers via intercontroller connections. Output data is directed to field control and indication devices, supervisory systems, and other controllers.
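The data flow described above can be modeled as a shared in-memory point table: an acquisition task writes timestamped values from field devices, and concurrent transactions read them. The following is a minimal sketch only; the class and method names (`PointTable`, `update`, `read`) and the tag names are hypothetical, not the API of any real product.

```python
import threading
import time

class PointTable:
    """Maps tag names to (value, timestamp) pairs shared by concurrent tasks."""

    def __init__(self):
        self._points = {}
        self._lock = threading.Lock()  # a real RTDBS would use bounded-latency concurrency control

    def update(self, tag, value, timestamp=None):
        """Called by the data-acquisition task when a field device reports a value."""
        with self._lock:
            ts = timestamp if timestamp is not None else time.monotonic()
            self._points[tag] = (value, ts)

    def read(self, tag):
        """Called by control, alarm, or supervisory transactions; None if the tag is unknown."""
        with self._lock:
            return self._points.get(tag)

# Acquisition side: values arrive via the controller's I/O interfaces.
db = PointTable()
db.update("TT-101", 87.5)   # hypothetical temperature transmitter tag
db.update("FT-202", 12.3)   # hypothetical flow transmitter tag

# Transaction side: a control task correlates two inputs.
temp, _ = db.read("TT-101")
flow, _ = db.read("FT-202")
```

The same table would also back the output path: a control transaction writes a setpoint or command tag, and an output task forwards it to the field device or supervisory link.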

One of the most important differences between the databases used by real-time and non-real-time systems is that while a conventional database management system aims to achieve good throughput or average response time, a real-time database must provide a predictable response time to guarantee the completion of time-critical transactions. Such databases therefore avoid components that introduce unpredictable latencies, such as disk I/O operations, message passing, or garbage collection. Real-time databases tend to be designed as in-memory database systems. These forgo disk I/O entirely, and their simplified design (compared to conventional databases) minimizes message passing. The validity of the real-time data in the database may also be compromised if updates cannot be performed fast enough to reflect real-world events. To avoid this, the best solution has been a fully time-cognizant transaction manager. At the very least, the database design should provide some means of transaction prioritization.
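Two of the ideas above — transaction prioritization and the temporal validity of data — can be illustrated with a toy scheduler that runs queued transactions strictly in priority order, plus a freshness check on timestamped readings. This is an assumed sketch for illustration: the `Scheduler` class, the 0.5-second validity interval, and the transaction names are all invented, and a real time-cognizant transaction manager would enforce deadlines preemptively rather than in a simple run loop.

```python
import heapq
import time

VALIDITY_INTERVAL = 0.5  # seconds a reading stays "fresh" (assumed value)

class Scheduler:
    """Runs queued transactions in priority order (lower number = more urgent)."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves FIFO order within one priority level

    def submit(self, priority, fn):
        heapq.heappush(self._queue, (priority, self._seq, fn))
        self._seq += 1

    def run_all(self):
        while self._queue:
            _, _, fn = heapq.heappop(self._queue)
            fn()

def is_fresh(timestamp, now=None):
    """True if a reading is still temporally valid, i.e., recent enough to act on."""
    now = time.monotonic() if now is None else now
    return (now - timestamp) <= VALIDITY_INTERVAL

sched = Scheduler()
order = []
sched.submit(5, lambda: order.append("history log"))        # low-importance task
sched.submit(1, lambda: order.append("safety interlock"))   # time-critical task
sched.run_all()
# The safety interlock runs first even though it was submitted second.
```

A stale reading (older than the validity interval) would fail `is_fresh`, signaling that the transaction should not act on it — the situation the text describes when updates cannot keep pace with real-world events.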

In-memory databases can achieve predictable response times in the microsecond range. These databases are designed to operate in the harsh environment of real-time systems, with strict requirements for resource utilization, and are ready to provide the performance and reliability required by real-life control applications.

This article was written by Dr. Arkady Kanevsky, an adjunct faculty member in the computer science departments of Texas A&M University and Mississippi State University and chair of the Real-Time MPI (MPIRT) Forum, the Direct Access Transports (DAT) Collaborative, and the Virtual Interface Developers Forum (VIDF); and Andrei Gorine, principal architect for McObject, developer of the eXtremeDB real-time database system.

More information, including white papers on in-memory data management, can be found at www.mcobject.com.

—David Greenfield, Editorial Director, dgreenfield@reedbusiness.com