Operational technology to information technology: Creating a circle of improvement

Linking operations technology (OT) systems, such as SCADA and sensors, to IT-level big data analytics is possible, but it calls for the right asset-centric, event-driven view of OT-level trends so the IT-level analytics can find relevant patterns. This article demonstrates that an improvement cycle can be applied to key operational concerns, including equipment health and drilling optimization.

By Roberto Michel, July 14, 2015

The meeting of OT and IT systems is starting to take shape. OT-level systems are effective at what they do: tactical control over industrial processes and equipment assets. For their part, IT-level systems, including big data analytics, are effective at mining data to find patterns. But for the two to work together, there is a need not only to move data between the levels, but also to provide context so that appropriate actions can be taken at the OT level.

In a sense, the two are not merging so much as working in concert, with data from OT systems and Internet-connected sensors feeding higher-level analytics, whose outputs then must be acted upon. It's a virtuous circle in the making, with goals spanning issues such as equipment reliability and uptime, and continuous improvement of operations, such as better rate of penetration (ROP) in drilling.

Technologies that can help this improvement cycle include big data analytics and enterprise historian software with associated asset-centric data models and analytical tools. In the opinion of some observers, what's really needed is a better way to aggregate OT-level trends in one place and apply analytics without requiring engineers to manually gather data from point systems, including multiple SCADA systems.

"Many of our clients have engineers spending 60% to 80% of their time just gathering data-dealing with exports from SCADA systems, reports, or spreadsheets-to even get to the point of analysis," said Kemell Kassim, a vice president with Gray Matter Systems. "So by the time they’ve identified something to take action on, the data they are looking at might be weeks or months old, and meanwhile, the equipment and processes have been running inefficiently during that time."

Enterprise historians and associated analytics can help companies quickly gather data from OT-level systems, spot trends, and recommend adjustments, according to Kassim. This class of solution can also keep high-resolution data intact. "When data is gathered manually, it tends to get dumbed down, meaning you end up looking at averages, or minimums and maximums, which makes it hard to draw accurate conclusions," Kassim said. "You want to maintain high resolution on the underlying data, and also be able to apply some modeling to spot trends that require action."
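
To make Kassim's point concrete, here is a minimal sketch with fabricated data (the sensor values and thresholds are hypothetical, not drawn from any real SCADA export) showing how a short pressure excursion that is obvious at full resolution vanishes once the stream is reduced to an hourly average:

    import numpy as np

    # Hypothetical 1-second pressure readings over one hour, in psi.
    rng = np.random.default_rng(seed=1)
    pressure = rng.normal(loc=100.0, scale=0.5, size=3600)

    # Inject a 30-second transient spike, the kind of event an
    # hourly-average export would smooth away.
    pressure[1800:1830] += 40.0

    print(f"hourly average: {pressure.mean():.1f} psi")  # ~100.3, looks normal
    print(f"raw maximum:    {pressure.max():.1f} psi")   # ~140, reveals the spike

    # At full resolution, a simple threshold test catches the event;
    # on the hourly average alone, it would never fire.
    hits = np.flatnonzero(pressure > 120.0)
    print(f"samples above 120 psi: {hits.size} (first at t={hits[0]}s)")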

Asset-centered analysis

For OT and IT to work together, some technology providers see the need for software that acts as a context layer. Craig Harclerode, an oil and gas industry principal with OSIsoft, said that a context infrastructure for OT-to-IT interoperation should provide a metadata layer for an asset-centric view of tag data, and be able to single out events that can be examined by analytics.

"The context infrastructure’s metadata is an object model that abstracts process-level data so that it can be searched and analyzed by asset names and relationships instead of by tag names," said Harclerode. "In this way, users can more easily compare the performance of similar types of equipment, such as compressors, or pumps that have similar sensor, performance calculations, and operating envelopes rather than trying to find information by tag names.

"Because of the explosion in the number of data points or tags, the tags are becoming so numerous that you can’t really be effective at finding information and analyzing performance of like assets unless you move to an asset-based structure," said Harclerode. The benefit of an OT-level-context infrastructure goes beyond data integration to this idea of circle of improvement between the OT and IT levels. Harclerode referred to this last, crucial phase as "operationalizing" the results of big data analytics.

Oil and gas industry clients have been able to use this context infrastructure capability with OSIsoft's historian to aggregate and pass data to enterprise-level analytics in the proper asset-based model, and to bring insights back into the historian for operational benefit. Marathon Oil is a good example: the company has used the system, along with other analytics, to improve drilling operations. Marathon uses it for drilling performance analytics, aggregating and moving OT-level data to systems such as Tibco Spotfire for further analysis, then displaying results and recommended adjustments back to drill-rig operators through the system.

As Ken Startz, senior business analyst with Marathon Oil, said in a presentation at OSIsoft's 2014 European user conference, "We see the transition from using PI as a tactical historian … to more of an infrastructure around all of our data."

The PI System serves as OT infrastructure for Marathon's MaraDrill system, a customized corporate system used for drilling performance management across approximately 30 leased drilling rigs in three North American shale plays: Eagle Ford, Bakken, and Woodford. The system feeds data into analytics tools, including Spotfire, which provides depth-based drilling performance insights to complement the time-based performance trends available in PI. An XML data export function is used to export a slice of time-based drilling data to Spotfire, where it can be analyzed to give a depth-based understanding of drilling performance, such as weight on bit (WOB) over a given depth.
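
The article does not describe MaraDrill's actual export format, but the time-to-depth re-indexing can be sketched as follows, with hypothetical column names and pandas standing in for the analytics layer: align a time-stamped WOB series with a time-stamped bit-depth series, then view WOB by depth instead of by clock time.

    import pandas as pd

    # Hypothetical time-based slices, as exported from the historian.
    times = pd.to_datetime(["2015-07-14 00:00", "2015-07-14 00:01",
                            "2015-07-14 00:02", "2015-07-14 00:03"])
    wob = pd.DataFrame({"time": times,
                        "wob_klbs": [22.0, 24.5, 23.8, 26.1]})  # weight on bit
    depth = pd.DataFrame({"time": times,
                          "bit_depth_ft": [9800, 9812, 9825, 9839]})

    # Join each WOB sample to the nearest preceding depth reading,
    # then re-index by depth: the same data, but now answering
    # "what was the weight on bit at 9,825 ft?"
    merged = pd.merge_asof(wob, depth, on="time")
    by_depth = merged.set_index("bit_depth_ft")["wob_klbs"]
    print(by_depth)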

By analyzing performance across both time and depth, MaraDrill provides insights that help avoid problems such as "stick-slip," a vibration phenomenon detrimental to efficient drilling. MaraDrill also runs models that generate recommended setpoints, which drilling operators use to adjust WOB, torque, and other parameters in the drilling control system to avoid problems like stick-slip and maximize ROP. According to Startz, about 50% of the benefit of MaraDrill comes from operators following these recommended setpoints, while the other half comes from collecting data in PI and being able to perform analyses on the wells that were drilled.

MaraDrill has helped Marathon Oil improve drilling ROP versus peer-group averages: Marathon's ROP stands at 1,450 ft/day, while peers average 1,050 ft/day. "That is a pretty strong statistic that by using these advanced tools, we can drill faster and better," said Startz.

Equipment health

Many oil and gas companies are trying to improve either process performance or equipment health by bringing data from the OT world into enterprise-level analytics. To do that more effectively, it helps to have an asset-centric view of OT-level data within asset monitoring solutions, according to Bart Winters, business manager of asset management solutions for Honeywell Process Solutions (HPS).

"A key challenge is organizing data in the context of assets, so that you’re dealing with the data in terms of the attributes associated with a piece of equipment, rather than cryptic underling tag names," said Winters. "For example, it’s easier to troubleshoot pumps or heat exchangers if an engineer or support person can drill down into an asset hierarchy, beginning at the rig or well level, moving down to asset type, such as a pump, and then looking at the attributes of interest, such as inlet pressure. This requires the analytics software to have an abstraction layer that organizes data around asset types and attributes, but can still access tag-level data."

An effective analytics solution for taking action on OT-level trends should also be able to perform rapid calculations on process or equipment performance data, comparing current operating efficiency against expected performance to catch deviations from goals, according to Winters. "Besides these first-principles calculations, asset monitoring solutions also need rules engines to kick off workflows," he said.

Asset analytics should also include data cleansing logic to automatically compensate for corrupt or missing data, as well as the ability to consider data beyond tag-level process data, such as lab samples, work order data, vibration monitoring data, or oil analysis data. The sum of these capabilities, according to Winters, is what allows end-user organizations to mine OT-level data for the trends that require action. "You need the ability to bring multiple data sources in, cleanse and analyze the data, and do exception-based notifications and workflows when conditions or trends are not matching the expected model," he said.
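
The pipeline Winters outlines can be compressed into a sketch like the one below. The pump curve, coefficients, and thresholds are made up for illustration and reflect nothing about the Honeywell product's internals: cleanse a flow series, compare measured flow against a first-principles expectation, and raise an exception only when the deviation persists.

    import math

    def cleanse(series, lo=0.0, hi=5000.0):
        """Drop sensor glitches and fill gaps with the last good value."""
        cleaned, last_good = [], None
        for x in series:
            if x is None or math.isnan(x) or not (lo <= x <= hi):
                x = last_good  # corrupt or missing -> carry last good reading
            if x is not None:
                last_good = x
                cleaned.append(x)
        return cleaned

    def expected_flow(speed_rpm):
        """Hypothetical first-principles pump curve: flow scales with speed."""
        return 0.62 * speed_rpm  # gpm per rpm; made-up coefficient

    def exception_fires(flows, speeds, tolerance=0.10, persistence=3):
        """Rules-engine stand-in: flag when measured flow runs more than
        `tolerance` below expected for `persistence` consecutive samples."""
        run = 0
        for f, s in zip(flows, speeds):
            run = run + 1 if f < (1 - tolerance) * expected_flow(s) else 0
            if run >= persistence:
                return True  # would kick off a notification/workflow
        return False

    raw = [1050.0, float("nan"), 990.0, 870.0, 862.0, 858.0]  # gpm readings
    flows = cleanse(raw)
    print(exception_fires(flows, speeds=[1750] * len(flows)))  # True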

Shell uses HPS' asset monitoring and analysis software for performance and asset health improvement in its deepwater Gulf of Mexico operations, according to Winters. The system supports what Shell calls its exception-based surveillance (EBS) program for the region.

The EBS program takes in multiple sources of OT-level data and performs data cleansing to adjust for noisy and incomplete data so that valid alerts are triggered. Shell sees the EBS workflows as a higher-level, more multifaceted way of dealing with exceptions than the operational alarming that occurs in OT-level systems when a single tag crosses a threshold. The EBS workflows also differ from OT-level alarming in that they escalate to the engineers responsible for long-term operational or asset health improvements, rather than to operators who need to concentrate on short-term adjustments.

The aim of asset monitoring and analytics is to consider various sources of OT-level data, and then apply data cleansing, rules engines, and workflow capabilities to elevate crucial issues for engineers. This helps identify both subtle precursors to equipment failure that might go unnoticed in traditional OT-level systems and inefficiencies such as primary and backup pumps running in parallel. "The basic goal in the use of monitoring and analysis solutions is to reduce the frequency and impact of unplanned capacity loss," said Winters. "You might avoid a more catastrophic event in some cases, but it's also about monitoring for efficiency."

Halliburton's big data expert on why Hadoop can help OT-level performance

Open source big data technologies, including Hadoop and Storm, should be seen as complementary to traditional systems used at the OT level for monitoring and analysis. These big data technologies will help oil and gas companies get a complete picture of the diverse data and complex relationships that influence issues such as drilling performance, according to Dr. Satyam Priyadarshy, chief data scientist at Halliburton's Landmark division.

Hadoop can be thought of as a way to store large amounts of complex data of any type without losing its resolution or data model, according to Priyadarshy. "This is because unlike traditional data warehouse technology based on an extraction, transformation, and loading paradigm, Hadoop is used to ingest data in native format, and the transformation step is performed during the computing phase to discover value from big data," he said.
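
A minimal PySpark sketch of that schema-on-read idea follows; the paths and field names are hypothetical. The raw files land untouched, and the joining and shaping happen at query time rather than at load time, which is the inversion of ETL that Priyadarshy describes.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

    # Ingest in native format: no upfront transformation, nothing discarded.
    sensors = spark.read.json("hdfs:///raw/rig_sensors/")    # raw JSON as-is
    workorders = (spark.read.option("header", True)
                  .csv("hdfs:///raw/maintenance/"))          # raw CSV as-is

    # The "transformation" happens here, at compute time: join sensor
    # readings to maintenance work orders only when a question demands it.
    answer = (sensors
              .join(workorders, on="asset_id", how="left")
              .groupBy("asset_id")
              .agg(F.avg("vibration").alias("avg_vibration"),
                   F.count("workorder_id").alias("open_workorders")))
    answer.show()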

The result is that many types of data, from flat-file sensor data and historian tag data to maintenance work orders, geology data, and even video or scans of handwritten notes, can be stored and used within a Hadoop system. "What it provides you, at a base level, is a single source of truth," Priyadarshy said.

In some instances, big data must be analyzed in near real time, which is where technologies like Storm and Spark come into play, because they are able to perform stream processing of data. "Halliburton Landmark's software is able to integrate with these big data technologies," Priyadarshy said, "so that users can look at every available data source to find out which factors, or subtle causal relationships between factors, lead to lower-than-expected performance around issues such as ROP in drilling operations."

Internally, Halliburton has begun to employ big data engines such as Hadoop to optimize operations by considering the full range of data that might influence performance.
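
As a rough sketch of the stream-processing side (the Kafka topic, fields, and thresholds are hypothetical, and Spark Structured Streaming stands in here for whichever engine is used; Storm would fill the same role): compute ROP over a sliding window and surface underperforming wells as the telemetry arrives.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("streaming-rop-demo").getOrCreate()

    # Hypothetical telemetry stream: "well_id,timestamp,bit_depth_ft" records.
    raw = (spark.readStream.format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "drilling-telemetry")
           .load())

    telemetry = (raw.selectExpr("CAST(value AS STRING) AS line")
                 .selectExpr("split(line, ',')[0] AS well_id",
                             "CAST(split(line, ',')[1] AS TIMESTAMP) AS ts",
                             "CAST(split(line, ',')[2] AS DOUBLE) AS depth_ft"))

    # ROP over a sliding 10-minute window: depth gained per window,
    # scaled to ft/day (x6 windows per hour, x24 hours).
    rop = (telemetry
           .withWatermark("ts", "10 minutes")
           .groupBy("well_id", F.window("ts", "10 minutes", "1 minute"))
           .agg((F.max("depth_ft") - F.min("depth_ft")).alias("ft_gained"))
           .withColumn("rop_ft_day", F.col("ft_gained") * 6 * 24))

    # Flag only the windows running below target for operator attention.
    alerts = rop.filter(F.col("rop_ft_day") < 1050)
    alerts.writeStream.outputMode("update").format("console").start()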

"Big data analytics is all about looking at the total picture and can be combined with industry applications that have specific domain knowledge or data cleansing algorithms," said Priyadarshy. "Big data is about all the data relevant to a business. So whether the data comes from operational technology systems, or the data is from IT or business systems, you want to look at the whole picture. Traditional analytics that people have done have mostly been done in silos, and when you do things in silos, you can only get so much value from it. Roberto Michel is a freelance writer and editor with more than 20 years of experience with business-to-business publications. Courtesy: Roberto Michel

– Roberto Michel is a freelance writer and editor with more than 20 years of experience with business-to-business publications. Edited by Eric R. Eissler, editor-in-chief, Oil & Gas Engineering, eeissler@cfemedia.com

For more information:

OSIsoft User Conference; Marathon presentation on Real Time Operational Intelligence 

Honeywell Process Solutions, Asset Sentinel solution page