Optimizing asset performance in refineries

Advanced analytics software uses machine learning to simplify the user experience and generate insights from process data.

By Michael Risse May 30, 2019

These are treacherous times for refineries worldwide. Buffeted by pricing volatility, regulatory complexity and the changing tides of public opinion, these companies must focus like never before on the bottom line. Profitability hinges on optimizing the performance of capital-intensive assets used in mission-critical processes (Figure 1).

At the same time, the process industry reality is that many refineries are drowning in data, from historian-based process data to contextual data from business and manufacturing systems, while thirsting for insights. Now, with the rise of the Industrial Internet of Things (IIoT), the sheer volume of data is expanding exponentially.

To be clear, the data is available, along with complementary data from other sources, but software for creating value from this data has been sorely lacking. Advanced analytics software is needed to:

  • Access data from all the sources in an industrial setting
  • Contextualize and cleanse the data to prepare it for analysis
  • Enable engineers and other process experts to investigate and share insights.

Meanwhile, engineers have been hampered by software, such as spreadsheets, that cannot perform these tasks efficiently, resulting in a low return on investment and excessive human resources requirements. And even when advanced analytics were deployed in the form of machine learning and other Big Data innovations, data scientists and other IT experts were required to gather and prepare data before analysis could even begin. This lack of innovation marooned engineers without a way to crystallize insights.

Today, the gap between mountains of data and the insights that drive improved process outcomes has been bridged, empowering engineers to help drive better business results. Advanced analytics software is opening a new world of process and performance optimization to improve refinery operations dramatically. This software uses machine learning and other underlying innovations to empower the engineers and experts charged with creating value from their company’s data.

As is true for other process companies, refiners retain a wide range of employees, who in turn have different information needs according to their roles. Maintenance personnel examine diagnostics, predictions and self-service analytics. Plant and operations managers and staff review documents, reports and dashboards. Teams need to share knowledge and collaborate. Operators and engineers require near real-time monitoring of processes and asset performance. And at the top of the corporate pyramid, executives demand information to make decisions and take actions to drive profitability. Being able to provide each role with the right insight, without the intervention of a data scientist, is now within reach, thanks to modern advanced analytics.

Advanced analytics

Advanced analytics software helps refineries unlock knowledge to enable more efficient production and drive profitability. There are two critical components to an advanced analytics approach.

First, it should be a self-service offering for the engineers who have the required experience, expertise and history with the plant and processes. This enables engineers to work at an application level with productivity, empowerment, interaction and ease-of-use benefits (Figure 2). Furthermore, engineers, teams, managers and organizations can use these new capabilities to enable the distribution of benefits throughout a plant and a company.

Second, the advanced analytics solution should include a connection between the created insights and the underlying data set so users simply can click through and drill down to the data of interest. Advanced analytics offerings should not just produce data visualizations, but also provide access to the calculations and sources used to generate the outputs.

Advanced analytics solutions accomplish these goals in part by using machine learning and other built-in intelligence tools to simplify and speed the user experience.

Here are three refinery use cases demonstrating the application of machine learning algorithms within advanced analytics software. Although these use cases are from refinery operations, much of the analysis is applicable to other petrochemical plants.

Heat exchanger monitoring

The challenge for this refiner was to proactively predict the end-of-cycle for a heat exchanger as a result of fouling. This would enable risk-based maintenance planning. It also would enable the optimization of processing rates to improve margins, the optimization of required heat energy to minimize operating costs and the minimization of maintenance costs.

The solution was to use an advanced analytics application specifically designed to work with process time-series data. Specifically, it meant using a first principles equation to calculate the heat transfer coefficient (U) from temperature and flow rate data stored in the process historian. The next step was to use a prediction tool to create a model predicting U-value data as a function of time, and to determine the end-of-cycle date versus the known minimum U performance threshold (Figure 3). Once this methodology was applied to one heat exchanger, it was then available for use with additional units across the refinery.
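The workflow above can be sketched in a few lines of code. This is a minimal illustration, not the vendor's implementation: it assumes the classic first-principles relationship U = Q / (A × LMTD), a roughly linear fouling trend, and synthetic data in place of historian tags (all names and numbers here are hypothetical).

```python
import numpy as np

def heat_transfer_coefficient(q_duty, area, lmtd):
    # First-principles U-value: U = Q / (A * LMTD)
    # q_duty: heat duty, area: exchanger surface area, lmtd: log-mean temp difference
    return q_duty / (area * lmtd)

def predict_end_of_cycle(days, u_values, u_min):
    # Fit a linear degradation trend to historical U-values and solve
    # for the day the trend crosses the minimum performance threshold.
    slope, intercept = np.polyfit(days, u_values, 1)
    if slope >= 0:
        return None  # no fouling trend detected; no end-of-cycle to predict
    return (u_min - intercept) / slope

# Synthetic fouling history: U degrades from 500 toward a 300 threshold
days = np.arange(0.0, 100.0, 10.0)
u_history = 500.0 - 1.5 * days
eoc_day = predict_end_of_cycle(days, u_history, u_min=300.0)
```

In practice the U-values would be computed point-by-point from cleansed temperature and flow signals, and the model would be refit as new data arrives, but the threshold-crossing logic is the same.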

Benefits included the ability to perform risk-based maintenance planning based on monitoring of heat exchanger performance degradation. The refinery also was able to optimize operational plans based on potential rate reduction penalties and planned maintenance costs. Production rate reductions due to heat transfer constraints were eliminated, saving millions of dollars. Unplanned heat exchanger maintenance was minimized, saving thousands more. Payback was achieved by predicting and planning, and therefore avoiding, a single failure event.

Catalyst end-of-run prediction

In this use case the challenge was to predict end-of-run for a fixed-bed catalyst system to optimize near- and long-term economics. This required the selection and examination of historical data for training the correlations, which were auto-updated as new data became available. Another challenge was to provide insights to enable collaborative analysis and investigation between the refinery licensor and the catalyst vendor.

The solution was to implement first principles equations for calculating normalized weighted average bed temperature (WABT) for the fixed-bed reactor system. Next, WABT was normalized for variables such as feed rate, feed and product quality, and treat gas ratio. Prediction was then used to create a model to predict normalized WABT as a function of time within steady state conditions. This enabled the refinery to determine the end-of-run date versus the known WABT performance threshold, and to apply this methodology to their other fixed-bed catalyst processes.
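The same trend-and-threshold pattern applies here. The sketch below uses a common convention for bed WABT (inlet temperature plus two-thirds of the bed temperature rise, weighted by catalyst fraction) and a linear deactivation trend; the feed-rate and quality normalization steps are omitted, and all data are synthetic and hypothetical.

```python
import numpy as np

def bed_wabt(t_in, t_out):
    # Common convention: bed WABT = T_in + (2/3) * (T_out - T_in)
    return t_in + (2.0 / 3.0) * (t_out - t_in)

def reactor_wabt(t_ins, t_outs, cat_fracs):
    # Catalyst-weight-averaged WABT across all beds in the reactor system
    beds = [bed_wabt(ti, to) for ti, to in zip(t_ins, t_outs)]
    return float(np.dot(beds, cat_fracs))

def predict_end_of_run(days, wabt_norm, wabt_max):
    # Fit a linear deactivation trend to normalized WABT and solve for
    # the day the trend reaches the maximum allowable temperature.
    slope, intercept = np.polyfit(days, wabt_norm, 1)
    if slope <= 0:
        return None  # no deactivation trend detected
    return (wabt_max - intercept) / slope

# Synthetic deactivation: normalized WABT rising 0.1 deg/day toward a 410 limit
days = np.array([0.0, 30.0, 60.0, 90.0])
wabt_trend = 380.0 + 0.1 * days
eor_day = predict_end_of_run(days, wabt_trend, wabt_max=410.0)
```

The article's normalization step (for feed rate, feed and product quality, and treat gas ratio) would be applied to the raw WABT series before fitting, so the trend reflects catalyst deactivation rather than operating changes.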

Benefits included monitoring of catalyst deactivation to allow co-optimization of near-term economics and risk-based maintenance planning. Better prediction of end-of-run allowed more effective analysis of the tradeoff between rate reduction and maintenance costs. Calculation of end-of-life for the catalyst enabled rapid detection of unexpected changes and performance of corrective actions.

Salt deposition risk monitoring

The challenge in this case was to identify when the refinery was operating at high risk of salt deposition in crude and FCC fractionator overheads and hydro-processing effluent trains. These depositions can lead to unplanned shutdowns from highly accelerated corrosion and fouling. Results needed to be presented as a continuous signal, expressed as a percent of time at-risk. The data required for analysis resided in multiple systems unaligned in terms of time stamps and other metadata.

The solution was to import time-series lab data (H2S, NH3, HCl, etc.) and process data (temperature). The next step was to calculate salt deposition temperatures for NH4HS and NH4Cl, and to compare these values to limits using first principles with safety margins. The final step was to use deviation search and histogram functions to identify high-risk periods, and to provide visualization of at-risk results.
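The continuous at-risk signal described above can be sketched as follows. This is a simplified illustration: the salt deposition temperatures are assumed to have been precomputed from the lab data (the equilibrium calculations are not shown), and the safety margin and all sample values are hypothetical.

```python
import numpy as np

def percent_time_at_risk(t_process, t_salt, margin=14.0):
    # Flag samples where the overhead temperature is within the safety
    # margin above the salt deposition temperature (deposition risk rises
    # as the process cools toward the salt point), then express the
    # result as a percent of time at risk. Margin of 14 degrees is an
    # illustrative assumption, not a recommended value.
    t_process = np.asarray(t_process, dtype=float)
    t_salt = np.asarray(t_salt, dtype=float)
    at_risk = t_process < (t_salt + margin)
    return 100.0 * at_risk.mean()

# Hypothetical overhead temperatures vs. a calculated 118-degree salt point
pct_at_risk = percent_time_at_risk([150.0, 140.0, 130.0, 120.0], [118.0] * 4)
```

In a deployed analysis, the process and lab series would first be time-aligned and cleansed, since, as noted, the source systems do not share time stamps or metadata.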

Benefits included minimization of lost production from fouling in trays, exchangers and pipes, and elimination of unplanned shutdowns from accelerated corrosion. Unplanned shutdowns from accelerated corrosion can lead to safety incidents and cost millions of dollars in terms of lost margin opportunity. Unplanned maintenance was greatly reduced, saving thousands more, with payback realized by predicting and planning for one failure event.

Final words

For each of these three examples, the data could have been analyzed using spreadsheets or other general-purpose software tools, but the required effort, complexity and time would have been excessive. This alternative approach also would have required assistance from IT and data science experts, adding complications due to the required coordination.

Advanced analytics software substantially reduced the required effort. It also cut complexity, allowing refinery process engineers and experts to interact directly with the data of interest using an iterative process, a requirement for solving these and other difficult process problems.

Original content can be found at Oil and Gas Engineering.


Author Bio: Michael Risse is the CMO and vice president at Seeq Corporation, a company building advanced analytics applications for engineers and analysts that accelerate insights into industrial process data. He was formerly a consultant with big data platform and application companies, and prior to that worked with Microsoft for 20 years. Michael is a graduate of the University of Wisconsin at Madison, and he lives in Seattle.