Modernize electronic flow measurement with MQTT

MQTT is an extremely simple and lightweight publish/subscribe messaging protocol

By Arlen Nipper July 30, 2020

To discuss modernizing electronic flow measurement (EFM), let’s start at the top.

In 1980, I was at Amoco Oil, and EFM, or measuring how much oil or gas flows through a pipeline, was done with round chart recorders rather than anything electronic.

A pen would chart the pressure and temperature over the course of a month, and then someone would physically collect the round charts, assuming the ink hadn’t run out and the charts hadn’t been eaten by mice (Figure 1). Then, engineers would integrate the temperature and pressure with a calculator to determine volume. You can imagine an engineer’s office with a couple of hundred of these paper charts lying around and the office inhabitants trying to make sense of the data.

Sometime in the early 1980s someone realized the data could be measured electronically and integrated on a computer. Eventually the American Petroleum Institute (API) released a standard that said if you are going to measure oil and gas and then sell it, the measurement must be precise.

In the ensuing rush to automate EFM for better measurement, millions of flow computers were installed at companies all over the world. Over the next 30 years there were something like a dozen different manufacturers and a dozen protocols for EFM.

As a result, today we see a heterogeneous, legacy infrastructure for EFM, with many customer-specific, proprietary solutions. Some oil & gas companies have thousands of flow computers, so upgrading to new technology is not easy: a technician must go out to unwire and uninstall each flow computer, then install and wire in new equipment. At tens of thousands of dollars to modernize each flow computer, the cost is often prohibitive.

Not to mention, the flow computers in the field today are accurate. They work. We all know if it isn’t broken, don’t fix it. The problem is not accuracy, but consistency.

Challenges with varied EFM data types

Today’s flow computers do double duty, serving multiple hosts and multiple data types, which is not conducive to typical SCADA processes. From an operations standpoint, the real-time operational data includes configuration, alarm, event, and history data. Per the API standard, these data points are atomic, meaning they need to stay together, and you need to prove they were all gathered at the same time, which is incredibly challenging. These data points are accessed through the flow computer’s native protocol.

On the other side of the flow computer is the cash register, which says that in the last hour this pipeline delivered x barrels of oil or x cubic feet of natural gas, and that two hours ago someone changed the calibration of the flow computer. This accounting type of data is accessed differently, but from the same instrument.

Several challenges exist with EFM today, but the primary concern is that most existing networks do not have the bandwidth to meet all these demands for data. Companies are making decisions they shouldn’t have to — they have to decide what data to leave stranded since they cannot get to it all. They are sacrificing data availability for lack of bandwidth.

The reason networks do not have enough bandwidth is that EFM protocols are poll/response. The user sends a poll out over the network, waits while the flow computer assembles a response, receives the response, and only then moves on to the next flow computer. Poll/response protocols hit a ceiling: you can’t poll fast enough on the network, and a lot of valuable data is left stranded in the field.
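To make the arithmetic concrete, here is a minimal sketch in Python (the device names, fleet size, and timings are invented for illustration): each blocking exchange must complete before the next device can be polled, so a full scan grows linearly with the size of the fleet.

```python
import time

# Illustrative only: a simulated poll/response scan cycle.
PER_POLL_SECONDS = 0.5                      # assumed round trip on a slow field link
flow_computers = [f"fc-{i:02d}" for i in range(5)]  # hypothetical fleet

def poll(device: str) -> dict:
    """Stand-in for one blocking poll/response exchange with a flow computer."""
    time.sleep(PER_POLL_SECONDS)            # wait while the device assembles a reply
    return {"device": device, "flow": 0.0}  # placeholder reading

start = time.time()
for fc in flow_computers:
    poll(fc)                                # the next device waits until this returns
print(f"Full scan of {len(flow_computers)} devices: {time.time() - start:.1f} s")
# Scan time grows linearly with fleet size; anything that changes
# between scans is never seen by the host.
```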

The takeaway is that because of the state of EFM, users must choose what data they want to leave stranded in the field — and that is not a good thing.

EFM requirements

Even if users were sticking with the status quo, it is clear challenges exist with EFM. Now, what about new requirements? The industry is abuzz with artificial intelligence, process optimization, increasing production and more. All these activities mean multiple data consumers, more strain on bandwidth and more siloed data sets.

The good news is that to enable these technologies, companies don’t need to re-instrument their entire system. All the equipment out there is doing the job well; we just need better access to the data.

Now, a plant can improve its network a little and get a bit more data. But these new requirements entail an order of magnitude more data, delivered faster than ever. Perhaps a manufacturer was asking for a set of five process variables, but now it wants 100, and instead of every hour, it wants those 100 every minute. These types of improvements are possible, but changes must be made.

Once a customer achieves this increase in data points, the advanced functions become a byproduct of the data. If you see data every minute, you can enable artificial intelligence, predictive maintenance and other new applications by using that data for real intelligence.

I worked with a customer who rolled out the changes we are about to discuss and went from getting data once per hour to once per minute. Achieving that kind of visibility required them to retrain their operators, who were suddenly watching live data for the first time. They can now see how the pipeline is operating in near real-time. Forget about optimization or artificial intelligence; just having more visibility allows them to do a better job operating the assets they have today.

MQTT solves EFM challenges

If we forgot the past 35 years of EFM technology, eliminated any preconceived notions and invented something new today, how would we want it to work? We would want a flow computer plugged into a modern TCP/IP network. We would want to connect, authenticate, and then find everything we want to know. It should be that simple.

The good news is there is a way to do that efficiently, using all the advantages of the underlying TCP/IP networks we have today. It turns out we don’t have to use poll/response; there is a better way.

I co-invented MQ Telemetry Transport (MQTT) in 1999 with Dr. Andy Stanford-Clark of IBM as an open standard for running a pipeline. The project was for Phillips 66, which wanted to use VSAT communications more efficiently for its real-time, mission-critical SCADA system. Multiple data consumers wanted access to the real-time information (see Figure 2).

Figure 3: The basic MQTT architecture allows for unlimited clients over a publish/subscribe protocol. Courtesy: Cirrus Link

MQTT is the most-used messaging protocol for IoT solutions, but it’s time for the oil & gas industry to catch up and apply it to EFM. We don’t have to prove that MQTT has become a dominant IoT transport — everyone is using it because it is simple, efficient, and runs on a small footprint. You can publish any data you want on any topic.
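As a minimal sketch of that simplicity, here is how a single reading could be published with the open-source Eclipse Paho Python client; the broker address, topic, and payload fields are placeholders, not part of any standard.

```python
import json
import paho.mqtt.publish as publish  # assumes the paho-mqtt package is installed

# A hypothetical reading. MQTT treats the payload as opaque bytes,
# so any serialization (JSON here) and any topic string will do.
reading = {"tag": "fc-012/temperature", "value": 68.4, "ts": 1596110400000}

publish.single(
    topic="field/station-4/fc-012/temperature",  # any topic you choose
    payload=json.dumps(reading),
    qos=1,                                       # at-least-once delivery
    hostname="broker.example.com",               # placeholder broker address
)
```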

We recently created a specification within the Eclipse Tahu project called Sparkplug that defines how to use MQTT in a mission-critical, real-time environment. Sparkplug defines a standard MQTT topic namespace, payload, and session state management for industrial applications while meeting the requirements of real-time SCADA implementations. Sparkplug is a great starting point for how to use MQTT in EFM.
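For a sense of what that standard namespace looks like, here are a few example topics following the Sparkplug pattern spBv1.0/group_id/message_type/edge_node_id/device_id; the group, node, and device names are illustrative, while the message types come from the specification.

```python
# Sparkplug B topic pattern (from the spec):
#   spBv1.0/{group_id}/{message_type}/{edge_node_id}[/{device_id}]
# Group, node, and device names below are made up for illustration.
example_topics = [
    "spBv1.0/WestDivision/NBIRTH/station-4",         # edge node announces itself and its metrics
    "spBv1.0/WestDivision/DBIRTH/station-4/fc-012",  # a flow computer's "birth certificate"
    "spBv1.0/WestDivision/DDATA/station-4/fc-012",   # data published on change (report by exception)
    "spBv1.0/WestDivision/NDEATH/station-4",         # delivered by the broker if the session drops
]
```

The BIRTH and DEATH message types are what give the host session state awareness: the broker itself publishes the death certificate if an edge node’s connection is lost.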

One of the challenges with old EFM methods is dealing with atomic data. SCADA systems can handle single process variables, but multiple variables put together as a record give them trouble. MQTT can publish this type of data as records. MQTT and Sparkplug can handle both single process variables and whole records as one object, with data standardization, data time-stamped at the source, data sent as an immutable object and the ability to store and forward.
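A sketch of what that looks like: the API-mandated fields travel together as one timestamped, immutable payload instead of as separately polled points. The topic, field names, and broker address are again placeholders, and real Sparkplug payloads are encoded as protobuf; JSON is used here only for readability.

```python
import json
import time
import paho.mqtt.publish as publish  # assumes the paho-mqtt package

# One hourly EFM history record as a single object (illustrative fields).
record = {
    "ts": int(time.time() * 1000),   # time-stamped at the source, in ms
    "meter": "fc-012",
    "flow_mcf_per_h": 412.7,
    "static_pressure_psi": 563.2,
    "temperature_f": 68.4,
}

# The whole record is one immutable publish, so the fields provably
# belong to the same sample and stay together end to end.
publish.single(
    topic="efm/history/station-4/fc-012",
    payload=json.dumps(record),
    qos=1,                           # at-least-once; pair with store-and-forward at the edge
    hostname="broker.example.com",   # placeholder
)
```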

EFM and MQTT in practice

EFM creates vast amounts of data at the source. With MQTT there is a standard way to send that data across the enterprise, so no matter what language a piece of equipment or an end application speaks, the payload is just a piece of data, such as a temperature. All of those back-end systems can now use the temperature independently, however they like. EFM data can be connected to the cloud and to big data applications — the possibilities are endless.
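To sketch what one of those back-end consumers looks like: any application can connect, authenticate, and subscribe with a wildcard to receive everything published, without adding any polling load on the field devices. The host, credentials, and topic filter below are placeholders.

```python
import paho.mqtt.subscribe as subscribe  # assumes the paho-mqtt package

def on_message(client, userdata, msg):
    # Each consumer gets its own copy of every reading, independently
    # of the others; the field devices are never polled again.
    print(msg.topic, msg.payload.decode())

# Connect, authenticate, then "find everything": a wildcard subscription
# delivers every topic in the namespace. This call blocks and processes
# messages forever.
subscribe.callback(
    on_message,
    topics="spBv1.0/#",                                   # everything in the Sparkplug namespace
    hostname="broker.example.com",                        # placeholder broker
    auth={"username": "efm-host", "password": "secret"},  # placeholder credentials
)
```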

Let’s take one last look at a customer that implemented MQTT on its EFM system. Its number one goal was to get more production from its wells. With MQTT it can adjust a well once per minute instead of once per hour, which led to an increase in field efficiency of five percent, or hundreds of millions of dollars. It was also able to eliminate three different host systems, each representing hundreds of rack servers, and move to a single source of truth for data with one large MQTT broker, simplifying the overall infrastructure significantly.

The misconception in the industry is that modernizing EFM requires a large financial investment along with a lot of time and effort. MQTT is open-source, and it can be implemented on existing legacy equipment.

Original content can be found at Oil and Gas Engineering.


Author Bio: Arlen Nipper is president and CTO of Cirrus Link, bringing more than 40 years of experience in the SCADA industry. He was one of the early architects of pervasive computing and the Internet of Things and co-invented MQTT, a publish-subscribe network protocol that has become the dominant messaging standard in IoT. Arlen holds a bachelor’s degree in electrical and electronics engineering (BSEE) from Oklahoma State University.