Solving Process Instability
Some days it may seem like there are gremlins in the plant doing their best to make your life miserable. Nothing will run in automatic like it is supposed to and you can’t figure out why. What are the main causes of unruly processes, and how do you fix those at the source?
“Upsets can come from anywhere: a thunderstorm, variations in raw materials, or failed equipment,” says George Buckbee, P.E., VP of marketing and product development for ExperTune. “The real problem is that upsets tend to propagate throughout the plant. Most processes have some kind of surge capacity, but upsets still tend to move around.”
Sometimes this movement makes it hard to tie a symptom back to a cause because the two may not seem related at first glance. That’s where the real detective work begins. Fortunately, there are many tools available that can make your job easier.
Once you are convinced that the most basic concepts of the process itself are not at fault, problems can be separated into four main categories:
• Process design and control strategy;
• Hardware selection and sizing;
• Equipment malfunctions; and
• Variability.
While these overlap in some areas, it is useful to begin to separate problems and their associated symptoms for analysis.
Design and strategy
Control strategy may be a problem more often than users tend to realize. The fact that a control system is installed and may even be relatively new is no assurance that it can actually run the process the way it is supposed to. If the architects did not get the system tuned well enough for all the loops to run in automatic, the basic strategy could be wrong. This leaves operators trying to bridge weak links in the chain by running some loops in manual. Such situations are not difficult to detect, since some operators and shifts will run better than others. (More on that topic later.)
There are other types of strategy problems that are more subtle but just as problematic. For example, some continuous processes tend to oscillate. When you begin to look for the source, you may find that there are discrete or batch-like elements that can disrupt the bigger picture if you don’t make appropriate compensation. Bob Rice, Ph.D., director of solutions engineering for Control Station, suggests one example: “Think about a cement plant where you have 10- or 15-minute feed patterns from different silos, and you get a cycle within that bin through filling and emptying. You’re going to get a natural oscillation in the feed based on how much is in the bin. That kind of oscillation can make its way through the whole plant, and you’ll never get rid of it entirely because it’s kind of a batch operation. To minimize that kind of oscillation, you have to deal with it as early as possible. You can’t always fix it, but you should at least know where it comes from. Then you can find the closest thing to it that you can correct, and mitigate some of the issues. It’s finding where the root cause is and finding the most economical way of fixing it.”
Sometimes an oscillation source isn’t so easy to find. The key to locating it may involve finding all the elements that it affects. “When you have something that appears as a continuous cycle, the key thing to know is that the source of the upset and all the things that are affected by it all oscillate at the same frequency or the same period,” advises Buckbee. “That’s the big clue. What’s the period of oscillation? Nowadays, you can do a massive Fourier transform analysis, which breaks down every signal in the plant into its component frequencies, then sort by frequencies of oscillation and say, ‘If I have this cycle that’s affecting my finished product, and it comes at a period of six minutes per cycle, what other things are also oscillating at six minutes?’ With a little bit of process knowledge, you can pin that down very quickly to the original source. Of all these loops that cycle at six minutes, this one is farthest upstream so that’s the one you look at first.”
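Buckbee's frequency-matching idea can be sketched in a few lines. Everything below is invented for illustration: the tag names, the one-second sample rate, and the signals themselves, which in practice would come from a plant historian.

```python
import numpy as np

def dominant_period(signal, dt):
    """Return the strongest oscillation period in a signal (same time units as dt)."""
    detrended = signal - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=dt)
    peak = 1 + np.argmax(spectrum[1:])  # skip the DC bin
    return 1.0 / freqs[peak]

# Two made-up tags sampled once per second: an upstream flow with a
# six-minute (360 s) cycle, and a downstream temperature inheriting it.
rng = np.random.default_rng(0)
t = np.arange(0, 7200, 1.0)
upstream_flow = 5.0 * np.sin(2 * np.pi * t / 360) + rng.normal(0, 0.5, t.size)
downstream_temp = 2.0 * np.sin(2 * np.pi * (t - 40) / 360) + rng.normal(0, 0.5, t.size)

for name, sig in (("upstream_flow", upstream_flow), ("downstream_temp", downstream_temp)):
    print(f"{name}: dominant period ~ {dominant_period(sig, 1.0):.0f} s")
```

Running this over every tag in the plant and sorting by dominant period turns Buckbee's question — "what else is oscillating at six minutes?" — into a simple grouping exercise; process knowledge then identifies which of the matches is farthest upstream.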
The challenge when performing such an analysis is realizing that the source may be farther away than you think, and it may not always be upstream. Tom Kinney, director of solutions development for Invensys Operations Management, cites one example: “In one case at a refiner on the West Coast, the main reactor temperature control in the FCC (fluidized catalytic cracker) was cycling. It didn’t matter what kind of tuning they did around the reactor or regenerator, this thing would cycle. We used some analytical tools and techniques, and they pointed to a valve on the back end of a main fractionator, several major pieces of equipment downstream. But since this was all largely vapor phase, the variation in pressure backed up, and it caused a variation in temperature. We found a valve positioner on the back end of that main fractionator that hadn’t been calibrated since it was installed in 1955. That was the cause of the problem.”
One of the facts of life in process plants is that few units are still making the product, or handling the throughput, that their designers originally intended. Economic drivers push plant owners to run units well beyond expected capacities, making deficiencies visible that may have gone unnoticed under more moderate use.
“I’ve never seen a process that’s sized perfectly,” Rice observes. “Maybe it was perfect when they first started, but two weeks later they’re trying to run twice the throughput it was designed for. Or, they’ll design a pump with tons of overcapacity and never reach it. Either of those is difficult because if your equipment is oversized, you tend to be running close to its lower bounds, and if it’s undersized you’re running full out. Either way, you have very little control.”
Sizing isn’t the only issue. The type of control device and how it operates has to be appropriate for the nature of the process and any upsets it may see. Rice adds, “If you have a frequency within your process—like an oscillating process that has a frequency of five seconds but a time constant on your valve, pump, or whatever your control element is that’s 20 seconds—you’ll never be able to reject a frequency that’s faster than your controller output. Your final control element has to be faster than the disturbance. Otherwise you’ll still be recovering from the last disturbance when a new one hits.”
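Rice's point can be checked with the standard amplitude ratio of a first-order lag, 1/√(1+(ωτ)²), which says how much of a sinusoidal command actually makes it through an element with time constant τ. The numbers below are the ones from his example, plus a hypothetical faster element for contrast:

```python
import math

def first_order_gain(tau, period):
    """Amplitude ratio of a first-order lag driven by a sinusoid of the given period."""
    omega = 2 * math.pi / period
    return 1.0 / math.sqrt(1.0 + (omega * tau) ** 2)

# A 5 s disturbance against a 20 s final element, per Rice's example,
# versus a hypothetical 1 s element.
print(f"tau = 20 s: gain = {first_order_gain(20.0, 5.0):.2f}")  # ~0.04: valve barely moves
print(f"tau =  1 s: gain = {first_order_gain(1.0, 5.0):.2f}")   # ~0.62: usable authority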
Things that simply don’t work can make life miserable. If a plant isn’t being maintained, production will be a constant struggle to bridge the weak links and work around equipment that isn’t doing its job. Solving that kind of basic maintenance problem goes beyond this discussion, but even if your equipment generally works, a single pressure sensor knocked out of calibration or a control valve that begins to stick can leave operators scratching their heads.
“If you want to have processes that are stable, the controls have to work,” says Herman Storey, process consultant. “Smart diagnostics will help you a lot, because at least you’ll know what’s broken. Otherwise you won’t know without a huge amount of manual effort. And that’s on top of all the things that can go wrong in the process—fouling, catalyst degradation, and mechanical wear and tear.” While this is true, Storey points out that very few companies use asset management programs to their best advantage even though they can provide huge benefits.
At the same time, control system companies are making constant advances in how operators can see diagnostic information when they decide to use it. Ben Mansfield, marketing manager, PlantPAx system for Rockwell Automation, explains how a new diagnostic system operates. “We can show you at a high level where we have an alarm, an irregularity, or even a device configuration error with a little icon,” he says. “As you drill in, that icon persists and leads the operator to the unit display. I might see a low-flow alarm, and it guides me to the display that it’s on, and ultimately I can bring up the faceplate and see that I have a process variable out of range, that I’ve hit a high limit, or something like that. I might also get back some enhanced diagnostics from the instrument that tells me more details—there’s a slug of air in the pipe, for example. Traditionally, I wouldn’t have known that. I’d have just gotten 22 mA, or 3.5 mA, and I wouldn’t know what was causing that problem. Now I can get some additional diagnostics thanks to the instrumentation technologies. The HMI leads operators directly to the problem.”
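The 3.5 mA and 22 mA values Mansfield mentions are conventional fault currents on a 4-20 mA loop. A minimal first-pass health check might classify a reading using NAMUR NE43-style thresholds — the exact limits below are the NE43 recommendations and should be adjusted to match what your instruments are actually configured to signal:

```python
def classify_current(ma):
    """Rough 4-20 mA loop health check using NAMUR NE43-style thresholds."""
    if ma <= 3.6:
        return "fail-low (device fault or broken loop)"
    if ma >= 21.0:
        return "fail-high (device fault)"
    if ma < 3.8 or ma > 20.5:
        return "out of range / saturated"
    return "valid"

for reading in (3.5, 12.0, 22.0):
    print(f"{reading} mA -> {classify_current(reading)}")
```

This only tells you *that* the device has declared a fault; the enhanced diagnostics Mansfield describes are what tell you *why*.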
Such sophisticated diagnostics can tell you a huge amount of information about what’s going on in the process beyond the individual device’s condition. A flowmeter’s behavior can suggest that it is becoming clogged or a two-phase flow is moving through it, but only if you have the ability to access that information.
Over the last few years, the idea of variability in process manufacturing has been applied primarily to raw materials and feedstocks. One major example is oil, as refiners have had to adjust to the fact that the crude supplies they counted on may not be available or could be too costly. Often, supply sources that are more difficult to process are substituted, which bring new control problems. If variability in feedstocks is really an issue, a plant has to have appropriate instrumentation to measure the specific attributes that are at fault. That way, appropriate correction strategies can be built into the control approach.
Variability can have internal causes as well, and one of the first places to look is plant utility systems, since they can interconnect multiple units. “Upsets can start in the boiler feedwater process,” says Buckbee. “Those upsets propagate downstream and affect every energy user in the plant. The very nature of a common utility is that it’s used by multiple parts of the plant, so that when one unit comes online and starts drawing steam, pressure in the header dips and that leads to upsets in other parts of the plant.”
Many utility-related problems, such as steam header pressure swings, are easy to see, but more subtle effects can emerge in unexpected places. If someone who is trying to trace the source doesn’t look beyond the point where the problem shows up, he or she may never find it. “People focus on that part of the process, when in fact the problem is in another part of the process,” Kinney notes. “It can quite often be a problem in the utility system—the steam system, or the condensate system, or the cooling water system—which is global and affects a lot of areas in the plant. There may not be a direct or obvious connection at first when you look at local controls where the problem is manifesting itself.”
In some cases, the plans that companies implement to reduce energy consumption or reduce waste streams bring new problems. The plan itself may be effective and reduce resource use, but there are side effects. For example, one unit may capture waste heat from another unit. While this boosts efficiency, it also connects two processes that have no other reason to be connected, and this can provide a conduit for an upset to spread. The more interconnection, the more ways there are to propagate problems.
Buckbee adds, “People understand the basics of interaction, but it becomes very difficult with recycle loops and reuse of heat. Given the complexity of a modern process, it becomes very difficult to think about the bigger picture of where upsets are coming from. You’d be amazed at how many plants don’t measure ambient temperature and feed it into their control system. If you look at the oil and gas and petrochemical plants down on the Gulf Coast, one of the biggest upsets that hits those producers is a thunderstorm. Suddenly the temperature changes, barometric pressure changes, and rain hits the outside of those uninsulated process units. That’s an important thing to track, if only so you can eliminate it as a cause. If you’re blaming your raw material feed for what is actually a change in the weather, you’re wasting your time.”
The right tools
There are two major classes of tools that make problem hunting easier: process simulation, and loop performance and interaction analysis. Both of these provide important forensic information for diagnosis and for testing possible solutions.
Loop performance and process interaction tools use mathematical analysis to determine how loops should be behaving and how different parts of a process unit or larger plant are connected. These connections are not necessarily obvious, but they provide the transmission lines for upsets and cycling to move from place to place.
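Commercial interaction-analysis packages do far more, but the core idea — scoring how strongly two tags move together, allowing for transport delay — can be sketched with normalized cross-correlation. The signals, tag names, and the 30-sample delay below are all invented for illustration:

```python
import numpy as np

def peak_correlation(x, y, max_lag):
    """Largest |normalized cross-correlation| between x and y within +/- max_lag samples."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            c = np.mean(x[:-lag] * y[lag:])
        elif lag < 0:
            c = np.mean(x[-lag:] * y[:lag])
        else:
            c = np.mean(x * y)
        best = max(best, abs(c))
    return best

# Made-up tags: cooling water temperature drives reactor temperature
# with a 30-sample transport delay; the third tag is unrelated noise.
rng = np.random.default_rng(1)
t = np.arange(2000)
cooling_water = np.sin(2 * np.pi * t / 200) + rng.normal(0, 0.3, t.size)
reactor_temp = 0.8 * np.roll(cooling_water, 30) + rng.normal(0, 0.3, t.size)
unrelated = rng.normal(0, 1.0, t.size)

print(f"cooling vs reactor:   {peak_correlation(cooling_water, reactor_temp, 60):.2f}")
print(f"cooling vs unrelated: {peak_correlation(cooling_water, unrelated, 60):.2f}")
```

Computing such a score for every pair of tags yields a crude interaction map: a high score between two loops half a plant apart, as in the example below, is exactly the kind of non-obvious connection these tools surface.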
Buckbee recalls an example of where that helped one user. “In a plastics plant in Alabama, a guy was trying to chase down a variation in hydrogen pressure,” he says. “The process interaction map pointed to the cooling tower temperature, and he thought, ‘That’s half a mile away in a completely different part of the plant.’ But once he looked at his process diagrams, it made sense. With a simple fix to the strategy of how the cooling tower fans operated, he stabilized not just the hydrogen loop, but a large part of the plant that was swinging around chasing the variations in cooling water temperature. Without that kind of analysis tool, he could have spent weeks trying to figure it out.”
Once you think you have found the source of a problem or you want to see if a specific solution will work, a process simulation platform can help you test your theory before trying it in the real plant. It can also give you the ability to review your underlying control strategy and make sure that it is capable of running the plant correctly. If there are problems at that level, your attempts to fix more superficial matters will ultimately prove fruitless.
An effective process simulator delivers the ability to play “what if” games and fiddle with the process. Usually this capability is used to train operators, but you can make a valve stick or an instrument malfunction and see how that will affect behavior. In some cases, it may also help point out the solution.
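A real simulation platform models the full thermodynamics, but the "make a valve stick and see what happens" idea can be shown with a toy level loop. Every number here is invented: a first-order tank under PI control, a load disturbance partway through, and an optional valve fault.

```python
def simulate(valve_sticks=False, steps=800, dt=1.0):
    """Toy level loop: PI controller, first-order tank, optional stuck valve."""
    level, setpoint = 60.0, 60.0
    valve, integral = 30.0, 600.0           # start the loop at steady state
    kp, ki = 2.0, 0.05
    for k in range(steps):
        error = setpoint - level
        integral += error * dt
        if not valve_sticks:                # a stuck valve ignores the controller
            valve = max(0.0, min(100.0, kp * error + ki * integral))
        demand = 0.05 if k < 200 else 0.07  # load disturbance hits at step 200
        level += (0.1 * valve - demand * level) * dt
    return level

print(f"healthy loop settles at: {simulate():.1f}")      # controller rejects the disturbance
print(f"stuck valve settles at:  {simulate(True):.1f}")  # level sags and stays low
```

With a working valve the controller absorbs the demand change and the level returns to setpoint; with the valve frozen, the level drifts to wherever the process physics takes it. That is the forensic value of a simulator: you can inject one fault at a time and compare the signature against what the real plant is doing.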
Reaz Kabir is simulation business leader for Honeywell Process Solutions. He has seen the use of simulators as forensic tools growing. “It didn’t used to be a common thing, but it’s happening more often. In 2008 we worked with a plant that was making a chemical in high demand and high volumes. The problem in the plant was that they could not achieve required purity levels to meet customer specs of 99.8% purity. Despite all their efforts, they could not get the plant to give them what they needed. After working with the simulator for just one day, they made a recommendation to increase the reflux ratio on one of the distillation columns. That simple solution solved the problem without losing any product. They had tried for more than a month to fix things by trial and error.”
The right resources
While the tools to do the work are available, companies must have the management processes to use them. This is not necessarily a call for more individual initiative, but instead involves providing training and making sure people with the right skills are available and effectively utilized. As Storey describes it, “You have to have the people, the organization, the management, and the culture that says, ‘We’re going to go and make this stuff work,’ and do it. The extra capital expense is negligible, but the extra skills and culture are fairly significant. The tools are continuing to improve, but the people issues aren’t getting any easier. We have more opportunities, better and more reliable equipment, but fewer of the skills and resources needed to use those tools. It’s technically possible to do a better job today than we’ve ever been able to do. It’s just that from a financial and resources standpoint, it’s not getting any easier.”
Peter Welander is process industries editor. Reach him at email@example.com.