Avoiding nuisance trips from SIFs

The actions a safety instrumented function (SIF) takes to correct a problem and avoid escalation can take various forms, and for a serious situation, that action can be hugely disruptive.

By Jack Smith September 30, 2016

Within a process manufacturing context, safety incidents can take many forms. Some are small, such as a minor chemical spill. Others are huge and could result in a fire or explosion involving thousands of gallons of petroleum products. In the same way, the actions a specific safety instrumented function (SIF) takes to correct a problem and avoid escalation can take various forms. For a small problem, a relief valve may be opened to release pressure in a tank. For a serious situation, however, the action the SIF takes can be hugely disruptive. If a major incident is underway, a drastic action may be appropriate, such as effectively shutting down an entire process unit. To avoid a catastrophe, such actions are absolutely necessary, but they cost enormous amounts of money and disrupt production even if the process shuts down exactly as it is supposed to with no damage to equipment. When the problem is fixed, the unit can be restarted, but critical time has been lost.

Now, imagine the same situation where a unit tripped and went into shutdown mode because of a malfunction in one of the safety sensors. The SIF went into action because the pressure sensor mistakenly reported a problem when none actually existed. When the problem and its result are small, such events are called nuisance trips. When the event is big, however, the result is almost as disruptive as an actual emergency.

Obviously, a plant wants its SIFs to do their jobs. Reliable SIFs need to act when an actual problem is developing, and only when an actual problem is developing. To avoid nuisance trips, some plants install redundant sensors on critical SIFs to prevent a system from going into shutdown unnecessarily. These approaches use voting schemes that force the redundant sensors to act as a group. A common method is to use three sensors monitoring the same process variable. For the sake of this oversimplified example, imagine the application is a large pipeline compressor, which is not easy to shut down and restart. Nonetheless, in an emergency it is critical to shut it down to avoid over-pressurizing the line. To avoid unnecessary trips, the safety engineer installs a group of three pressure sensors to monitor its output.

Why multiple sensors? If the pressure exceeds the trip point, a sensor is supposed to open the circuit, causing the compressor to shut down. To avoid unnecessary trips caused by a sensor failure, the individual devices are wired as shown in Figure 1. Each is effectively a double-pole, single-throw (DPST) switch, and their contacts are wired into the compressor's power circuit as shown. If one of the sensors malfunctions and opens, there is still a path for power to reach the compressor. If any two or all three sensors open, power is cut off. Two sensors have to report the same problem simultaneously for the SIF to act and shut down the process. This is called a 2oo3 (two out of three) scheme, and it reduces the likelihood of a nuisance trip because two devices would have to fail at the same time to cause a false trip. There are other voting schemes used in various applications, but 2oo3 is common because of its simplicity and effectiveness.
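To make the voting logic concrete, the following is a minimal sketch in Python. It is not from the article; the trip point, readings, and names such as vote_2oo3 are hypothetical. It simply shows how a 2oo3 comparison tolerates one failed sensor while still tripping when two channels agree.

# Minimal sketch of 2oo3 (two-out-of-three) voting logic.
# Illustrative only; a real SIF runs in a certified logic solver,
# and the trip point and readings below are hypothetical.

TRIP_POINT_PSI = 1200.0  # hypothetical high-pressure trip setting

def sensor_votes(readings, trip_point=TRIP_POINT_PSI):
    """Return a list of booleans: True where a sensor demands a trip."""
    return [reading >= trip_point for reading in readings]

def vote_2oo3(readings):
    """Trip only if at least two of the three sensors agree."""
    votes = sensor_votes(readings)
    return sum(votes) >= 2

# One sensor failed high (1500 psi) while the process is healthy:
print(vote_2oo3([1500.0, 980.0, 975.0]))   # False -> no nuisance trip
# A genuine overpressure seen by two healthy sensors:
print(vote_2oo3([1350.0, 1340.0, 990.0]))  # True  -> shut down the compressor

In the first case the unit stays online despite one sensor failing high; in the second, two healthy sensors see the same overpressure and the trip occurs.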

So far, the example deals only with the sensors, but it can be extended to other parts of the SIF. In the real world, these systems can be far more elaborate, using different forms of redundancy within the logic solver and networks. Any part of a SIF can malfunction, so it is critical to ensure every part of the system is appropriately protected. A sophisticated safety system can even simulate redundancy where it is not practical to install multiple sensors.

In many respects, the antithesis of redundancy is common cause: a situation where the same problem affects all the sensors the same way. The kind of malfunction a 2oo3 voting scheme is designed to avoid is a simple mechanical, electrical, or electronic failure. It assumes the cause is unique and affects only one of the redundant units. That concept is fine as far as it goes, but in many cases, failures are caused by something in the process. For the sake of argument, imagine some catalyst beads have escaped their holder in a reactor and become entrained in the process fluid. They are carried through the piping and collect at various points. If they obstruct an element in the piping where the sensors are mounted, they might affect all the safety sensors in the same way. So whether there is one sensor or five, they will likely all suffer the same problem.
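As an illustration of the common-cause point, here is a short continuation of the hypothetical sketch above (again, the scenario and numbers are assumptions, not from the article). If one process-side problem leaves all three identical sensors reporting the same stale value, the vote never reaches two and the SIF stays silent even during a real overpressure.

# Sketch of how a common-cause problem can defeat identical redundant sensors.
# Continues the hypothetical 2oo3 example above; all values are illustrative.

def vote_2oo3(readings, trip_point=1200.0):
    return sum(r >= trip_point for r in readings) >= 2

actual_pressure = 1400.0  # a genuine overpressure condition in the process

# Plugged impulse lines (e.g., entrained catalyst beads) leave all three
# identical sensors stuck at the last healthy reading they saw:
stuck_readings = [1000.0, 1000.0, 1000.0]
print(vote_2oo3(stuck_readings))    # False -> the SIF never acts

# With diverse technologies or mounting locations, it is less likely that
# one cause affects every channel; here only one channel is fooled:
diverse_readings = [1000.0, 1395.0, 1402.0]
print(vote_2oo3(diverse_readings))  # True  -> the trip still occurs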

Prudent investors don’t put all their money in one kind of investment because a single problem can affect all of it. They diversify their investments to avoid a financial common-cause disruption. In the same way, clever safety system designers avoid using only one type of sensor or sensor technology in redundant systems. Different sensor technologies, even when they monitor the same process variable, can be selected with an eye toward their behavior in specific situations. The assumption is not that one technology is better than another, but that different technologies have different operating characteristics.

Using a mix of measuring technologies or even mounting options can create a situation where it is much harder for a single cause to fool multiple different devices. Of course, if one safety sensor does trip, maintenance should be alerted to deal with the problem, whether caused by a basic electrical or mechanical failure, or some problem within the process. Any safety sensor action should be taken seriously, whatever its cause.

Jack Smith, editor, AppliedAutomation, jsmith@cfemedia.com.

This article appears in the Applied Automation supplement for Control Engineering and Plant Engineering.


