Process safety safeguards must be capable of performing their required function to stop the progression of a loss event. Because a process upset proceeds at a rate determined by the process design, speed is a critical part of the functional specification. A safeguard that acts too slowly relative to the loss event provides no protection at all. Determining just how fast a safeguard must complete its action is an important project deliverable that impacts safeguard selection and setpoint specification.
The first task in specifying the safeguard response time is to look to the process design to determine how much time it will allow. The process safety time (PST) is the time between the process failure and the loss event that would occur if there were no safeguards. The PST may be only seconds, which limits the types of safeguards that can be effective. On the other hand, the process may take days to transition from the initial failure to the loss event, allowing for a sequence of safeguards.
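As a simple illustration, consider a hypothetical vessel that begins to pressurize at a roughly constant rate once an upset (say, a blocked outlet) occurs. The sketch below computes the PST under that constant-rate assumption; every number in it is invented for illustration, not taken from any real design.

```python
# Minimal PST estimate for a hypothetical blocked-outlet upset,
# assuming a constant pressurization rate. All values are illustrative.

pressure_at_failure = 150.0    # psig when the upset begins (assumed)
loss_event_pressure = 300.0    # psig at which the loss event occurs (assumed)
pressurization_rate = 5.0      # psig per minute during the upset (assumed)

# PST: time from the process failure to the loss event with no safeguard acting
pst_minutes = (loss_event_pressure - pressure_at_failure) / pressurization_rate
print(f"Process safety time: {pst_minutes:.0f} minutes")  # -> 30 minutes
```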
A variety of engineering practices may be used to justify the selected PST. Expert judgment, based on individual or industry experience, can be a very useful starting point and may be the only method available for the initial PST estimate. Expert judgment alone, however, may leave the operating facility with a weak rationale to support future management of change. Extrapolation or mass-and-energy balances can be used to determine the PST for a specific process design. These simpler techniques may be insufficient for evaluating a complex process upset and are unlikely to reveal short-term transients during abnormal operation. Process simulation uses first-principles models to determine the response of multiple process parameters to one or more process upsets. A dynamic simulation incorporates the actual process equipment, including the piping configuration, and the process chemistry into the model. With the capability to model complex reactive or multi-phase systems, a dynamic simulation is more likely to reveal short-duration transients and to track the actual process conditions during a simulated upset. Tuning the model with startup and normal operating data increases confidence in the simulation's accuracy. Operator testing on a simulator can also increase alarm response effectiveness.
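To make the mass-and-energy balance approach concrete, the sketch below estimates the PST for a hypothetical loss-of-cooling upset by extrapolating a lumped heat-up rate to the temperature at which the loss event occurs. All values and the constant-properties assumption are illustrative; as noted above, this kind of estimate cannot reveal the short-term transients a dynamic simulation would capture.

```python
# Simple energy-balance PST estimate for a hypothetical loss of cooling.
# Lumped system, constant properties; all numbers are assumed for illustration.

heat_input_kw = 250.0        # net heat input after cooling is lost (assumed)
mass_kg = 8000.0             # inventory in the vessel (assumed)
cp_kj_per_kg_k = 2.1         # specific heat of the contents (assumed)
t_operating_c = 80.0         # normal operating temperature (assumed)
t_limit_c = 140.0            # temperature at which the loss event occurs (assumed)

# Heat-up rate in K/s: kW / (kg * kJ/kg-K) = K/s
heatup_rate = heat_input_kw / (mass_kg * cp_kj_per_kg_k)
pst_seconds = (t_limit_c - t_operating_c) / heatup_rate
print(f"Estimated PST: {pst_seconds / 60:.0f} minutes")  # -> about 67 minutes
```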
The second task is to specify the response time for each safeguard that must complete its action in order to stop the loss event. The response time is the time available for the safeguard to act, given its setpoint and the process dynamics. The least conservative setpoint is determined from the time lags inherent to the safeguard hardware, application program delays, and measurement error. Early in the project, simple tables of generic hardware delays and measurement error might be used; later project stages replace these early estimates with specific information as it becomes available.
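The sketch below shows how such a least conservative setpoint falls out of the arithmetic: work back from the loss-event limit by the pressure rise that occurs while the safeguard detects, decides, and acts, then back off further by the measurement error. The delay and error values stand in for the generic tables mentioned above and are purely illustrative.

```python
# Deriving the least conservative trip setpoint for a hypothetical
# high-pressure safeguard. Generic delays and errors are placeholders
# for project-specific data; all numbers are assumed.

loss_event_pressure = 300.0   # psig limit (assumed)
ramp_rate = 5.0               # psig/min pressurization rate (assumed)

sensor_lag_min = 0.1          # transmitter response (generic estimate)
logic_delay_min = 0.2         # logic solver scan plus application program (generic)
valve_stroke_min = 1.5        # final element travel time (generic)
measurement_error = 6.0       # psig transmitter uncertainty (assumed)

# Pressure rise while the safeguard completes its action
response_time_min = sensor_lag_min + logic_delay_min + valve_stroke_min
pressure_rise = ramp_rate * response_time_min

# Any setpoint higher than this and the action finishes after the loss event
least_conservative_setpoint = loss_event_pressure - pressure_rise - measurement_error
print(f"Least conservative setpoint: {least_conservative_setpoint:.0f} psig")  # -> 285 psig
```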
It is not good practice to design a safeguard based on the least conservative value. The generally accepted margin is to ensure that the safeguard completes its action in half the time allowed by the process dynamics. This design margin increases the likelihood that the safeguard is effective even when real-world conditions slow the response. On the other hand, a setpoint too close to the normal operating range puts a facility at risk of more frequent nuisance alarms and trips. Nuisance trips cause lost productivity and often cascade into other events. Ensuring enough time to act without causing nuisance events is a balancing act that is often negotiated among operations, process safety, and engineering. Dynamic simulation can provide detailed data to make this negotiation easier and the outcome more consistent.
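Continuing the hypothetical numbers from the sketch above, the 50% margin rule can be expressed as a simple check: given a candidate setpoint, the time the process allows between the worst-case trip point and the loss event must be at least twice the safeguard's response time.

```python
# Design-margin check for the hypothetical high-pressure safeguard above:
# the safeguard must complete its action in half the time the process allows.

loss_event_pressure = 300.0   # psig (assumed)
ramp_rate = 5.0               # psig/min (assumed)
response_time_min = 1.8       # sensor + logic + final element (from the sketch above)
measurement_error = 6.0       # psig (assumed)

def margin_ok(setpoint: float) -> bool:
    """True if the safeguard acts within half the time the process allows."""
    # Time from the worst-case actual trip point to the loss event
    time_available = (loss_event_pressure - setpoint - measurement_error) / ramp_rate
    return response_time_min <= 0.5 * time_available

print(margin_ok(285.0))  # least conservative setpoint -> False, no margin left
print(margin_ok(270.0))  # more conservative setpoint -> True, margin satisfied
```

Note the trade-off described above: lowering the setpoint from 285 to 270 psig buys the design margin, but it also moves the trip closer to normal operation, which is exactly where nuisance alarms and trips become a concern.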
Contact SIS-TECH to learn more about how a dynamic simulator can be used to improve process control, determine the process safety time, increase operator effectiveness, reduce operator response time, validate your alarms, and certify your operators.