Aside from celebrities such as the Keystone XL, most of the US’s vast oil and natural gas pipeline network is invisible. Just as invisibly, the world of pipelines is undergoing an informational overhaul to improve performance, minimize ruptures and spills, and increase safety, making it another example of data-enabled infrastructure.
A vast network
Around 55,000 miles of crude oil trunk lines connect regional markets in the US. Oil wells connect to this backbone through 40,000 miles of gathering lines. And the end products of refineries travel through 95,000 miles of pipelines.
According to the U.S. Energy Information Administration, there are some 305,000 miles of interstate and intrastate natural gas transmission pipelines in the US, with an additional 1.25 million miles of natural gas distribution pipeline.
The pipelines are mostly coated steel pipe buried underground. Oil pipelines typically transport liquid at pressures between 600 and 1000 psi, while natural gas pipelines go up to 1500 psi. These high pressures are why ruptures can be so serious, and why monitoring and detecting flaws in advance is so important, particularly given the age of some of these pipes. According to the US DOT, more than half are at least 50 years old.
Ruptures and leaks
Pipelines tend to enter the public consciousness only when there is a leak, leading to a toxic spill, or even an explosion that costs lives.
Yet pipelines are by far the safest way to move large amounts of petroleum, and really the only way to transport natural gas. But an accident, when it happens, can be serious. While a derailed tanker train can only spill as much oil as it is carrying, a ruptured pipeline can continue to pump. Thus, prompt detection and shutdown are essential.
The state of the pipe
The industry is incorporating sensing technology to monitor pressure, flow, compressor condition, temperature, density, and other variables. Large ruptures often start as pinhole leaks that visual inspection can easily miss until they become serious. Acoustic sensors can detect a breach by a variation in the acoustic signature. Fiber optic sensors detect deformations in the pipe walls.
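To illustrate the acoustic approach, here is a minimal sketch of signature-based leak detection: compare a live reading’s frequency signature against a baseline recorded on an intact segment, and flag a large deviation. The spectra, bin count, and alert threshold below are all hypothetical values, not actual field parameters.

```python
# Sketch of acoustic leak detection: flag a sensor reading whose frequency
# signature deviates from a recorded baseline. All values are illustrative.

def signature_deviation(baseline, live):
    """Mean absolute deviation between two acoustic spectra (same bins)."""
    return sum(abs(b - l) for b, l in zip(baseline, live)) / len(baseline)

def leak_suspected(baseline, live, threshold=0.15):
    """Return True if the live signature drifts past the alert threshold."""
    return signature_deviation(baseline, live) > threshold

# Baseline spectrum recorded on an intact pipe segment (arbitrary units).
baseline = [0.9, 0.7, 0.4, 0.2, 0.1]

# A pinhole leak adds high-frequency hiss, raising the upper bins.
live = [0.9, 0.7, 0.5, 0.6, 0.5]

print(leak_suspected(baseline, live))  # deviation exceeds threshold: True
```

A production system would work on continuous spectral data and tuned thresholds, but the core logic, comparing a live signature to a known-good one, is the same.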
Sensors are also sent down the pipes for inspection. The most popular is a robotic instrument called a smart pig. The name comes from the squealing noise the original models (bundles of straw wrapped in wire, used for cleaning out wax and other contaminants) made as they traveled down the pipe. Depending on the model, smart pigs detect cracks and weld defects through magnetic flux leakage or shear wave ultrasound, mechanically measure the roundness of the pipe to detect crushing, or measure pipe wall thickness and metal loss through compression wave ultrasound.
From SCADA to IoT
The system that integrates this information on an operational level is called SCADA (Supervisory Control and Data Acquisition). It gathers and monitors data and then acts on it, for example by turning a valve or changing the set point on a flow controller. SCADA is common in industrial operations that require real-time control of system operations.
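A toy sketch of that “monitor and respond” pattern: poll a reading, compare it against a limit, and actuate if it is out of bounds. The tag name, pressure limit, and actuator interface here are hypothetical, not part of any real SCADA product.

```python
# Toy sketch of SCADA-style "monitor and respond" logic: check a pressure
# reading against an alarm limit and actuate a valve when it is exceeded.
# Tag names, limits, and the actuator callback are all hypothetical.

MAX_PSI = 1000  # illustrative alarm ceiling for a crude oil trunk line

def respond(tag, pressure_psi, close_valve):
    """Compare one reading against the limit and actuate if needed."""
    if pressure_psi > MAX_PSI:
        close_valve(tag)
        return "valve_closed"
    return "normal"

actions = []
status = respond("PT-104", 1075, close_valve=actions.append)
print(status, actions)  # over the limit, so the valve at PT-104 is closed
```

The point of the sketch is the reactive shape of classic SCADA: nothing happens until a reading has already crossed a limit, which is exactly what the IoT layer described below improves on.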
In industrial implementations, the Internet of Things is built on top of the existing SCADA system, allowing operators to move from “monitor and respond” to a predictive, proactive approach that supports better decision making.
Moving up from data: the example of PG&E
Over the past five years, Pacific Gas & Electric (PG&E), which operates 6,700 miles of gas transmission pipeline and 42,000 miles of gas distribution pipeline in northern and central California, has worked intensively to become predictive and proactive in the way it manages its network. According to Mel Christopher, Senior Director of Gas Systems Operations, this will be a multistep journey. These steps can serve as a model for all such IoT implementations.
Situational awareness creates intelligence out of the data, with better visualizations that enable operators in the Gas Control Center to see changes in the system quickly.
Situational intelligence follows, and integrates geospatial and temporal data to give a precise understanding of specific events as they happen.
Predictive analytics finally take all of that real-time data and pull out patterns that signal approaching abnormal events, allowing for proactive responsiveness.
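The predictive step above can be sketched in a few lines: instead of alarming only after a reading crosses a limit, fit the recent trend and warn if the limit is projected to be crossed soon. The readings, limit, and look-ahead horizon are hypothetical, and a real system would use far more sophisticated models than a linear extrapolation.

```python
# Sketch of predictive alerting: project the recent trend forward and warn
# before the alarm limit is actually crossed. Values are illustrative only.

def projected_breach(readings, limit, horizon=5):
    """Extrapolate the average per-sample change across the window; return
    True if the limit is expected to be crossed within `horizon` samples."""
    slope = (readings[-1] - readings[0]) / (len(readings) - 1)
    return readings[-1] + slope * horizon > limit

# Pressure is still under the 1000 psi limit but climbing steadily.
recent = [940, 950, 962, 975, 988]
print(projected_breach(recent, limit=1000))  # trend says breach ahead: True
```

Even this crude extrapolation captures the shift the article describes: the alert fires while every individual reading is still within limits.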
PG&E’s own pipeline sensors can’t provide all the needed data, so the company also gathers data from outside sources to gain the broadest view of developing risks. Data from the Army Corps of Engineers, Caltrans, CAL FIRE, and other third parties is an input to PG&E’s proprietary system, called TAMI (tactical analysis mapping integration).
“In late September 2015, three huge fires, the Valley, Butte, and Rough Fires, were close to our pipelines and facilities,” said Christopher. “We used TAMI to monitor the movement of the fire lines and the wind direction, and it provided alerts whenever the fire line was within a certain distance of a facility. This would trigger an isolation plan we had already built using data from TAMI.”
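The proximity-alert logic Christopher describes can be sketched as a simple geofence check: flag any facility within a set distance of a hazard point. The coordinates, facility names, and alert radius below are invented for illustration; a real system like TAMI would run this continuously against live GIS feeds.

```python
# Sketch of a proximity alert in the spirit of TAMI: flag facilities within
# a set distance of a hazard point (e.g., a fire line). All coordinates,
# names, and the alert radius are hypothetical.

import math

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def alerts(facilities, hazard_point, radius_km=10):
    """Return the facilities whose isolation plan should be triggered."""
    return [name for name, loc in facilities.items()
            if distance_km(loc, hazard_point) <= radius_km]

facilities = {"Valve Station A": (38.60, -122.60),
              "Compressor B": (39.10, -121.90)}
print(alerts(facilities, hazard_point=(38.65, -122.55)))
```

Only the nearby facility is flagged; in the scenario Christopher describes, an alert like this is what triggered the pre-built isolation plan.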
Invisible system, visible improvements
Continuously monitoring hundreds of thousands of miles of pipeline is no easy task, nor is responding effectively to ruptures and other malfunctions. For the oil and gas industry, the IoT makes that task easier.