Big Technical Data, the Cloud and Data Residency

by Claude Baudoin, Principal Consultant at cébé IT & Knowledge Management. May 2015

The Oil and Gas industry provides an excellent example of the conundrums facing the industrial world as it considers adopting new information technologies.

On the one hand, this should be a dream environment. Data gathering techniques such as seismic studies, measurement while drilling (MWD) and oil well surveys (“wireline logging”) have long produced large amounts of data, which complex software algorithms process to infer where the oil or gas is trapped and what its characteristics are. This has now been augmented with permanent sensors embedded in oil wells, so that data gathering no longer stops when drilling is finished but continues for the life of the well, perhaps 30 to 50 years. As in other examples of the Internet of Things (IoT), there is almost no limit to what big data analytics can discover from this mass of data, which you still have to store and process somewhere.

The data volumes in question and the lack of high-speed networking infrastructure at the point of acquisition (think of the deserts of Saudi Arabia or offshore Brazil) have always been a challenge for data processing. At the end of a seismic campaign, a truck is often required to carry the data cartridges from the boat to a data center. Meanwhile, the data center on the ship has already produced intermediate results and transmitted them over satellite links.
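
A quick back-of-the-envelope calculation shows why the truck still wins. The figures below are illustrative assumptions (a 100-terabyte survey, a 10-megabit-per-second satellite link, a 48-hour shipment), not numbers from any specific project:

    # Back-of-envelope: shipping seismic cartridges vs. a satellite link.
    # All figures are illustrative assumptions, not real project numbers.
    SURVEY_SIZE_TB = 100      # assumed size of one marine seismic survey
    SATELLITE_MBPS = 10       # assumed usable satellite throughput, megabits/s
    TRUCK_TRIP_HOURS = 48     # assumed door-to-door shipping time

    survey_bits = SURVEY_SIZE_TB * 1e12 * 8                      # TB -> bits
    satellite_days = survey_bits / (SATELLITE_MBPS * 1e6) / 86400
    truck_mbps = survey_bits / (TRUCK_TRIP_HOURS * 3600) / 1e6   # effective rate

    print(f"Satellite transfer: about {satellite_days:,.0f} days")
    print(f"Effective truck bandwidth: about {truck_mbps:,.0f} megabits/s")

Under these assumptions the satellite link would need well over two years, while the truck delivers an effective throughput measured in gigabits per second, which is exactly why only the intermediate results travel over the satellite.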

Not every company has the software or the data center capabilities to perform the most extensive analyses, and the software to do so is very expensive and evolves rapidly. This seems to be a match made in heaven for a Software as a Service (SaaS) cloud offering. Yet this idea is only now starting to be explored. For example, the leading oilfield services company, Schlumberger, is offering its Intersect 3-D reservoir simulator, jointly developed with Chevron and Total over a ten-year period, in the cloud.

Part of the challenge lies in the data volumes. To get the data processed in the Schlumberger data centers, you have to get it there. Another key obstacle is the extremely high – some would say paranoid – sense of confidentiality attached to this data, on which multi-billion dollar decisions are based.
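
One partial answer to the confidentiality objection is to encrypt the data on-premises before it ever leaves the company, so the cloud provider stores only ciphertext. Here is a minimal sketch, assuming Python's third-party cryptography package and a hypothetical survey file name:

    # Client-side encryption sketch: data is encrypted before upload, so the
    # cloud provider only ever sees ciphertext. Requires the third-party
    # 'cryptography' package (pip install cryptography). The file name is
    # hypothetical; real seismic volumes would call for streaming encryption.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # stays on-premises, never uploaded
    cipher = Fernet(key)

    with open("survey.segy", "rb") as f:
        ciphertext = cipher.encrypt(f.read())

    with open("survey.segy.enc", "wb") as f:  # this is what goes to the cloud
        f.write(ciphertext)

    # Back on-premises, after retrieval:
    plaintext = cipher.decrypt(ciphertext)

Of course, this only protects the data in transit and at rest: to actually run a simulator on it, the provider must see plaintext at some point, so the trust question never disappears entirely.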

But a factor that was initially neglected, and has become increasingly important in the past decade, is data residency. Depending on where the oil company operates, it may be subject to a host of regulations about what it can do with the data.

Some countries consider their subsurface data to be a sovereign asset: it is merely licensed to the oil company to prospect and develop the oilfield, never ceases to belong to the government, and cannot be exported. Other countries are willing to let the data out of the country to be processed by highly skilled subcontractors… but when the interpreted data comes back, they want to levy a value-added tax, since the data is now more valuable!
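
To see how such rules might surface in a system design, imagine tagging each dataset with a residency class and checking it before any cross-border transfer. The classes, dataset names and country codes below are invented for illustration:

    # Hypothetical residency check before moving a dataset to a cloud region.
    # The policy table and dataset tags are invented for illustration; the
    # real rules come from each country's petroleum and data-protection law.
    RESIDENCY_RULES = {
        "sovereign":       {"export": False, "vat_on_return": False},
        "export_with_vat": {"export": True,  "vat_on_return": True},
        "unrestricted":    {"export": True,  "vat_on_return": False},
    }

    DATASETS = {"survey-A": "sovereign", "survey-B": "export_with_vat"}

    def may_export(dataset: str, origin: str, destination: str) -> bool:
        """True if the dataset may be processed outside its country of origin."""
        if origin == destination:
            return True  # processing in-country is always allowed here
        return RESIDENCY_RULES[DATASETS[dataset]]["export"]

    print(may_export("survey-A", "BR", "US"))  # False: sovereign data stays home
    print(may_export("survey-B", "BR", "US"))  # True, but expect VAT on return

Even this toy version makes the point: residency is a property of the data, not of the infrastructure, and it has to be enforced before the upload, not after.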

Politics, technology and geography thus combine to create a complex situation – and we haven't even mentioned the fear of industrial espionage if the data ends up in a cloud hosted in, say, China, or under the watchful eyes of the U.S. National Security Agency. Surely, the same issues will confront finance, healthcare and other industries.

We have the fancy technology (cloud, big data analytics, IoT, etc.) – now we have to solve the real problems!
