Don’t be fooled by the hype around AI

Production and reservoir engineers are well-acquainted with the constant grind of firefighting that defines so much of well management.

Effective operation of an asset is an ongoing, complex challenge that requires operators to navigate an intricate web of process interdependencies to maximise production rates, maintain asset integrity and maximise ultimate recovery.

Until recently, the upstream industry has largely invested in development projects and drilling activities.

However, at the current $50-55 per barrel price, growth through the drill bit can be less attractive.

Capital projects have either been terminated in favour of acquisitions and divestitures or put on hold, and operational headcount has been reduced.


As a result, the pressure is now on for engineers to anticipate production issues before they happen, to operate proactive and agile business processes, and to maintain a clear, up-to-date view of producing assets so they can make more informed, on-target decisions faster.

But what tools will empower these engineers to get the job done?

In a time when operations are awash in disconnected data and engineers are continually asked to do more with less, emerging technologies such as artificial intelligence (AI) and machine learning are touted as a panacea for operational inefficiencies.

AI and machine learning can provide brilliant macro-level observations and insights; at the equipment and well level, however, they can be time-consuming to implement and lack resilience to change.

Additionally, there is always effort involved in building and training the models, a task performed today largely by data scientists given the complexity and relative immaturity of the available tools.

Context, experience and tacit knowledge all still count. Even when an AI solution is appropriate, the subject matter experts need to be part of the equation.

While AI can be a source of information for engineers, it is only one of many available tools, alongside metering, well and subsurface models, production accounting and more.

Rather than adding another gadget into an already bulging toolkit, engineers need to simplify and consolidate what they do have to achieve something new: operational intelligence.

Operational intelligence

Data is everywhere. SCADA data, production accounting information, drilling & completions information, maintenance and reliability data, well header data, information from spreadsheets – and more.

Not only does this data hold enormous value for any engineering function or optimisation activity (the old adage: you can't optimise what you don't measure); in most cases it is also disjointed, because disparate systems are used to capture it.

Knowing what information to look for and how best to use it becomes an immediate challenge for subject matter experts such as production engineers and technicians.

That’s why it is important to help engineers be more effective individually and as a team. Providing a centralised reference model of all relevant information relating to an asset gives engineers a rapid and reliable foundation for their workflows.


Additionally, we then need to provide the tools for interpretation (calculations), diagnostics (analytics) and surveillance (complex event processing and notifications).

Adding value here means giving engineers self-service capabilities to leverage available data in context and to capture their institutional knowledge and subject matter expertise (rather than leaving it locked in a spreadsheet), so that they don't have to go back to IT to obtain and understand the data.

Shifting the paradigm from ad-hoc or reactionary firefighting activities to proactive, automated surveillance routines with self-service diagnostics opens up a new world of optimisation opportunities.

This allows a single engineer to watch for multiple events, customised to the current operating conditions.

On notification of an important event, users can view their own diagnostic displays to further investigate the cause of the production issue.

All of this allows engineers to understand and interpret data themselves, to spot leading indicators in time to mitigate risk or, in the worst case, reduce its duration – and to do it faster.

AI and machine learning are really just new sources of data, insights and events. All too often, the people selling those solutions skip over the need to include the human intelligence that already exists in the business.

Here’s where a single data environment and exception-based surveillance processes come in.

A single data environment brings together data from disparate sources and connects it to digital representations of the physical assets, giving production engineers and technicians a one-stop, self-service environment for configuring rules and performing key analysis.
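As a sketch of the idea, a single data environment can be as simple as merging records from each source system, keyed on a shared well identifier, into one asset-centric view. The source names, field names and well IDs below are illustrative assumptions, not drawn from any particular product:

```python
# Hypothetical sketch of a "single data environment": flatten records
# from disparate sources (SCADA, production accounting, well headers)
# into one view per well, keyed on a shared well identifier.

scada = {"W-01": {"tubing_pressure_psi": 1800}}
accounting = {"W-01": {"oil_bbl_month": 12400}}
headers = {"W-01": {"field": "North Flank", "spud_year": 2014}}

def asset_view(well_id, *sources):
    """Merge all source records for one well into a single dict."""
    view = {"well": well_id}
    for src in sources:
        view.update(src.get(well_id, {}))
    return view

print(asset_view("W-01", scada, accounting, headers))
```

In a real deployment the merge would run against live historians and databases rather than in-memory dicts, but the contextualisation step – attaching every record to the same digital representation of the asset – is the same.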

Exception-based surveillance processes provide a 24/7 watch over company assets and trigger alerts and tasks when predefined events occur, allowing personnel to manage more wells and fix problems in less time.
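The exception-based pattern itself is straightforward: each rule pairs a condition on incoming well data with a named alert, and only readings that trip a rule demand the engineer's attention. The following Python sketch is purely illustrative – the readings, thresholds and rule names are assumptions:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Reading:
    well: str
    tubing_pressure_psi: float
    flow_rate_bpd: float

@dataclass
class Rule:
    name: str
    triggered: Callable[[Reading], bool]  # predicate on one reading

def surveil(readings: List[Reading], rules: List[Rule]) -> List[Tuple[str, Reading]]:
    """Return (rule name, reading) pairs for every exception found."""
    alerts = []
    for r in readings:
        for rule in rules:
            if rule.triggered(r):
                alerts.append((rule.name, r))
    return alerts

rules = [
    Rule("low flow", lambda r: r.flow_rate_bpd < 100),
    Rule("high pressure", lambda r: r.tubing_pressure_psi > 2500),
]

readings = [
    Reading("W-01", 1800, 450),  # healthy: no rule fires
    Reading("W-02", 2650, 90),   # trips both rules
]

alerts = surveil(readings, rules)
for name, r in alerts:
    print(f"{r.well}: {name}")
```

A production system would evaluate rules continuously against streaming data and route alerts into tasks, but the principle is the same: engineers define the conditions once, and only exceptions surface.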

A practical example: foreign materials – salt, scale, paraffin and others – tend to build up in produced-water wells.


This build-up occurs slowly over time and can hinder production if an acid treatment isn’t mobilised in due course.

This is a scenario where you could use field-data-capture information, an injection curve and SCADA data to monitor the pressure-flow correlation against the curve over time.

Using the derived operational intelligence, you could then create a rule that delivers prioritised notifications, showing the event in context long before material build-up becomes a serious issue. That gives you plenty of time to plan the intervention activities needed to keep production rates at their highest.
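As a rough illustration of such a rule, the sketch below linearly interpolates a baseline injection curve and flags a well when recent readings fall persistently below the expected rate at the observed pressure – the signature of build-up requiring ever more pressure per barrel. The curve points, tolerance and window size are illustrative assumptions:

```python
# Hypothetical build-up detector: compare observed injection performance
# against a baseline injection curve of (pressure psi, rate bpd) points.

def expected_rate(pressure, curve):
    """Linearly interpolate the expected flow rate on the baseline curve."""
    pts = sorted(curve)
    for (p0, q0), (p1, q1) in zip(pts, pts[1:]):
        if p0 <= pressure <= p1:
            return q0 + (q1 - q0) * (pressure - p0) / (p1 - p0)
    raise ValueError("pressure outside curve range")

def buildup_suspected(samples, curve, tolerance=0.15, window=3):
    """True when the last `window` (pressure, rate) samples all fall more
    than `tolerance` (fractional) below the baseline curve."""
    recent = samples[-window:]
    if len(recent) < window:
        return False
    return all(
        rate < (1 - tolerance) * expected_rate(p, curve)
        for p, rate in recent
    )

curve = [(1000, 500), (1500, 800), (2000, 1000)]       # baseline
samples = [(1200, 610), (1400, 600), (1500, 640), (1600, 690)]

print(buildup_suspected(samples, curve))  # last three samples underperform
```

Requiring several consecutive underperforming samples, rather than one, is what keeps the notification from firing on transient noise – the rule reflects a sustained trend, which is exactly the slow build-up pattern described above.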


Today, there is still significant opportunity in better enabling subject matter experts to optimise production and manage the constraints of producing assets.

Establishing an effective self-service operational intelligence program will deliver immediate benefits whilst reinvesting that knowledge in a data foundation that can enable any AI program.

There is no doubt that the ongoing application of artificial intelligence technologies will bring new perspectives and benefits to production in the oil and gas industry.

Today though, there are significant opportunities through self-service operational intelligence to consolidate, contextualise and capture subject matter expertise.


By leveraging the connected data, calculations and events within the operational intelligence environment, AI and machine learning solutions will be more comprehensive and on target.

Improved profitability will happen when data from across the enterprise is brought together and presented to the right people, and the right systems, at the right time.

Businesses should consolidate around a strategy that better empowers subject matter experts to deliver benefits whilst establishing the data foundation for an AI program.

In addition to the benefits provided by operational intelligence, AI tools can adopt that same data foundation investment to identify incremental opportunities and provide insights to engineers and operators.


Sourced by Grant Eggleton, vice president, global production solutions at P2 Energy Solutions
