May 4, 2026
The energy industry is buying AI, but not scaling it. Here is why.
At CERAWeek just a short while ago, AI was everywhere. On stage, in the hallways, in every conversation about what comes next for energy operations. The ambition is real. The investment is real.
And yet, when you look past the announcements, most organizations deploying AI, and more specifically AI agents, are discovering the same uncomfortable truth: buying the capability is the easy part.
Making it work at scale – in production, in the hands of people who have to trust it with real decisions – is a different problem entirely. This is not a technology failure. The technology is ready; the capability to scale is not.
62% of organizations are experimenting with AI agents, but only 23% have deployed them at scale.
McKinsey's data shows 62% of organizations are experimenting with AI agents. However, scaling remains a challenge: only 23% have deployed them at scale anywhere in their business, and fewer than 10% have achieved scale within any single, specific department. BCG found that 74% of companies are yet to show tangible value from their AI investment. The gap between aspiration and operational reality is widening.
The question worth asking is not "what can agents do?" We already know the answer to that. The question is: why aren't we scaling them?
The trust problem
When an operator asks an AI agent why it recommends a particular course of action and the system cannot tell them, they stop using it. Not immediately, and not loudly. They just route around it. They go back to the whiteboard, the spreadsheet, the colleague they trust. The brilliant basics of how industrial work actually gets done.
McKinsey research confirms this pattern. In a 2024 survey, 40% of respondents identified explainability as a key risk in adopting AI – and yet only 17% said they were doing anything to mitigate it. Another gap, this time between recognizing a problem and doing something about it. This is where agents stall.
We only trust what we can verify. As an industry, we have too much at stake to take recommendations on faith.
Shane McArdle, CEO, Kongsberg Digital
Our industry has absorbed more change over the past two decades than most sectors. But look, we are a rational industry. We only trust what we can verify. And we have too much at stake to take recommendations on faith.
This is the organizational readiness problem, and it sits upstream of everything else. You can deploy the most capable agent in the world. If the workforce cannot see how it reasons, cannot challenge its conclusions, cannot understand when to override it – you have a liability, not a tool.
Scale is not just speed
Looking outside the energy sector can help us understand what true scale looks like. Visma, one of Europe's largest software companies, has shifted to an API-first development approach – validating whether an API solves a customer's problem before any backend code is written. Only once the design is proven do developers build. It is an extraordinary statement of intent, and it points to where capable organizations are heading: workflows where AI does not just assist work, it performs it.
That is not where most energy operations organizations are today. Nor should we pretend it is, but the direction is not wrong.
The gap between what AI can do and what organizations have equipped themselves to absorb is the strategic problem of this decade.
Shane McArdle, CEO, Kongsberg Digital
The lesson from companies moving at that speed is not that we should try to match their pace in environments where physical safety and regulatory accountability are the stakes. The lesson is that the gap between what AI can do and what organizations have equipped themselves to absorb is the strategic problem of this decade. Every quarter spent in pilot mode, running the same proof of concept for the third time, waiting for conditions to be perfect, is a quarter that gap widens.
Most companies are stuck on the first step not because they lack ambition, but because they have not yet built the internal scaffolding that lets people work alongside AI with confidence. At Kongsberg Digital we are actively building that scaffolding today inside our own operations: AI agents are already being applied across engineering, design and product management, and are increasingly moving out of the conceptual phase into the daily workflows where practical value is built. The more we use AI internally, the more intuition we build around what is possible with it – and the better we understand how it can impact organizations and be scaled across business functions.
What does readiness actually look like?
It means your people understand, at least at a working level, how AI-generated recommendations are produced and what data informs them. Not every person, and not at a model-architecture level, but enough that the technology feels like a colleague whose reasoning you can follow – not a black box handing you instructions.
It means you have established where human judgement is non-negotiable, and built that harness into the design of your agent workflows from the start, not retrofitted it after a near-miss. Human-in-the-loop is not a compromise. In our complex industrial environments, having us in the loop is the only architecture that earns sustained adoption.
And it means you have done the unglamorous work on your data foundations. Not because the data needs to be perfect – it does not – but because an agent running on poorly contextualized data will make recommendations that feel wrong to experienced operators, and experienced operators who feel the system is wrong will stop trusting it. Once that trust breaks, it is extremely hard to rebuild. You cannot scale intelligence on top of chaos.
Our biggest constraint is courage
None of this is new thinking. We have been talking about digital transformation for years. But agents raise the stakes, because agents act. They do not just inform; in the most advanced deployments, they recommend, initiate, execute. The cost of an adoption failure at agent level is not a dashboard nobody uses. It is a workflow that breaks, or worse, one that keeps running quietly in directions nobody intended.
Our biggest constraint is this: organizational courage. The willingness to redesign workflows, invest in workforce literacy and AI fluency, and build the harnesses that act as a protective layer so trust can be built in tandem with adoption.
At this Future Digital Twin & AI event, I expect we will hear a lot about what agents can do. I am more interested in talking about what it takes to make them work – for the operators who have to use them, the engineers who have to maintain them, and the organizations that have to stake their performance on them.
That conversation, I think, is the one we actually need.
Author

Shane McArdle
CEO, Kongsberg Digital