A conversation with Kai Bender and Rainer Glaser, Oliver Wyman

"We have left the explanatory stage in AI"

In discussions between management consultants and their clients, artificial intelligence has become a fixture. Increasingly, the conversations revolve around concrete applications, report Kai Bender and Rainer Glaser of Oliver Wyman.

"We have left the explanatory stage in AI"

When Rainer Glaser talks to a client, he knows what to expect: "In almost all discussions, artificial intelligence comes up in some form," says the Oliver Wyman partner, who specializes in AI. The focus of those conversations has shifted, however: "We have left the explanatory stage in AI," observes Kai Bender, head of Oliver Wyman in Germany. It is now more about demonstrating how the technology can be embedded in an organization.

Two issues play a major role for clients. "They want to derive a quantifiable benefit from investing in AI," says Bender. Just as important is an aspect his colleague Glaser sums up under the keyword "people": "An algorithm can be as convincing as it likes: if employees do not accept and use it, I have gained nothing." The reasons for rejection range from a lack of technical interest to fears that AI could put employees' own jobs at risk.

It is not enough for a consulting firm to demonstrate under laboratory conditions what is fundamentally possible with AI.

Kai Bender, Oliver Wyman

One problem: the algorithms are not necessarily developed by the people who will later work with them. Bender sees his own industry as having a responsibility here. "It is not enough for a consulting firm to demonstrate under laboratory conditions what is fundamentally possible with AI," he says. "We must consider the conditions under which users are actually supposed to work with it: real life."

According to Glaser, programming and training an algorithm accounts for only a fraction of the work: "In my AI projects, maybe 20% of the time goes into training the algorithm; the rest is about integration into the value chain and organizational issues such as data protection." Advances in generative AI have expanded the range of applications further, although some tasks suit the technology better than others.

Algorithms need to learn

The reason: many AI applications are based on large language models (LLMs), which are trained primarily on language. Anyone who wants to use AI to produce a risk report must first teach the LLM how to deal with numerical information. "Handling statistics and numerical information has to be trained," explains Glaser.

When processing language, LLMs have the advantage of keeping track of far larger volumes of material than any human could. One application is early risk detection, where Oliver Wyman collaborates with the news service Dow Jones: the "Factiva Sentiment Signals" technology is designed to support credit decisions and the detection of supply chain risks.

Glaser is convinced that AI could have detected risks ahead of collapses like those of Silicon Valley Bank or Wirecard. "Looking back, there were signals in the news flow that the tool would have responded to," he says. The predictions would be even better if the automated analyses drew not only on news but also on investor relations reports and capital market data.

As a user, one can also introduce flawed expertise.

Rainer Glaser, Oliver Wyman

But what if the tool falls for misinformation? "It is a statistical tool; it can be wrong, too," Glaser emphasizes. That makes it essential for the application to show transparently which findings and signals a warning is based on. AI also still struggles with ironic comments unless it is adequately trained.

Whoever checks a warning message can then give the AI feedback on the decision, which flows into future analyses. But this carries a risk of its own, because humans can be deceived too. If an employee repeatedly dismisses a warning raised by the algorithm, the AI receives the signal that such warnings are not relevant, even when it was actually on the right track. "As a user, one can also introduce flawed expertise," Glaser warns.

Great uncertainty

Even though artificial intelligence is on the agenda at almost every company, a great deal of uncertainty remains. Bender often sees clients worried about falling behind. "The question 'What are the others doing?' comes up again and again," he reports. Behind it lies decision-makers' fear that direct competitors might already be further ahead.

Not every problem needs to be tackled with the latest generation of generative AI, however, says Glaser. Instead, companies should look at what is really needed to achieve the desired result: "Many companies have not yet fully exploited the potential of Advanced Analytics." Even with technologies that have been in use for longer, there is often still room for improvement.