With virtually all companies looking at AI, what are some of the key risks they need to consider before implementation?

Today virtually all companies are forced to innovate, and many are excited about AI. Yet because implementation cuts across organisational boundaries, shifting to an AI-driven strategy requires new thinking about managing risks, both internally and externally. This blog will cover “the seven sins of enterprise AI strategies”: governance issues at the board and executive levels that block companies from moving ahead with AI.

By Jeremy Barnes, Element AI

1- Disowning the AI strategy

This is probably the most important sin. Here, the CEO and board say that AI is a priority but delegate it to a separate department or an innovation lab. Success does not depend on whether a company uses an innovation lab; it depends on whether the leadership is truly invested. The bottom line is that the CEO and board need to actively lead the AI strategy.

2- Ignoring the unknowns

This happens when companies say they believe in AI but don’t reach a level of proficiency where they can identify, characterise and model the threats that emerge with new advances. Even if a company decides not to go all-in on AI innovation, it still needs a hypothesis for how to address AI and an early warning system so the decision can be re-evaluated early enough to act. Being a fast follower requires as much organisational preparation and lead time as being a leader.

3- Not enabling the culture

Implementing AI depends on an experimentation mindset, and that mindset, along with an openness to failure, needs to be adopted across the company. Organisations should keep in mind that AI doesn’t respect organisational boundaries. Most companies want high-impact, low-risk solutions, which tend merely to optimise existing processes rather than open new value streams. Accepting increased risk in exchange for impact is hard, but it comes as part of the continuous cultural enablement of an experimental mindset.

4- Starting with the solution

This is the most common sin. Understand the specific problems you’re trying to solve, because AI is unlikely to be the answer to all of them, and blindly implementing a horizontal AI platform certainly isn’t. Have the conversation at board level to ensure that the priority is an overarching AI strategy, not a set of quick-fix solutions.

5- Losing risk, keeping reward

As mentioned in the third sin, it is natural for companies to want to implement AI without any risk. But there is no reward without risk. A vendor motivated to decrease risk will also decrease innovation, and ultimately impact, by keeping successes small and failures non-existent. AI creates differentiation only for companies that are willing to learn from both their successes and their failures. A company that doesn’t effectively balance risk in AI ultimately increases its risk of being disrupted.

6- Vintage accounting

Attempting to fit AI into traditional financial governance structures causes problems: AI doesn’t fit neatly into budget categories, and its output is hard to value. The link between what you put in and what you get out can be less tangible and less predictable, which often makes it harder to square with existing plans and structures. Model the rate of return on AI and all data-related activities, and account for how they affect profit (not just loss) and assets (not just liabilities).

7- Treating data as a commodity

The final sin concerns data and its treatment as a commodity. Data is fundamental to AI: poorly handled, it degrades decision-making, so it should be treated as an asset. The stronger, deeper and more accurate the dataset, the better the models you can train and the more intelligent the insights you can generate. At the same time, personally identifiable information stored about customers can be stolen, risking heavy penalties in some jurisdictions. Build towards data from a use case rather than investing blindly in data centralisation projects.

So, now you know what not to do. Here are some simple things you can do to move ahead. First, talk to your board about how long it will take to become an AI innovator, modelling it out rather than simply discussing it conceptually.

Second, prepare for change and put monitoring in place. AI shifts all the time, so you’ll want to check in regularly to adjust and pivot your strategy. It’s important to develop a basic skill set so you can redo planning exercises with your board.

Third, model out the risks of both action and inaction. Don’t take the traditional approach, which pushes risk down to individual business units and then compensates those units for reducing risk rather than managing trade-offs. Instead, view those trade-offs in terms of risks and rewards, and start to think about how you are accounting for the assets and liabilities of AI. Ultimately, you want to model the actual rate of return for all of these activities, then benchmark it against other companies across the industry. That will give you a good picture of the current situation and where to go.
