A million pompous tweets, a thousand pontifical TEDx talks and hundreds of unnecessary hot takes don’t lie: Artificial Intelligence (AI) is here, and it’s here to stay. Ok. Good. What now? Well, before AI can truly be called a democratised technology, we have to go beyond Silicon Valley startups and implement it within small and medium businesses and governments to reap the rewards promised by the technology.
And so we must ask ourselves: how does a non-tech company go about it? What are the pitfalls to avoid? Where to begin? Below are a few lessons I’ve learned throughout my time as a technology consultant for some of Europe’s largest companies.

Companies have lost years of progress to false issues and silly excuses, and continue to do so today. Before starting any AI project, some housekeeping is necessary to make sure that none of the issues highlighted below can be used as excuses to slow the project down. Write the answers to these questions in your project manifesto, and watch all the politically driven push-back just melt away.
This should not be an issue for most companies: most AI projects are not built to scale at this point in time. They are often just proofs of concept, or address a very specific pain point, with few users affected. If, tomorrow, an AI solution is to be scaled throughout an entire organisation, it will simply have to be co-built with the end users and stakeholders. Using an agile methodology, it is then possible to ensure all stakeholders are happy with the solution.
Yes. Yes it is. There is no such thing as perfect data, even within the tech giants of this world. But you’ve got to start somewhere. And that somewhere is benchmarking the quality of the existing data against a concrete purpose or pre-defined use case. Only then will an improvement plan start to emerge, which can be implemented step by step with the relevant management support. Running around trying to fix every data issue at once will only waste energy and resources.
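To make “benchmarking against a concrete purpose” tangible, here is a minimal sketch in Python. The DataFrame, the column names and the churn use case are all hypothetical stand-ins; the point is simply that you score only the fields your use case depends on, not the whole data lake.

```python
# A minimal sketch of use-case-driven data quality benchmarking.
# Column names ("customer_id", "order_date", "amount", "churn_flag")
# are hypothetical: swap in the fields your use case actually needs.
import pandas as pd

def benchmark_quality(df: pd.DataFrame, required_columns: list[str]) -> pd.DataFrame:
    """Score only the columns the use case depends on."""
    rows = []
    for col in required_columns:
        present = col in df.columns
        rows.append({
            "column": col,
            "exists": present,
            "completeness": df[col].notna().mean() if present else 0.0,
            "distinct_values": df[col].nunique() if present else 0,
        })
    return pd.DataFrame(rows)

# Example: benchmark a (made-up) orders extract against a churn use case.
orders = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "order_date": ["2021-01-03", None, "2021-02-11", "2021-02-28"],
    "amount": [120.0, 80.5, None, 42.0],
})
print(benchmark_quality(orders, ["customer_id", "order_date", "amount", "churn_flag"]))
```

A report like this, scoped to one use case, is what turns “our data is a mess” into a concrete, prioritised improvement plan.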
The answer is most often yes: these two core teams often do not communicate enough, and seldom share the same goals. This can cause rifts within the organisation: operational teams may want to increase productivity, while IT teams are likely to pursue cyber security and cost-saving options. There is, however, no point in trying to artificially link them without first having a clear purpose. Once a goal is set, a few people from each team might meet regularly to agree on a common strategy, ensuring that no part of the organisation feels left out of an AI transformation. They would then report back and guarantee that the decisions are implemented within each team as specified and planned.
Beware of artificial and “above-ground” governance: it’s all too easy to define a governance structure before the project has even started, and then try to adapt the project to said governance. This would be like writing a recipe with no cooking experience. Governance must be a consequence of the implementation of an AI use case, not its prerequisite.
There are currently only 22,000 PhD-level experts worldwide capable of developing cutting-edge algorithms. And many of them work for the big tech companies (you know the ones I’m talking about). It will be incredibly hard (and expensive) for an SME to hire one of them. The good news is that it is not necessary. Most projects are not trying to push the frontier of AI knowledge, but to use what already exists. As such, the skills an executive team thinks are missing are likely to be very different from the ones actually needed for a small AI project. What truly matters nowadays is business commitment, algorithm robustness and the architecture of the underlying IT systems.
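As a hedged illustration of “use what already exists”: a standard scikit-learn pipeline, assembled from well-documented off-the-shelf components, covers many SME-scale prediction problems without any novel algorithm design. The bundled dataset below is just a toy stand-in for whatever tabular problem your use case presents.

```python
# Off-the-shelf components only: no PhD-level research required.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy dataset standing in for a real tabular business problem.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A textbook pipeline: scale the features, fit a linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The hard part of a project like this is rarely the model; it is making sure the pipeline is robust, maintained, and plugged into the systems the business actually uses.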
CxOs are famously busy people, and rarely have the time to sit through lecture after lecture on a particular topic. This is why corporate teams are very fond of “learning expeditions”: they place executives in a new environment where they’re forced to listen to experts on the matter, and get them away from their partners and kids for a few days.
As much as I like organising learning expeditions, I’ve always been sceptical about their effectiveness beyond that of a free vacation for the executive team. Instead, I’m a proponent of planning ahead and hiring or promoting executives based not only on their merits in their respective fields, but also on their data-management knowledge and/or curiosity about the matter.
Shiny new technology often fails to provide real customer value (when was the last time you used Bitcoin?). This is why it’s important to keep focusing on the “business pull” rather than the “techno push”: seemingly brilliant solutions sometimes (often) arrive in the wrong place, at the wrong time. During any AI project, it’s important not to lose track of why the project was created, and for whom.
While the uninformed masses run around discussing innovative start-ups, data science, POCs, deep learning, Elon Musk…, experts are keen to talk about the darker side of AI projects: data quality, engineering, architecture, HR and business-model transformation. Those aspects of a project are all too often under-appreciated and undervalued, yet they are at the very centre of a positive AI transformation. Remember: AI is supposed to be boring; it’s just statistics, and statistics are the worst.
It’s very easy to get over-excited at the beginning of an AI project. We hear words like big data, data-centricity, DQM, no-code, RPA… and believe we can tame these concepts to bring value to the entire corporation and finally get that well-deserved promotion. But beware: all of these ideas have generated tremendous disappointments in the past, and could put the entire project at risk if thrown around carelessly. Steering clear of over-selling is often the key to a successful project.
Or any best-in-class AI company, for that matter. Copying the tech giants is a fool’s errand that will yield virtually no success. Words can hardly describe how good they are at what they do, and how much it cost them to get there (with a little help from ineffective anti-trust laws). Every company, however, has unique assets which it can use to build unique algorithms that specifically fit its needs.
It may be hard to believe after reading all of the above, but far too many projects want to “hit the ground running”. Data scientists then start the design process mid-sprint (see what I did there?), without taking the time to understand the who, the what, the why and the how. Management is then surprised that they got a skateboard instead of a scooter. Let me be clear, if a little condescending: starting at the beginning is necessary to get to the end.
As mentioned above, it is important to take the time to answer a few key questions. The most important of these questions (as always) is “why”. Why does the company want to invest in AI, and what are its goals for its AI projects? Regardless of their coding or data analysis abilities, the people at the top have a key role to play in defining the strategy for an AI project. Without being given precise directions, teams will be left to aimlessly dig through data, looking for a story. And with no clear and agreed-upon goal, they’ll be left chasing a moving target, running the risk of rewriting history as new data comes in. That’s why the strategy defined BEFORE any project kick-off should be Specific, Measurable, Attainable, Relevant, and Time-bound.
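One lightweight way to force those answers into the open is to encode the project manifesto mentioned earlier as a structure whose fields mirror the SMART criteria, and to refuse to kick off while any of them is blank. The sketch below is a hypothetical illustration; all field values are invented for the example.

```python
# A hypothetical project manifesto whose fields mirror the SMART criteria.
from dataclasses import dataclass, fields

@dataclass
class ProjectManifesto:
    specific: str    # the exact pain point, and for whom
    measurable: str  # the KPI and its target value
    attainable: str  # why the data and skills at hand make this realistic
    relevant: str    # the business goal this serves
    time_bound: str  # the deadline for a first measurable result

    def is_complete(self) -> bool:
        """Refuse to kick off while any SMART field is left blank."""
        return all(getattr(self, f.name).strip() for f in fields(self))

# Illustrative values only.
manifesto = ProjectManifesto(
    specific="Reduce manual triage of inbound supplier invoices",
    measurable="Cut average triage time from 4 min to under 1 min per invoice",
    attainable="Two years of labelled invoices exist; off-the-shelf OCR suffices",
    relevant="Finance team spends roughly 30% of its time on triage",
    time_bound="First measured result by end of Q3",
)
assert manifesto.is_complete()
```

Whether it lives in code or on a slide, the value is the same: a goal that is written down before kick-off cannot quietly drift as new data comes in.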
“Everyone else is doing it” is a terrible reason to get into the AI game.
Once use cases have been prioritised, a “Smart Lab” roadmap can be created from that list (assuming the goal of the AI project is to create more AI project babies).
This post is meant as an introduction to AI project management; a longer version was published here: .