Today, it’s common to see technology and data teams under intense pressure from senior executives in their organizations to “deploy” AI — and fast. AI, and specifically generative AI, is the technology of the moment, and many C-suite leaders want to see advancements in this area immediately; Gartner found that 62% of CFOs and 58% of CEOs believe AI will have the most significant impact on their industries in the next three years.
However, technical teams are well aware that speed does not always equal success. In fact, excessive speed can hinder progress. So how can technical teams safely implement AI while still meeting management expectations?
The risks of a hasty implementation
First, it’s important to understand the risks of implementing AI too quickly and without a proper roadmap. One is over-deployment: two or more tools end up doing essentially the same thing, duplicating effort and creating wasted resources and unnecessary costs. This isn’t to say that harnessing the enthusiasm around AI is a bad thing, but rather that too much misdirected energy leads to wasted effort.
One of the things generative AI has really brought to the fore is the importance of data and information quality, and the consequences bad data can have. If a model goes haywire and gives incorrect answers that are then acted upon, the repercussions could be huge. For example, an employee might ask a model what deals they can offer customers; if it generates the wrong answer and the employee follows through on the offer, the company’s bottom line suffers. And if incorrect information reaches stakeholders, reputation and trust could be damaged. More than half of AI users already admit they find it difficult to get what they want from AI, and in a business context those risks multiply.
It’s not just the data that’s key, but also the models being used. This is especially important for heavily regulated organizations, but it’s a critical consideration for everyone. If decisions are being made based on a model’s output, that output must be replicable and traceable. A key challenge, therefore, is ensuring that a model is reliable, consistent and secure. Here the data used to train a model matters enormously: data is the fuel of AI, so its quality determines how accurate and reliable a model can be. Too quick a deployment can mean that key steps are overlooked, such as ensuring that accurate, high-quality data is used. If this step is done poorly, organizations will have to deal with the consequences down the road. Technical teams know this, but communicating it to senior executives can often be a real challenge.
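To make replicability and tracking concrete, one common pattern (a sketch, not the author's prescription) is to log each experiment's parameters, data version and evaluation metrics with an experiment tracker such as MLflow, so any result can be traced back to the exact model, settings and data that produced it. The experiment name, parameter values and metrics below are hypothetical placeholders.

```python
import mlflow

# Hypothetical experiment name used for illustration; with no tracking
# server configured, MLflow logs to a local ./mlruns directory.
mlflow.set_experiment("llm-experiments")

with mlflow.start_run(run_name="baseline-prompt-v1"):
    # Record everything needed to reproduce this result later.
    mlflow.log_param("base_model", "example-base-model")      # hypothetical
    mlflow.log_param("temperature", 0.2)
    mlflow.log_param("training_data_version", "2024-05-01")   # ties output to data
    # Metrics from an offline evaluation run (placeholder values).
    mlflow.log_metric("answer_accuracy", 0.87)
    mlflow.log_metric("hallucination_rate", 0.04)
```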
Controlled experimentation with AI: finding the middle ground
What if there were a middle ground where both sides could come to an agreement? There is, and it lies in AI experimentation. A recent MIT study found that 56% more companies are registering experimental models compared to last year, and with good reason. Experimentation has huge potential benefits: it can be aimed directly at a business’s pain points, bringing the business closer to the technology, and it can help identify the most valuable generative AI use cases, the ones that will have the biggest impact on the organization.
Experimentation also means running AI in a safe and controlled environment, where issues can be detected and resolved before production. For example, if a model is giving employees inaccurate answers, developers can go back to the data the model is trained on and address those issues before officially deploying it. Experimentation can also help organizations identify what governance needs to be in place: is an operating model, or at least a coordinated set of handoffs between teams, needed to govern the end-to-end generative AI lifecycle? It can also highlight where skills are plentiful and where the gaps are, which in turn lets organizations plan future upskilling programs. Lastly, and arguably most importantly, experimentation can surface the data issues that need to be addressed before any generative AI model is fully put into production.
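To make the "detect issues before deployment" step concrete, here is a minimal sketch of an offline evaluation gate run against a small golden set of known question/answer pairs. Everything in it is an assumption for illustration: `ask_model`, the golden set and the 95% threshold would all come from your own stack and your own agreement with the business.

```python
# Minimal pre-deployment evaluation gate. `ask_model` is a hypothetical
# stand-in for whatever inference call your stack provides.
GOLDEN_SET = [
    {"question": "What discount applies to orders over $1,000?", "expected": "5%"},
    {"question": "What is the standard warranty period?", "expected": "12 months"},
]

ACCURACY_THRESHOLD = 0.95  # hypothetical bar agreed with the business

def ask_model(question: str) -> str:
    # Placeholder: wire this to your model serving endpoint.
    return "The standard warranty period is 12 months."

def evaluate() -> float:
    # Simple containment check; real evaluations would use task-specific
    # metrics or human review.
    correct = sum(
        1 for case in GOLDEN_SET
        if case["expected"].lower() in ask_model(case["question"]).lower()
    )
    return correct / len(GOLDEN_SET)

if __name__ == "__main__":
    accuracy = evaluate()
    # Block promotion to production if the model misses the bar.
    if accuracy < ACCURACY_THRESHOLD:
        raise SystemExit(f"Accuracy {accuracy:.0%} below threshold; do not deploy.")
    print(f"Accuracy {accuracy:.0%}; safe to promote.")
```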
Experimenting in this way works for both management and technical teams. Management sees results on AI, confirming that the business is not falling behind, while technical teams retain more control over the pace and quality of implementation. However, for experimentation to be truly effective, there are a few things technical teams need to keep in mind.
Leveraging technology for safe AI experimentation
Accessing or unifying data in one place is a key enabler for generative AI. Platforms like the Data Intelligence Platform can unify data and models, giving organizations a single place to access the data their generative AI use cases need. Finding AI tools that provide a safe place to experiment will also be key, letting end users access and validate multiple LLMs and select the most appropriate one for their use cases. Lastly, having proper governance in place allows organizations to monitor and manage data and model access, as well as performance and lineage, all of which are critical to the success of generative AI.
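As one illustration of validating multiple LLMs against the same use case, the sketch below compares candidates side by side on a shared evaluation set. The candidate names, the `generate` routing function and the containment-based scoring are assumptions made for the sketch, not any specific product's API.

```python
from statistics import mean

# Hypothetical candidate identifiers; replace with the LLMs you can access.
CANDIDATES = ["model-a", "model-b", "model-c"]

def generate(model: str, prompt: str) -> str:
    # Placeholder: route the prompt to the serving endpoint for `model`.
    return ""

def score(answer: str, expected: str) -> float:
    # Deliberately simple containment check; real evaluations would use
    # task-specific metrics or human review.
    return 1.0 if expected.lower() in answer.lower() else 0.0

def pick_best(eval_set: list[tuple[str, str]]) -> tuple[str, dict[str, float]]:
    # Score every candidate on the same evaluation set, then pick the top one.
    results = {
        model: mean(score(generate(model, q), a) for q, a in eval_set)
        for model in CANDIDATES
    }
    return max(results, key=results.get), results
```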
In the high-stakes race to harness the power of AI, it’s critical to find a balance between speed and caution. Pressure from senior executives to deploy AI quickly is understandable, yet as technical teams know well, a hasty implementation can lead to significant roadblocks. AI experimentation offers a pragmatic middle ground, allowing businesses to meet the urgent demands of leaders while ensuring robust and reliable AI systems that can truly drive transformative success.
This article was produced as part of TechRadarPro's Expert Insights channel, where we showcase the best and brightest minds in the tech industry today. The views expressed here are those of the author, and not necessarily those of TechRadarPro or Future plc. If you're interested in contributing, find out more here: