As more organizations experiment with GenAI, the landscape of emerging AI models keeps broadening. The sheer variety of models available means that organizations that have moved past the question of whether to use AI at all now face an even more daunting one: which model should they use?
With an overwhelming number of options on the market and new models constantly being developed and deployed, many companies are unsure which direction to go in and which model to adopt to best support their application development. As more models and versions are introduced, organizations need to take a flexible approach to selecting AI models, shifting the focus from finding a single best-fit vendor to taking a balanced, future-proof approach with an LLM Mesh.
Director of Sales Engineering at Dataiku.
The risks posed by dependence on a single supplier
Relying solely on a single model is risky. Say a company centers its healthcare business applications around a single AI model without integrating any others. If that one model produces inaccurate results and recommendations, the fallout is not only financial; the broader market also loses trust in the company. How do we know this is true? Because it happened to IBM, which centered its healthcare applications on the Watson AI model. When the model sometimes provided inaccurate information, trust eroded, the company's reputation took a major hit, and its healthcare division has struggled to recover ever since.
Despite the importance of tools like OpenAI's ChatGPT, concerns about their governance have raised questions and doubts among investors and those responsible for integrating new technologies. As the IBM case shows, there is operational risk when companies jump on a wave and tie themselves to a single AI model. Avoiding vendor lock-in is therefore crucial to navigating the fast-paced AI landscape and to easing concerns about security, ethics, and stability. That's why companies are encouraged to shift their perspective from single-vendor lock-in to jumping into all the different waves of AI, using an LLM Mesh.
LLM Mesh: Jump between the waves
With LLM Mesh, companies can take advantage of the wave of AI models while preparing for future changes. By removing the complexities of backend connections and API requirements, LLM Mesh makes it simple to transition or “wave-jump” from one model to another quickly.
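To make the idea concrete, here is a minimal sketch of what such an abstraction layer can look like in Python. The class and function names (LLMBackend, ProviderA, ask, and so on) are illustrative assumptions, not Dataiku's actual LLM Mesh API; in a real system each backend would wrap a vendor SDK or REST endpoint.

```python
# A minimal sketch of the idea behind an LLM Mesh-style abstraction layer.
# All names here are illustrative, not a real product's API.
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Common interface every model provider is wrapped behind."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for a prompt."""


class ProviderA(LLMBackend):
    # In practice this would call a vendor SDK or REST API;
    # here it just echoes so the sketch stays runnable.
    def complete(self, prompt: str) -> str:
        return f"[provider-a] response to: {prompt}"


class ProviderB(LLMBackend):
    def complete(self, prompt: str) -> str:
        return f"[provider-b] response to: {prompt}"


# The application only ever talks to this registry, so "wave-jumping"
# to a new model is a one-line configuration change.
BACKENDS: dict[str, LLMBackend] = {
    "provider-a": ProviderA(),
    "provider-b": ProviderB(),
}


def ask(prompt: str, model: str = "provider-a") -> str:
    return BACKENDS[model].complete(prompt)


if __name__ == "__main__":
    print(ask("Summarize this patient note.", model="provider-b"))
```

Because the application only ever calls ask(), switching from one provider to another is a configuration change rather than a rewrite.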
The benefit of “wave jumping” is that it allows companies to develop enterprise applications with today’s best AI models while keeping the option to shift, either by jumping to a more suitable model now or by leaving room for emerging models as they come to market.
Businesses must make informed decisions about the cost of running LLMs, which can be quite high, while also choosing the right model for each application’s performance needs. Keeping options open on cost, performance, and security allows businesses to keep benefiting in a rapidly changing landscape.
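As a hedged illustration of how those trade-offs can stay open in practice, the sketch below picks a model from a small catalog based on cost and data-hosting constraints. The model names, prices, and latency tiers are placeholders invented for the example, not real vendor figures.

```python
# Illustrative criteria-based model selection; the models, prices, and
# attributes below are placeholders, not real vendor figures.
from dataclasses import dataclass


@dataclass
class ModelProfile:
    name: str
    cost_per_1k_tokens: float   # illustrative USD figure
    latency_tier: int           # 1 = fastest
    self_hosted: bool           # relevant for data-sensitivity requirements


CATALOG = [
    ModelProfile("large-hosted-model", cost_per_1k_tokens=0.03, latency_tier=2, self_hosted=False),
    ModelProfile("small-hosted-model", cost_per_1k_tokens=0.002, latency_tier=1, self_hosted=False),
    ModelProfile("open-weights-model", cost_per_1k_tokens=0.001, latency_tier=3, self_hosted=True),
]


def pick_model(max_cost: float, require_self_hosted: bool = False) -> ModelProfile:
    """Return the fastest model that satisfies cost and hosting constraints."""
    candidates = [
        m for m in CATALOG
        if m.cost_per_1k_tokens <= max_cost and (m.self_hosted or not require_self_hosted)
    ]
    if not candidates:
        raise ValueError("No model in the catalog meets the constraints")
    return min(candidates, key=lambda m: m.latency_tier)


if __name__ == "__main__":
    # A cost-sensitive workload that must stay on self-hosted infrastructure.
    print(pick_model(max_cost=0.005, require_self_hosted=True).name)
```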
The imperative to jump now
Why jump now? Nearly 90% of executives consider GenAI a top technology priority. Waiting for the perfect wave is a recipe for competitive disadvantage: companies that hold off on riding the AI wave risk being left behind. To take advantage of the momentum, companies must fully immerse themselves in the use of AI. As of 2024, there are over 125 LLM business models available, with a rapid 120% increase in models launched between 2022 and 2023. The landscape is growing and new models keep arriving on the market, so there is no better time than now for companies to ride the wave.
Ultimately, companies that want to ride the GenAI wave without suffering the downsides of vendor lock-in have only one option: adopting an LLM Mesh approach. Not only does this approach offer the flexibility to choose the model that best aligns with an organization’s priorities, it also helps future-proof AI applications and projects so that a company can always take advantage of the latest models. Organizations that ride the AI wave in a smarter, more agile way stand a much better chance of staying ahead of the competition.