AI has been all the rage in 2023. Online, at conferences, and in articles like this one, you can't get away from the topic. But AI has been around for a while. So, beyond the hype and the headlines, what's behind its sudden emergence as a concern for businesses around the world?
We have reached a critical mass of global connectivity, and the computing power now available has enabled the rise of massive data sets. With that computing power, those networks, and large data sets (such as those used to train large language models, or LLMs), AI has become commonplace. It is now both more available and more needed, which is why there is so much fuss around it.
And the buzz seems to go beyond the normal clamor that greets a new technology. AI looks set to shape every aspect of the future, changing not just what it means to do business but also what it means to be human.
These are the big, existential questions behind AI. But what does all this mean in practice, on a day-to-day basis?
As I said, the basis of AI is huge amounts of data, and managing this constant flood of data has become one of the biggest information challenges companies must overcome. While interacting with AI may seem simple from a user's perspective, it involves many sophisticated technologies working together behind the scenes: big data, natural language processing (NLP), machine learning (ML), and more. Integrating these components, ethically and effectively, requires experience, strategy, and knowledge.
Specialized versus generalized: making the most of AI
The most high-profile AI tools, such as ChatGPT or Bard, are examples of generalized AI. These work by ingesting data sets from publicly available sources (i.e. the entire Internet) and processing that data into results that look plausible to humans.
But the problem with using generalized AI models in business is that they are subject to the same inaccuracies and biases that we have become accustomed to on the Internet in general.
Therefore, to achieve maximum impact, companies should not rely on generalized AI models. Instead, specialized AI models are the most effective way to manage the deluge of data that comes with AI. Specialized AI tools are like generalized ones in that they are also built on LLMs, but the big difference is that they are trained on specialized data that is verified by subject matter experts before it is incorporated into the LLM.
Specialized AI algorithms can therefore analyze, understand, and generate content that can be relied on for domain-specific accuracy. This capability is crucial to avoiding the kind of pitfalls we've already seen with generalized AI, such as lawyers including inaccurate information generated by ChatGPT in their court filings. But the question remains: how can companies best manage the enormous amounts of data created by taking a specialized approach to AI?
Manage the flood of data with specialized AI models
Any successful approach will involve effective strategies for data collection, storage, processing, and analysis. As with any technology project, defining clear objectives and governance policies is key, but data quality is arguably even more important. The old saying “garbage in, garbage out” applies here: the success of any specialized AI model depends on the quality of its data, so companies must implement data cleaning and validation processes.
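To make that concrete, here is a minimal sketch of the kind of cleaning and validation step described above. It is illustrative only: the record fields (text, reviewed_by) and the rules are assumptions for the example, not any particular vendor's pipeline.

```python
import re

def is_valid(record: dict) -> bool:
    """Keep only records that are non-empty and marked as expert-reviewed."""
    text = record.get("text", "").strip()
    return bool(text) and record.get("reviewed_by") is not None

def clean(record: dict) -> dict:
    """Normalize whitespace and strip stray markup before training."""
    text = re.sub(r"<[^>]+>", " ", record["text"])   # drop HTML remnants
    text = re.sub(r"\s+", " ", text).strip()         # collapse whitespace
    return {**record, "text": text}

def prepare_corpus(records: list[dict]) -> list[dict]:
    """Garbage in, garbage out: validate first, then clean, then deduplicate."""
    cleaned = [clean(r) for r in records if is_valid(r)]
    seen, corpus = set(), []
    for r in cleaned:
        if r["text"] not in seen:   # repeated documents would skew the model
            seen.add(r["text"])
            corpus.append(r)
    return corpus
```

The point is not the specific rules but the order of operations: check provenance first, then normalize, then deduplicate, so that only verified, consistent material reaches the training set.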
Data storage infrastructure, lifecycle management, cross-system integration, and version control should also be considered and planned before implementing a specialized AI model. Getting all of this in place will help businesses better handle the large volumes of data the model generates, and continuous monitoring is required to evaluate model performance over time.
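As a rough illustration of what continuous monitoring can mean in practice, the sketch below scores a model's answers against an expert-verified reference set and flags questions that fall below a threshold. The model_answer callable, the reference fields, and the 0.9 threshold are assumptions for the example, not features of any specific product.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude text similarity; a real evaluation would use domain-specific metrics."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def evaluate(model_answer, reference_set: list[dict], threshold: float = 0.9) -> dict:
    """Compare model output with expert-approved answers and report failures."""
    flagged = []
    for item in reference_set:
        answer = model_answer(item["question"])
        if similarity(answer, item["approved_answer"]) < threshold:
            flagged.append(item["question"])
    score = 1 - len(flagged) / max(len(reference_set), 1)
    return {"score": score, "flagged_questions": flagged}
```

Running something like this on a schedule gives teams an early warning when a specialized model's answers start to diverge from what subject matter experts have approved.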
But companies should also consider AI ethics here, just as they would with generalized AI. Specialized AI models can be prone to domain-specific biases, and what is considered ethical in one industry may not be in another, so any specialized AI product must be used judiciously. Additionally, specialized LLMs may struggle with nuanced or context-specific aspects of language, which can lead to misinterpretation of the data provided and to inappropriate or inaccurate results.
This complexity, of course, dictates that human input and continuous monitoring are key. But it also reinforces the importance of both departmental and industry collaboration to ensure that any use of AI is ethical and effective. Sharing data and knowledge can be a key step in improving the quality of the underlying data and, when done correctly, can also help keep that data secure.
Ultimately, as AI becomes increasingly integrated into our work and daily lives, we will need to develop processes to handle its output in a scalable and ethical way. Partnership and collaboration are key to achieving this, especially with a technology that affects so many of us simultaneously.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in today's tech industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: