We are currently seeing two divergent approaches to enterprise adoption of AI. Some organizations rush to implement solutions in an attempt to generate a quick return on investment, while others take a longer-term view, hoping to reap future rewards based on long-term research investments made now.
Regardless of where an organization is on its AI journey, there are common challenges to face, including skills shortages, energy usage, supply chain issues and budget constraints; meaningful AI implementations can start at £10 million. It is vital that any organization considering implementing AI invests in the right resources and technologies from the beginning to avoid painful and costly headaches down the road.
Clear investment trend in AI
According to figures published by Statista, companies invested an estimated $934.2 billion in artificial intelligence technologies between 2013 and 2022, a figure that continues to increase year after year. The advent of generative AI has further compounded AI spending over the past year, with major tech companies like Microsoft, Google and Amazon leading the way, outstripping investment from Silicon Valley venture capital firms, according to the Financial Times. Additionally, a recently released McKinsey report called 2023 a “defining year” for generative AI, with a third of respondents saying their organizations are regularly using the technology in at least one business function.
Despite the clear investment trend in AI, many organizations currently find the costs of implementing large-scale AI prohibitive. In addition to IT infrastructure and people-related costs, environmental impact and energy use also need to be considered. Costs might be a temporary limitation for some, but organizations will need a clear path to monetization and ROI from their AI projects to justify the expense, purchase the necessary infrastructure, and offset carbon emissions to meet regulatory compliance requirements.
CTO International, Pure Storage.
Lay the right foundation
Regardless of the challenges, the transformative benefits and value of successful AI projects are too great to ignore. Most industries are still in the early adopter phase of AI implementation, but adoption is only increasing as use cases are defined and we move beyond the conservative thinking that prevails within many organizations. In preparation for this shift, now is the time to start thinking about what is required to ensure there are solid foundations for an AI-based future.
To improve the prospects of a successful AI implementation, these are the key things organizations need to think about:
GPU Accessibility
Supply chains must be evaluated and factored into any AI project from the outset. Access to GPUs is vitally important; without them, the project will not succeed. Given the huge demand for GPUs and their consequent scarcity on the open market, some organizations planning to deploy AI may need to turn to hosting service providers to access the technology.
Data Center Space and Power Capacities
AI and its huge data sets create real challenges for data centers that are already stretched to their limits, particularly around power. Current AI deployments can require a power density of 40 to 50 kilowatts per rack, far beyond the capacity of many data centers. AI is changing the network and power requirements of data centers. Much higher fiber density is required, along with a larger, higher-speed network than traditional data center providers can support.
Energy- and space-efficient technologies will be crucial to getting your AI project off the ground successfully. Flash-based data storage technology can help mitigate this problem, as it consumes far less power and space than HDD storage and requires less cooling and maintenance than traditional hard drives. This matters because every watt allocated to storage reduces the number of GPUs that can be powered in the AI cluster.
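The trade-off between storage power draw and GPU count can be made concrete with a back-of-the-envelope calculation. All figures below are hypothetical illustrations, not vendor specifications; only the 40-50 kW rack density comes from the text above:

```python
# Illustrative rack power budget: every watt spent on storage is a watt
# unavailable for GPUs. All numeric assumptions below are hypothetical.

def gpus_supported(rack_kw: float, storage_kw: float, gpu_kw: float) -> int:
    """Number of GPUs a rack can power after storage takes its share."""
    remaining_kw = rack_kw - storage_kw
    return max(0, int(remaining_kw // gpu_kw))

RACK_KW = 50.0    # high-density AI rack (40-50 kW per rack, as above)
GPU_KW = 0.7      # assumed draw of a single accelerator

hdd_kw = 6.0      # assumed draw of an HDD-based storage shelf
flash_kw = 2.0    # assumed draw of an equivalent flash shelf

print(gpus_supported(RACK_KW, hdd_kw, GPU_KW))    # GPUs possible with HDD
print(gpus_supported(RACK_KW, flash_kw, GPU_KW))  # GPUs possible with flash
```

Under these assumed numbers, switching the storage shelf from HDD to flash frees enough of the rack budget to power several additional GPUs, which is the point the paragraph above is making.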
Data challenges
Unlike other data-driven projects, which can be more selective about where data is sourced and what is taken into account, AI projects use huge data sets to train AI models and extract insights from massive quantities of information to drive new innovations. This presents significant challenges in fully understanding AI models and how introducing new data into a model can change the results.
The question of repeatability is still being addressed, but a best practice to help understand data models and very large data sets is to introduce “checkpoints” that allow models to revert to a previous state, effectively going back in time and thus facilitating a better understanding of the implications of data and parameter changes. The ethical and provenance aspects of using Internet data in training models have not yet been sufficiently explored or addressed, nor has the impact of (trying to) remove selected data from an LLM or RAG vector dataset.
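The checkpoint idea described above can be sketched in a few lines. Real training frameworks provide their own checkpoint APIs; the minimal, framework-free sketch below (all class and method names are hypothetical) simply shows the principle of snapshotting model state so a run can be rolled back after a data or parameter change:

```python
import copy

class CheckpointedModel:
    """Toy model wrapper that snapshots its parameters so training can
    be rolled back to an earlier state. Hypothetical illustration only."""

    def __init__(self, params: dict):
        self.params = params
        self._checkpoints: list[dict] = []

    def checkpoint(self) -> int:
        """Snapshot the current parameters; returns the checkpoint index."""
        self._checkpoints.append(copy.deepcopy(self.params))
        return len(self._checkpoints) - 1

    def revert(self, index: int) -> None:
        """Restore parameters from a previously saved snapshot."""
        self.params = copy.deepcopy(self._checkpoints[index])

model = CheckpointedModel({"weights": [0.1, 0.2]})
ckpt = model.checkpoint()              # snapshot before a risky change
model.params["weights"] = [9.9, 9.9]   # e.g. new training data shifts weights
model.revert(ckpt)                     # roll back to compare behaviour
print(model.params["weights"])
```

The design point is that each snapshot is a deep copy, so later training cannot mutate earlier checkpoints, which is what makes the "going back in time" comparison reliable.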
Invest in people
Any organization embarking on the AI journey will encounter a skills shortage. There are simply not enough data scientists or other professionals with relevant skills available in the global workforce today to cope with demand and, as a result, those with the right skills are hard to come by and demand premium salaries. This is likely to remain a major problem for the next five to ten years. As a result, organizations will need to not only invest heavily in talent through hiring, but also invest in training their existing workforce to develop more AI skills internally.
Conclusion
As organizations mature in AI adoption, develop specific use cases, fine-tune infrastructure requirements, invest in skills, and chart a clear path to short- or long-term ROI, they may realize that some challenges are too difficult to overcome on their own. For many, collaboration will be necessary. This is where there is a real opportunity for cloud service providers, managed service providers and other specialists to offer services and infrastructure that will help organizations achieve their AI goals.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in today's tech industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.