2023 will be seen as the year artificial intelligence (AI) entered the mainstream, and it is just getting started. The global AI market is expected to grow to $2.6 trillion within a decade. Given how transformative AI can be in areas ranging from healthcare to food safety to the built environment and beyond, it is critical that we find a way to harness its power as a force for good.
Beyond the excitement around ChatGPT, there are serious questions about how we build trust in AI, especially generative AI, and what guardrails are needed. This is not a future challenge: according to BSI's recent Trust in AI survey, 38% of people already use AI in their daily work, and 62% expect to do so by 2030. As the uses of AI multiply, there will be many questions to answer. For technology leaders and those focused on digital transformation, these include: What does safe use of AI look like? How can we take everyone on this exciting journey and upskill those who need it? How can businesses be encouraged to innovate, and what should government do to enable this while maintaining a focus on safety?
Safe use of AI
Governments around the world are racing to answer those questions. From Australia's Responsible AI Network to China's draft regulation of citizen-facing AI services, to the EU AI Act and President Biden's recent Executive Order on AI, this global conversation is very much alive; its urgency contrasts sharply with the slow global political response to social media. Crucially, however, no country can dictate how another chooses to regulate, and there is no guarantee of consistency. Yet in our globally connected economy, organizations (and the technology they use) operate across borders. International collaboration to determine our AI future and catalyze innovation is key.
Some, including former Google CEO Eric Schmidt, have called for an IPCC-style body to govern AI, bringing together different groups to determine our future approach. This is in line with public opinion: BSI research found that three-fifths of people want international guidelines for the safe use of AI. There are many ways to do this. Bringing people together physically is key, as at the recent UK AI Safety Summit, and I hope the upcoming discussions in South Korea and France will continue that progress.
Another useful starting point is international standards, which are dynamic and built on consensus among countries and multiple stakeholders, including consumers, about what good practice looks like. For rapidly emerging technology, standards and certification can act as a common infrastructure, offering clear principles designed to ensure innovation is safe. Compliance with international standards can serve as a common thread, and it is already a key component of responses to similar cross-border issues, such as sustainable finance or cybersecurity, where long-established international standards are routinely used to mitigate risk. Such guidance is designed to ensure that what reaches the market is safe, builds trust, and helps organizations implement better technology solutions for everyone. The agility of standards, and the speed with which organizations can apply them, is critical given the pace of change in AI. The ultimate goal is to promote interoperability and give suppliers, users, and consumers confidence that AI-enabled products and systems meet international safety standards.
Global collaboration
Although achieving consensus is not easy, when it comes to AI we are not starting from scratch. The AI management system standard (ISO/IEC 42001), soon to be published and recognized in the UK government's National AI Strategy, builds on existing guidance. It is a risk-based standard designed to help organizations of all sizes protect themselves and their customers, addressing considerations such as non-transparent automated decision-making, the use of machine learning for system design, and continuous learning. Additionally, there are already many standards around trustworthiness, bias, and consumer inclusion that can be turned to immediately, and we are in the early stages of developing GAINS (Global AI Network of Standards). Ultimately, some of the big questions around AI lack a purely technological solution, but standards are helping to define the principles behind robustness, fairness, and transparency as the technology continues to evolve.
To see this approach in action, we can look at how global collaboration is helping to accelerate decarbonization. The ISO Net Zero Guidelines, released a year ago, were developed from a conversation among thousands of voices from more than 100 countries, including many that are often underrepresented. The guidelines, which have since been adopted by organizations such as General Motors to inform their strategies, were described by Nigel Topping, a senior advocate for UN climate action, as “a central reference text… to align global actors”.
AI has the potential to benefit society and accelerate progress towards a sustainable world, but trust is essential. We need global collaboration to balance the great opportunity it promises against its potential risks. By partnering across borders, we can create the right checks and balances to make AI a powerful force for good in all areas of life and society.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here.