In the month that the OECD (Organisation for Economic Co-operation and Development) published a paper setting out the first clear definitions of AI incidents and hazards, it is useful to look at how governments around the world are approaching the challenges of this rapidly evolving technology.
Politicians face the challenge of putting regulatory frameworks in place to ensure the safety of AI research and use without stifling innovation. Some countries are calling for global agreements on its development and implementation, while others are putting their own guidelines in place, recognising that reaching global agreement can take a long time and risks leaving regulation lagging behind new developments.
So, is artificial intelligence really worthy of a regulatory approach tailored specifically to this technology? Some sceptics argue that both the dangers and the benefits visible so far have been overhyped. Others, however, have issued dire warnings about the potential of AI to marginalise groups of people, entrench economic disadvantage or cause mass unemployment. In 2023, more than 20,000 people, including high-profile CEOs, AI researchers and philosophers, even signed an open letter proposing a moratorium on AI development.
The fears expressed in this letter may seem overblown, and it is also important to remember that governments have tools at their disposal other than the blunt instrument of regulation. In many cases, the public sector can encourage the types of project it wants to see developed, and discourage those that could cause harm, by engaging with development teams in a more proactive manner.
Two main tools stand out:
- Grants and funding programmes, such as the AI Diagnostic Fund
- Procurement frameworks that encourage innovation within well-defined guardrails
It is important to remember that when we talk about AI, in most cases we are not talking about the type of generative AI products that many of us are used to seeing in our daily lives. Those products are built on large language models (LLMs) trained on huge quantities of existing creative content, enabling them to answer questions or generate images, videos or text based on their training data.
Instead, much development and research is taking place below this very visible tip of the AI iceberg, in business-to-business tools (such as recruitment tools and CRM platforms) or in laboratory settings, such as medical diagnostic aids.
The applications that result from this research are generally brought to market by private-sector companies, even where they are used by public bodies, which is why procurement frameworks are such a powerful influence. Examples of third-party AI tools in use across the UK public sector include the Outmatch AI-assisted recruitment tools used by HMRC and the Brainomix tools used by the NHS to assist in the diagnosis of strokes.
Government departments are, of course, also investigating further uses for the technology. As Jim Harra, First Permanent Secretary and Chief Executive of HMRC, explained in a recent interview: