The UK has set out principles to prevent the market for artificial intelligence (AI) models from being dominated by a handful of tech companies to the detriment of consumers and businesses. By emphasizing accountability and transparency, the country’s competition regulator, the Competition and Markets Authority (CMA), is, like other authorities worldwide, trying to rein in some of the potential harms of AI without stifling innovation.
In a white paper published in March, the British government asked several different regulators to weigh in on how to govern the use of AI while avoiding heavy-handed legislation that could hold back the technology. The CMA’s proposed principles, which it outlined on Monday, aim to establish “a broad program of engagement to help ensure the development and use of foundation models like OpenAI’s ChatGPT and Meta’s Llama 2 evolve in a way that promotes competition and protects consumers.”
One of the key themes of the CMA’s principles is promoting transparency by encouraging collaboration between the companies developing these AI systems and their customers. The regulator also aims to stop Big Tech firms from restricting access to essential features of their platforms or engaging in unfair practices such as bundling, which it says could undermine consumer trust.
Due to take effect in October, the principles will apply to a wide range of foundation models. These include large language models such as OpenAI’s GPT-4 and Meta’s Llama 2, which serve as core building blocks of generative AI applications, as well as the deep-learning techniques that underpin many modern applications, from facial recognition software to self-driving cars.
Despite the UK’s best efforts to establish an adaptable framework, future-proofing regulation of fast-moving technologies such as AI will remain challenging. The CMA’s approach will likely be monitored and evaluated, with adjustments made as necessary. That is sensible, although a constantly shifting regulatory landscape risks creating uncertainty and undermining confidence in new technology.
The principles will also apply across all of the United Kingdom’s nations, including the devolved jurisdictions. This is a significant step, given that the decentralized approach to governance across the four nations could pose coordination challenges. Regular ministerial discussions between the UK’s central and devolved governments should be a priority, as should a mechanism for coordinating the work of different local and national regulators.
The CMA’s proposed principles will be a significant test of whether the UK can position itself at the forefront of global AI regulation. Prime Minister Rishi Sunak will host a global AI safety summit in November at Bletchley Park, the country’s famous wartime code-breaking center, and British police forces are pushing ahead with the deployment of facial recognition software. The government is betting on AI’s potential to boost productivity and simplify millions of daily chores, but that bright future will be secured only if it gets its regulatory house in order.