The UK’s Competition and Markets Authority (CMA) aims to maintain an open and fair playing field in the Artificial Intelligence (AI) sector by centering its approach on seven core principles. Designed to ensure robust competition, consumer protection, and a proactive stance towards potential issues, the principles are intended to guide the regulator through a rapidly expanding AI market.

Conversations with more than 70 stakeholders, ranging from AI developers to industry groups and academics, have informed the CMA’s position. This broad cross-section of expertise gave the organization the perspectives needed to draft a blueprint for overseeing companies that create foundation models (FMs). FMs underpin generative AI services such as OpenAI’s ChatGPT and Google Bard, serving as key components in an increasingly AI-driven market.

The guiding principle at the top of the CMA’s list is accountability: AI developers and FM deployers should bear responsibility for the outputs their technology produces. This principle aims to ensure that if an AI application disseminates harmful or misleading content to the public, regulators have the power to intervene.

Through five further principles of access, diversity, choice, flexibility, and fair dealing, the CMA seeks to guarantee that consumers and businesses have autonomy over which FM they choose and the freedom to switch between providers as needed. Steps will be taken against anti-competitive practices, such as tying clients into prolonged contracts with onerous terms or engaging in anti-competitive bundling.

The final principle, transparency, underscores the importance of making the risks and limitations of FM-generated content known to consumers and businesses. It remains unclear, however, whether AI developers or regulators should bear the responsibility of imparting this vital information.

According to CMA chief executive Sarah Cardell, “the CMA’s role is to help shape these markets in ways that foster strong competition and effective consumer protection, delivering the best outcomes for people and businesses across the UK”. The aim is to keep abreast of developing markets and preemptively identify potential issues, rather than reacting after they surface.

These guidelines align with a UK government white paper on AI regulation published earlier this year. Rather than allocating responsibility for AI governance to a single new regulator, existing authorities such as the CMA, the Health and Safety Executive, and the Equality and Human Rights Commission have been asked to devise their own tailored strategies.

On the road ahead, the CMA plans to engage with tech giants including Google, Meta, OpenAI, Microsoft, Nvidia, and Anthropic to gauge the reception of its guiding principles, with an update on its regulatory approach anticipated in early 2024. As Cardell notes, the CMA is prepared to step in when necessary, underscoring the need for a collaborative approach to realizing the full potential of this new and evolving technology.