The report warns that the UK’s approach to AI safety lacks credibility

Image credits: Ian Fogler/Getty Images

In recent weeks, the UK government has been trying to cement an image of itself as an international mover and shaker in the emerging field of AI safety – dropping a flashy announcement of an upcoming summit on the topic last month, along with a pledge to spend £100m on a foundation model taskforce it says will do “cutting-edge” AI safety research.

However, the government itself – led by Prime Minister Rishi Sunak – has sidestepped the need to pass new domestic legislation to regulate applications of AI, a stance its own policy paper on the subject brands as ‘pro-innovation’.

It is also in the midst of passing a deregulatory overhaul of the national data protection framework that risks working against AI safety.

The latter is one of many conclusions reached by the Ada Lovelace Institute, an independent research body that is part of the Nuffield Foundation charitable trust, in a new report examining the UK’s approach to regulating artificial intelligence – one that makes for diplomatic but, at times, deeply awkward reading for ministers.

The report contains a full 18 recommendations for raising the credibility of government policy in this area – that is, if the UK wants to be taken seriously on the topic.

The institute advocates an “expansive” definition of AI safety – one “reflecting the variety of harms that arise as AI systems become more capable and embedded in society.” So the report is concerned with how to regulate the harms “AI systems can cause today”. Call it real-world AI harm. (Not the theoretical, sci-fi-inspired future risks that some notable figures in the tech industry have been hyping recently, apparently in a bid to attention-hack policymakers.)

For now, it’s fair to say that the Sunak government’s approach to regulating (real-world) AI safety has been contradictory – heavy on flashy, industry-led PR claiming it wants to champion safety but light on policy proposals to set substantive rules for protection against the range of risks and harms we know can flow from ill-judged applications of automation.

Here’s the Ada Lovelace Institute dropping the basic truth bomb:

The UK government has set out its ambition to make the UK an “AI superpower”, leveraging the development and proliferation of AI technologies to benefit UK society and the economy, and to host a global summit in autumn 2023. This ambition will only materialise with effective domestic regulation, which will provide the platform for the UK’s future AI economy.

The report’s list of recommendations goes on to show that the institute sees significant room for improvement in the UK’s current approach to AI.

Earlier this year, the government published its preferred approach to regulating AI domestically – saying it saw no need for new legislation or oversight bodies at this point. Instead, the white paper offered a set of flexible principles that the government suggested existing sector-specific (and/or cross-cutting) regulators should “interpret and apply to AI within their remits” – only without any new legal powers or additional funding to oversee novel uses of AI.

The five principles set out in the white paper are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Which all sounds fine on paper – but paper alone clearly isn’t going to cut it when it comes to regulating AI safety.

The UK’s plan to leave it to incumbent regulators to figure out what to do about AI, armed only with some overarching principles to aim for and no new resources, contrasts with that of the EU, where lawmakers are busy hammering out agreement on a risk-based framework the bloc’s executive proposed back in 2021.

The UK’s shoestring approach of charging existing, overstretched regulators with new responsibilities for monitoring AI developments on their patch, without any powers to sanction bad actors, doesn’t sound very credible on AI safety, to put it mildly.

It doesn’t even look like a coherent strategy if you’re striving to be pro-innovation, either – because it leaves AI developers having to reckon with a patchwork of sector-specific and cross-cutting legislation drafted long before the latest AI boom. Developers may also find themselves subject to scrutiny by a number of different regulatory bodies (however feeble their attention may be, given the lack of resources and legal firepower to enforce the aforementioned principles). So, really, it sounds like a recipe for uncertainty over which existing rules may apply to AI applications. (And, most likely, a patchwork of regulatory interpretations, depending on the sector, use case, oversight bodies involved and so on. Ergo, confusion and cost, not clarity.)

Even if existing UK regulators quickly produce guidance on how they will approach AI – as some already have or are working on – there will still be plenty of gaps, as the Ada Lovelace Institute’s report also points out, since coverage gaps are a feature of the UK’s existing regulatory landscape. So the proposal to further extend this approach implies regulatory inconsistency being baked in, and even amplified, as the use of AI scales up across all sectors.

Here is the institute again:

Large swathes of the UK economy are currently unregulated or only partially regulated. It is unclear who would be responsible for implementing AI principles in these contexts, which include: sensitive practices such as recruitment and hiring, which are not comprehensively monitored by regulators, even within regulated sectors; public-sector services such as education and policing, which are monitored and enforced by an uneven network of regulators; activities carried out by central government departments, which are often not directly regulated, such as benefits administration or tax fraud detection; and unregulated parts of the private sector, such as retail.

“AI is being deployed and used in every sector, but the UK’s diffuse legal and regulatory network for AI currently has significant gaps. Clearer rights and new institutions are needed to ensure that safeguards extend across the economy,” it suggests.

Another looming contradiction for the government’s claimed “AI leadership” positioning is that its bid to make the country a global hub for AI safety is being directly undermined by in-train efforts to water down domestic protections for people’s data – such as shrinking the protections that apply when people are subject to automated decisions with a significant and/or legal effect – via the deregulatory Data Protection and Digital Information (No. 2) Bill.

While the government has so far sidestepped the most eye-catching Brexiteer proposals for tearing up the EU-derived data protection rulebook – such as deleting Article 22 (which deals with protections for automated decisions) from the UK’s General Data Protection Regulation entirely – it is nonetheless pressing ahead with a plan to reduce the level of protection citizens enjoy under existing data protection law in various ways, despite its newfound ambition to make the UK a global centre for AI safety.

“The UK GDPR – the legal framework for data protection currently in force in the UK – provides protections that are necessary to protect individuals and communities from potential AI harms. The Data Protection and Digital Information (No. 2) Bill, introduced in its current form in March 2023, significantly amends these protections,” the institute warns, pointing as an example to the bill removing prohibitions on many types of automated decisions – and instead requiring data controllers to have “safeguards in place, such as measures to enable an individual to contest the decision” – which it argues amounts to a lower level of protection in practice.

“The reliance of the government’s proposed framework on existing legislation and regulators makes it all the more important that underlying regulation such as data protection governs AI appropriately,” it continues. “Legal advice commissioned by the Ada Lovelace Institute . . . suggests that existing automated processing protections may not, in practice, provide sufficient protection for people interacting with everyday services, such as applying for a loan.”

Taken collectively, the bill’s changes risk undermining the government’s regulatory proposals on AI, the report adds.

Thus, the institute’s first recommendation is for the government to rethink elements of the data protection reform bill that “potentially undermine the safe development, deployment and use of AI, such as changes to the accountability framework”. It also recommends that the government broaden its review to look at existing rights and protections in UK law – with the aim of plugging any other legislative gaps and introducing new rights and protections, where necessary, for people affected by AI-driven decisions.

Other recommendations in the report include introducing a statutory duty for regulators to have regard to the above principles, including “strict transparency and accountability obligations”, and giving them more funding/resources to tackle AI-related harms; exploring the introduction of a common set of powers for regulators, including an ex ante, developer-focused regulatory capability; and considering whether an AI ombudsman should be established to support people adversely affected by AI.

The institute also recommends that the government clarify the law related to artificial intelligence and liability – another area in which the EU is already making progress.

On foundation model safety – an area that has garnered particular interest and attention from the UK government recently, thanks to the buzz around generative AI tools such as OpenAI’s ChatGPT – the institute also believes the government needs to go further, recommending that UK-based developers of foundation models be given mandatory reporting requirements to make it easier for regulators to stay on top of an extremely fast-moving technology.

It even suggests that leading foundation model developers, such as OpenAI, Google DeepMind and Anthropic, should be required to notify the government when they (or any subprocessors they work with) begin large-scale training runs of new models.

“This would provide the government with early warning about developments in AI capabilities, allowing policymakers and regulators to prepare for the impact of these developments, rather than being caught unawares,” the report suggests, adding that reporting requirements should also include information such as access to the data used to train models, the results of internal audits, and supply chain data.

Another suggestion is for the government to invest in small pilot projects to enhance its understanding of AI research and development trends.

Commenting on the report’s findings in a statement, Michael Birtwistle, associate director at the Ada Lovelace Institute, said:

The Government rightly recognizes that the UK has a unique opportunity to be a world leader in regulating AI, and the Prime Minister should be commended for his global leadership on this issue. However, the UK’s credibility with regard to AI regulation depends on the government’s ability to deliver a world-leading regulatory regime at home. We welcome the efforts made for international coordination, but they are not enough. The government must strengthen its domestic proposals for regulation if it is to be taken seriously about AI and achieve its global ambitions.


