Wednesday, June 14, 2023
Prime Minister Rishi Sunak opened London Tech Week by revealing that OpenAI, Google DeepMind, and Anthropic have agreed to give the UK “early or priority access” to their AI models to support research into evaluation and safety. This follows the UK government’s announcement last week that it intends to host a “global” AI safety summit this fall.
Sunak’s interest in AI safety has grown markedly in recent weeks, following several interventions from AI industry leaders warning of the existential, even extinction-level, risks the technology could pose if it is not properly regulated.
The PM also restated his earlier announcement of the upcoming AI safety summit, comparing the initiative to the COP climate conferences, which seek global buy-in on combating climate change.
The government was in full AI cheerleader mode as recently as March, when it endorsed “a pro-innovation approach to AI regulation” in a white paper. The strategy outlined in the paper minimized safety concerns, forgoing dedicated rules for artificial intelligence (or an AI watchdog) in favor of a few “flexible principles.” The government also suggested that existing regulators, such as the data protection authority and the competition watchdog, should oversee AI applications.
Just a few months later, Sunak is now expressing a desire for the UK to serve as the home of a global AI safety body, or at the very least for the UK to lead research on how to evaluate the outputs of learning algorithms, if it wants to own the discourse around AI safety.
Rapid advancements in generative AI and public statements from several tech titans and leaders in the AI business warning the technology could spiral out of control seem to have prompted Downing Street to quickly reevaluate its policy.
It’s also noteworthy that Sunak had recently received personal appeals from AI powerhouses, with meetings between the PM and the CEOs of OpenAI, DeepMind, and Anthropic occurring just before the government mood music on AI changed.
However, there is a chance that the UK will leave itself open to an industry takeover of its fledgling AI safety initiatives. And if AI powerhouses get to control the dialogue surrounding AI safety research by granting selective access to their systems, they may be in a good position to influence any future UK AI regulations that would affect their companies.
Before any legally binding AI safety framework applies to them, major AI companies are becoming actively involved in publicly funded research into the safety of their own commercial technologies. This suggests they will have at least some control over how AI safety is framed and which aspects, topics, and themes get prioritized (and which, therefore, get downplayed). They may also shape the research itself, since the studies that can be conducted may depend on how much access the companies grant.
Academic research is already frequently dependent on private-sector funding. So for the AI summit and broader AI safety efforts to produce robust and credible results, the UK government must ensure the participation of independent researchers, civil society organizations, and groups disproportionately at risk of harm from automation, rather than just touting its planned partnership between “brilliant AI companies” and local academics.
[Source of Information: Techcrunch.com]