Leading artificial intelligence developers, including Amazon, Google, Meta, Microsoft, and OpenAI, will present new protections for the rapidly advancing technology on Friday at the White House.
Watermarks for AI-generated content, to make it easier to recognize, and third-party testing of the technology, aimed at discovering potentially dangerous vulnerabilities, are among the safeguards negotiated by the Biden administration.
The White House announced on Friday that it had obtained voluntary agreements from seven American businesses to ensure the safety of their AI technologies before deployment. Around 1:30 pm EST, Joe Biden is scheduled to meet with the executives and present a set of initiatives.
The Announcement Comes Amid Criticism of AI:

AI’s rapid development could allow genuine harm to occur before legislation can keep up. Although voluntary promises are not legally enforceable, they can serve as a temporary solution while more extensive action is established.
A surge in commercial investment in generative AI tools that can produce convincingly human-like text, new images, and other media has fascinated the public, but it has also raised concerns about the dangers these tools pose, including the ability to deceive people and spread misinformation.
Steps to Safeguard AI:
(Image Source: forbes.com)
As per the latest reports and The Guardian’s assessment, the eight key commitments to keep AI development safe are:
- Using watermarking on audio and visual content to help identify content generated by AI.
- Allowing independent experts to try to push models into bad behavior – a process known as “red-teaming”.
- Sharing trust and safety information with the government and other companies.
- Investing in cybersecurity measures.
- Encouraging third parties to uncover security vulnerabilities.
- Reporting societal risks such as inappropriate uses and bias.
- Prioritizing research on AI’s societal risks.
- Using the most cutting-edge AI systems, known as frontier models, to solve society’s greatest problems.
The voluntary agreements are intended as a quick way to manage risks ahead of a longer-term campaign to persuade Congress to establish rules governing the technology.
Biden’s action, according to some supporters of AI legislation, is a beginning, but more must be done to hold businesses and their products accountable.
Charles Schumer, the majority leader in the Senate, has declared he will submit legislation to control AI. Through a series of meetings with government representatives, he has briefed senators on a topic that has generated attention from both sides of the aisle.
Many technology leaders have advocated for regulation, and a number of them visited the White House in May to meet with Vice President Kamala Harris, Biden, and other officials.
However, some experts and upstart rivals are concerned that the proposed regulation could benefit well-funded first movers such as OpenAI, Google, and Microsoft, driving out smaller players due to the high cost of making their AI systems, known as large language models, comply with regulatory requirements.
The Biden administration’s efforts to establish regulations for high-risk AI systems were applauded on Friday by the software industry trade association BSA, which counts Microsoft as a member.
Many nations are considering how to govern AI, notably the legislators of the 27-member European Union, who are creating comprehensive AI regulations for the bloc.
The head of the UN also stated that he supported requests from certain nations to establish a new UN agency to support international efforts to regulate AI, drawing inspiration from organizations like the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
On Friday, the White House announced that it had already talked with several nations about the voluntary commitments.