Thursday, 27 July 2023, Bengaluru, India
Leaders in the artificial intelligence sector, OpenAI, Microsoft, Alphabet’s Google, and Anthropic, said on Wednesday that they are establishing a forum to oversee the development of large machine learning models.
The group’s primary focus will be on assuring the safe and ethical development of so-called “frontier AI models”—those that go beyond the capabilities of the most sophisticated existing models.
These are powerful foundation models whose capabilities could be destructive enough to pose serious risks to public safety.
The generative AI models that power chatbots like ChatGPT draw on vast amounts of data to produce responses in the form of text, poetry, and art.
Although there are many applications for these models, authorities such as the European Union and business titans like Sam Altman, CEO of OpenAI, have stated that proper safeguards are required to address the risk posed by AI.
The industry group, called the Frontier Model Forum, will collaborate with academics, businesses, and governments to develop best practices for deploying frontier AI models and to promote AI safety research.
An OpenAI spokesperson said, however, that the forum will not lobby governments. Brad Smith, the president of Microsoft, said that “companies developing AI technology have a responsibility to ensure that it is safe, secure, and remains under human control.”
In the coming months, the forum will establish an advisory board, secure funding through a working group, and set up an executive board to oversee its operations.
[Source of Information : indianexpress.com]