OpenAI CEO Sam Altman has told lawmakers that government regulation of powerful artificial intelligence technology is needed to prevent potential harm.
On Tuesday, hundreds of business executives and public figures raised the alarm over what they see as the risk of human extinction brought on by artificial intelligence.
Among the roughly 350 signatories of the public statement are Altman, whose company is behind the well-known chatbot ChatGPT, and Demis Hassabis, CEO of Google DeepMind, the tech giant's AI division.
The one-sentence statement, issued by the San Francisco-based nonprofit Center for AI Safety, reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
A range of other figures, including musician Grimes, environmentalist Bill McKibben, and neuroscientist and author Sam Harris, have also endorsed the statement.
Recent months have seen growing concern over the dangers posed by AI, along with calls for strict regulation of the technology, in response to significant developments like ChatGPT.
Testifying before a Senate panel two weeks ago, Altman cautioned legislators: "If this technology goes wrong, it can go quite wrong."
He went on to advocate licensing or safety standards as a requirement for operating AI models. "We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models," he said.
ChatGPT, like other AI-enabled chatbots, can instantly respond to user prompts on a wide range of topics, whether producing an essay on Shakespeare or a list of travel suggestions for a specific destination.
In March, Microsoft released a version of Bing that incorporates responses from GPT-4, the latest model behind ChatGPT. Rival Google unveiled its Bard chatbot in February.
The surge of AI-generated content has raised concerns about the potential spread of false information, hate speech, and manipulative responses.
In March, an open letter signed by hundreds of tech luminaries, including billionaire entrepreneur Elon Musk and Apple co-founder Steve Wozniak, demanded a six-month pause in the development of advanced AI systems and a significant increase in government oversight.
"AI systems with human-competitive intelligence can pose profound risks to society and humanity," the letter stated.
Musk sounded the alarm again last month, telling Fox News anchor Tucker Carlson that "there is definitely a path to AI dystopia, which is to train AI to be deceptive."
Other prominent AI figures, including Microsoft Chief Technology Officer Kevin Scott and OpenAI Head of Policy Research Miles Brundage, also signed Tuesday's statement.
On its website, the Center for AI Safety addressed the 22-word statement's brevity, noting that "it can be difficult to voice concerns about some of advanced AI's most severe risks," and that "the brief statement below seeks to overcome this barrier and open up discussion."