
Leading computer scientists have voiced stark warnings about the potential risks of artificial intelligence (AI), equating them with other large-scale societal threats such as pandemics and nuclear war.

On Tuesday, the San Francisco-based research and advocacy group, the Center for AI Safety, released a statement from a group of high-profile AI experts emphasizing the need to mitigate the “risk of extinction” from AI.

Among the signatories of this declaration were top technology figures, including Sam Altman, CEO of OpenAI, Geoffrey Hinton, often referred to as the “godfather of AI,” and Taiwan’s Digital Minister Audrey Tang. Other notable figures supporting this stance included neuroscientist Sam Harris and musician Grimes. Their warnings come on the heels of an open letter signed by Elon Musk and other prominent individuals in March, which urged a six-month hiatus on the development of AI systems more advanced than OpenAI’s GPT-4.

The rapid acceleration of AI development has sparked concerns about its potential negative impact on society, including mass job losses, copyright infringement, the spread of misinformation, and political instability. Some experts fear that control over the technology could eventually slip out of human hands.

The specter of Artificial General Intelligence (AGI), an AI capable of independent decision-making, adds further concern. While current AI systems have not yet reached AGI, Microsoft researchers noted in March that GPT-4 showed “sparks of AGI,” demonstrating capabilities across a diverse range of tasks spanning mathematics, coding, vision, medicine, law, and psychology. Since that announcement, warnings about AI’s potential dangers have become more pronounced.

There has been a push for swift legislation and regulation of AI technology. Last month, Geoffrey Hinton left his role at Google to focus on speaking publicly about the risks of AI. In an appearance before the United States Congress, Sam Altman urged legislators to develop regulations promptly and suggested a licensing-based approach.

By year-end, the European Union plans to pass legislation categorizing AI into four risk-based groups. Meanwhile, China has already taken steps to regulate AI, enacting laws on deepfakes, requiring companies to register their algorithms with regulators, and proposing rules on politically sensitive content and approval processes for generative AI technology.

Despite the concern, supporters of AI argue that it holds immense promise and has the potential to revolutionize many aspects of human life.
