Top tech executives from the United States, including the CEOs of Google, Microsoft, OpenAI, and Anthropic, attended a meeting at the White House to discuss the development and regulation of artificial intelligence (AI).

Vice President Kamala Harris extended the invitation, emphasizing the “moral” responsibility of these companies to ensure AI does not negatively impact society. President Joe Biden, who briefly joined the meeting, acknowledged the “enormous potential and enormous danger” of AI and expressed his hope that the tech leaders would help educate the administration on ways to protect society while advancing AI technology.

Following the meeting, Harris released a statement underscoring the need for tech companies to comply with existing laws to protect Americans and ensure the safety and security of their products. According to the White House, the meeting involved a “frank and constructive discussion” on the necessity for increased transparency between tech firms and the government regarding AI technology. The conversation also addressed the importance of ensuring the safety of AI products and protecting them from malicious attacks. OpenAI CEO Sam Altman commented that the participants were “surprisingly on the same page on what needs to happen.”

In response to the rapid advancements in AI, the Biden administration announced a $140 million investment in seven new AI research institutes, the creation of an independent committee to conduct public assessments of existing AI systems, and plans to develop guidelines for AI use by the federal government. The swift progress in AI has generated both excitement and concerns about potential social harm and loss of control over the technology.

AI has already been implicated in controversies surrounding fake news, non-consensual pornography, and a case involving a Belgian man who took his own life after being encouraged by an AI-powered chatbot. Last year, a Stanford University survey revealed that over one-third of natural language processing experts believed AI could result in a “nuclear-level catastrophe.”

The meeting comes after Apple co-founder Steve Wozniak and Tesla CEO Elon Musk joined 1,300 signatories of an open letter calling for a six-month pause on the training of advanced AI systems until developers can be confident that their effects will be positive and their risks manageable.
