OpenAI, the company behind ChatGPT, has announced it will create a new safety and security committee to strengthen oversight of its AI development.
The new committee, which will make crucial safety and security recommendations to the company’s board, is led by Chief Executive Sam Altman and includes various company directors. The primary responsibility of this committee is to evaluate and improve OpenAI’s existing safety protocols over the next 90 days. At the end of this period, the committee will present its findings and recommendations to the board, which will then review and publicly share an update on the implemented changes.
This move comes in response to growing concerns about AI safety within the company. Jan Leike, a former safety researcher at OpenAI, resigned in May, criticizing the company for prioritizing product development over safety.
In addition to forming the safety panel, OpenAI revealed it has started training its next-generation AI model, which promises to bring significant advancements in capabilities. In a statement, OpenAI said, “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”