OpenAI creates a position to think about the worst-case scenarios for artificial intelligence.

OpenAI has announced a new senior position, Head of Preparedness, reflecting growing global concern about the potential risks of developing advanced artificial intelligence systems. The role involves thinking ahead: systematically identifying the worst-case scenarios that could result from the use of artificial intelligence and working to prevent or mitigate them before they turn into real crises.

According to the company's official careers page, the position will be based in San Francisco, within the Safety Systems team, which is one of the most sensitive teams within OpenAI, given its direct responsibility for ensuring that the most powerful AI models are developed and deployed in a safe and responsible manner. This team is responsible for building rigorous evaluation tests, designing multi-level safety controls, and developing comprehensive safety frameworks that ensure models behave as intended when used in the real world.

OpenAI explains that over the past few years it has invested heavily in preparedness across successive generations of advanced models by creating baseline capability assessments, building threat models, and developing risk mitigation mechanisms in collaboration with multidisciplinary teams. As model capabilities continue to rise, the company affirms that strengthening preparedness will remain a strategic priority.

The Head of Preparedness will lead the technical and operational framework that OpenAI relies on to track advanced capabilities and prepare for new risks that could cause serious or widespread harm. The role holder will also coordinate capability assessments, threat models, and protective measures, ensuring an integrated and actionable safety ecosystem even as product development accelerates.

The role demands strong technical judgment and the ability to communicate and coordinate across research, engineering, and product teams, as well as to collaborate with policy and governance teams and external partners. CEO Sam Altman describes the position as one dedicated to thinking about all the ways AI could veer toward "very bad" outcomes, given real-world challenges including mental health and cybersecurity.