OpenAI is on the lookout for a Head of Preparedness, essentially someone tasked with anticipating all the ways AI could fail. In a post on X, Sam Altman highlighted the need for the role, pointing out that rapid advancements in AI models come with “some real challenges.” He specifically mentioned concerns about their impact on mental health, along with the risks of AI-driven cybersecurity threats.
The job listing details that the person will be responsible for: “Tracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.”
Altman noted that this individual will also execute the company’s “preparedness framework,” work on securing AI models ahead of the introduction of “biological capabilities,” and establish guidelines for self-improving systems. He remarked that it will be a “stressful job,” which is probably an understatement.