OpenAI is hiring a “Head of Preparedness” as AI risks ramp up

OpenAI, the creator of ChatGPT, is looking for a “Head of Preparedness” to help protect against risks both from outside the company and those that could arise as artificial intelligence becomes more advanced.

Artificial intelligence is rapidly becoming a bigger part of our lives, and its capabilities are constantly improving as it processes vast amounts of information and interacts with people.

At the same time, the technology is proving increasingly problematic: it has already led to incidents such as wrongful arrests and is threatening jobs worldwide, causing significant concern.

OpenAI is looking for a “Head of Preparedness” to proactively address these potential risks. CEO Sam Altman describes the position as essential, especially now.

OpenAI looking to hire Head of Preparedness

With AI becoming increasingly sophisticated, it’s creating risks for everyone – even the people who are developing it. Sam Altman acknowledged this in a post on X (formerly Twitter) on December 29, 2025, stating that AI is “starting to present some real challenges.”

Altman highlighted how AI could affect mental health, referencing early indications seen in 2025 as a possible preview of things to come.

We previously covered a lawsuit against OpenAI alleging that its chatbot, ChatGPT, contributed to a man’s son’s death by suicide.

Announcing the new position, Altman said that as AI technology advances, the company needs to better understand and track potential misuse.

The Head of Preparedness will oversee a team first created in 2023, which focuses on identifying major potential dangers and working to ensure that systems capable of improving themselves remain safe. Given the stakes, it is a demanding position.

OpenAI is offering a significant compensation package of $550,000 plus company shares to hire someone who can help ensure AI is used safely and prepare for potential risks.

2025-12-30 08:18