AI Safety Rankings: Which Companies Are Safe, and Which Pose Risks?
As the race to develop smarter and more powerful AI continues, it's becoming clear that some companies are focusing more on innovation than on safety. A recent report from the Future of Life Institute, a nonprofit dedicated to reducing global risks, sheds light on how major tech players like OpenAI, Google DeepMind, and Meta are tackling (or failing to tackle) AI safety. The findings paint a concerning picture: while some companies are taking steps to improve safety, others are lagging dangerously behind.
The Future of Life Institute gathered a panel of seven independent experts—including Turing Award winner Yoshua Bengio—to evaluate AI companies across six key areas: risk assessment, current harms, safety frameworks, existential safety strategy, governance, and transparency. The results were eye-opening.
The Good, the Bad, and the Ugly
Among the most talked-about companies in the AI space, there were some surprises. OpenAI, the creator of ChatGPT, earned a D+ grade, as did Google DeepMind. Both companies have been under scrutiny for prioritizing flashy products over safety, a concern raised by former OpenAI team members among others. Meanwhile, Meta, maker of the Llama AI models, scored the lowest, receiving an overall F grade. Its approach to safety is being questioned, particularly because its models are so easily exploited.
Elon Musk's xAI didn't fare much better. It received a D- grade, a sign that while the company is trying to make waves in the AI space, its safety measures still have a long way to go.
On a more positive note, Anthropic, the company behind Claude and one that puts safety at the core of its ethos, scored the highest among its competitors, but still only managed a C grade. Even the most safety-conscious companies, it seems, have a long road ahead to ensure that AI remains safe as it becomes more advanced.
AI Safety: A Matter of Life and Death
The report doesn't focus only on the safety of current AI models; it also weighs their potential future risks. All of the major AI companies' models were found to be vulnerable, particularly to "jailbreaks," techniques that override a system's guardrails. This is a serious concern today, and one that will only grow as AI models become more capable.
Stuart Russell, a professor at the University of California, Berkeley, and one of the report's panelists, pointed out that while many companies claim to be addressing safety, their efforts are not yet very effective. Current safety strategies are simply not robust enough to deal with the dangers that could be posed by future AI systems rivaling human intelligence.
The Importance of Accountability
One key takeaway from the report is that companies are not being held accountable enough for their AI safety practices. Tegan Maharaj, assistant professor at HEC Montréal, emphasized the need for independent oversight rather than relying solely on in-house evaluations from the companies themselves. Without this, there’s a real risk of safety being sacrificed for speed and profit.
While some companies are failing at even the basics of safety, there is low-hanging fruit: the Chinese developer Zhipu AI, along with Meta and xAI, could improve their scores simply by adopting existing safety guidelines. Some risks, however, are fundamental and will require major breakthroughs in AI technology to address.
What’s Next for AI Safety?
As AI models continue to grow in complexity, ensuring their safety becomes even harder. Russell notes that the current black-box approach, training AI on massive datasets, offers no quantitative guarantees of safety, making the future all the more unpredictable. Researchers are working on techniques to peer inside these black-box systems (one basic building block is sketched below), but it's clear that much more is needed to make AI systems safe, transparent, and accountable.
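To give a rough sense of what "peering inside" a model can mean in practice, here is a minimal, self-contained sketch of one elementary tool from interpretability research: a PyTorch forward hook that records a network's hidden activations so they can be inspected. The toy model and layer choice are invented for illustration and are not drawn from the report or from any company's systems.

```python
# Minimal sketch: capturing hidden activations with a forward hook.
# The tiny model below is a stand-in for a real network; interpretability
# researchers attach similar hooks inside large language models to study
# what intermediate layers compute.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),  # hidden layer whose output we want to inspect
    nn.ReLU(),
    nn.Linear(32, 4),
)

captured = {}  # maps a label to the recorded activation tensor

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Register the hook on the hidden layer; it fires on every forward pass.
model[1].register_forward_hook(save_activation("hidden"))

x = torch.randn(1, 16)
_ = model(x)
print(captured["hidden"].shape)  # torch.Size([1, 32])
```

Real interpretability work goes far beyond this, but recording and analyzing internal activations in this way is the starting point for much of it.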
Yoshua Bengio argues that initiatives like the AI Safety Index are crucial for holding companies accountable and encouraging better practices. As AI becomes more powerful, safety must become a top priority to prevent catastrophic risks.


