
A recent report suggests that the companies developing artificial intelligence aren’t necessarily prioritizing human safety, casting doubt on whether they’re doing enough to protect us from the technology’s risks.
With artificial intelligence becoming more common in daily life, the potential dangers are already showing: people who relied on AI chatbots for emotional support have tragically taken their own lives, and criminals are using AI to launch cyberattacks. Looking ahead, the risks are even greater, from AI being used to create weapons to AI destabilizing governments.
As a movie lover, I keep thinking about all those AI-gone-wrong scenarios, and honestly, it doesn’t feel like the companies building this technology are focused enough on keeping us safe. A new report from the Future of Life Institute – a non-profit based in Silicon Valley – backs that impression up. The institute created an AI Safety Index to push the industry in a safer direction, because, let’s face it, we need to minimize the risk of AI becoming an existential threat.
According to Max Tegmark, the institute’s president and an MIT professor, these companies are unique in the U.S. in that they develop powerful technology without any government oversight. That lack of regulation creates a competitive environment in which prioritizing safety is discouraged.

The top grade awarded was a C+, shared by two San Francisco AI companies: OpenAI, the creator of ChatGPT, and Anthropic, the developer of the Claude chatbot. Google’s AI lab, Google DeepMind, received a C.
Several companies received low ratings, with Meta (Facebook’s parent company) and xAI both earning a D. Chinese companies Z.ai and DeepSeek also received a D, but Alibaba Cloud received the lowest grade of all – a D-.
I’ve been following how these companies handle AI safety, and the methodology behind the index is worth noting. Scores were based on 35 indicators grouped into six areas, including basic safety safeguards and how well companies identify and share information about potential risks. The researchers drew on information the companies have already made public and also asked them to complete a survey, and the scoring itself was done by a panel of eight AI experts from universities and the field. It’s a fairly thorough process, and I appreciate the transparency.
Companies in the index generally scored poorly on existential safety, which covers how well they monitor risks internally, the control measures they have in place, and whether they have a credible strategy for keeping advanced AI systems under control.
The new AI Safety Index report finds that even as companies race to develop advanced AI – including artificial general intelligence (AGI) and even superintelligence – none has shown a convincing strategy for preventing dangerous misuse or a loss of control of these powerful technologies.
Both Google DeepMind and OpenAI said they are invested in safety efforts.
In a statement, OpenAI said it prioritizes safety in everything it does with AI, dedicating significant resources to research on making AI safer and building robust safeguards directly into its systems. The company said it tests its models thoroughly, both internally and with outside experts, and openly shares its safety guidelines, test results and research to help raise safety standards across the industry, and that it is constantly working to strengthen its protections as the technology evolves.
Google DeepMind in a statement said it takes “a rigorous, science-led approach to AI safety.”
Google DeepMind said its Frontier Safety Framework describes how it identifies and mitigates severe risks from highly capable AI models before those risks become real problems. The company added that as its models grow more sophisticated, it continually strengthens its safety measures and the controls on those systems to keep pace with the models’ growing abilities.
The report found that companies such as xAI and Meta have risk management plans but haven’t demonstrated a strong commitment to actually monitoring and controlling potential dangers, nor have they invested significantly in safety research. It also found that DeepSeek, Z.ai and Alibaba Cloud haven’t made their plans for preventing extreme risks publicly available.
Meta, Z.ai, DeepSeek, Alibaba and Anthropic did not respond to requests for comment.
xAI stated that traditional news outlets are spreading falsehoods. A lawyer for Elon Musk didn’t respond to a request for further information.
Elon Musk has advised and previously donated to the Future of Life Institute, but he wasn’t part of the team that created the AI Safety Index, according to Tegmark.
Max Tegmark worries that without proper oversight, artificial intelligence could be used for dangerous purposes, such as assisting terrorists in creating biological weapons, increasing the effectiveness of manipulation, or even destabilizing governments.
Tegmark acknowledged there are serious issues and a concerning trend, but he stressed that the solution is straightforward: we need strong, enforceable safety standards for AI companies.
Government efforts to increase oversight of AI companies have faced opposition from tech industry lobbyists, who argue that stricter rules could hinder progress and drive businesses to other countries.
Even so, lawmakers have begun to act. California Gov. Gavin Newsom signed SB 53 into law in September, requiring AI companies to disclose how they protect user safety and security and to report incidents such as hacking attempts to the state. Experts such as Tegmark see the law as a positive step but agree that much more work is necessary.
Rob Enderle, an analyst with the Enderle Group, believes the AI Safety Index is a helpful step towards addressing the lack of AI regulation in the United States. However, he also acknowledges that there will be difficulties.
Enderle expressed concern that any regulation that does emerge from the U.S. government may be poorly designed and end up causing more problems than it solves. He also questioned whether there’s a clear plan to effectively enforce such rules and ensure that everyone follows them.
2025-12-05 14:04