OpenAI and Anthropic researchers decry "reckless" safety culture at Elon Musk's xAI


By aispaceworld


AI safety researchers from OpenAI, Anthropic, and other organizations are speaking out against the safety culture at xAI, the billion-dollar AI startup owned by Elon Musk.

The criticism follows weeks of scandals at xAI that have overshadowed the company's technological progress.

Last week, the company's AI chatbot, Grok, spouted antisemitic comments and repeatedly called itself "MechaHitler." Shortly after taking the chatbot offline to address the problem, xAI launched Grok 4, its latest and increasingly capable frontier AI model.

Friendly rivalry among AI labs is normal, but these researchers say xAI has fallen short of industry norms for safety.

Boaz Barak, a computer science professor currently on leave from Harvard to work on safety research at OpenAI, wrote in a post on X: "I appreciate the scientists and engineers at @xai, but the way safety was handled is completely irresponsible."

Barak takes particular issue with xAI's decision not to publish system cards, the industry-standard reports that detail training methods and safety evaluations and share that information with the research community. As a result, Barak says, it is unclear what safety training was done on Grok 4.

OpenAI and Google themselves have spotty records when it comes to promptly sharing system cards for new AI model launches. OpenAI decided not to publish a system card for GPT-4.1, claiming it is not a frontier model. Google, meanwhile, waited months after launching Gemini 2.5 Pro to publish a safety report. Still, these companies have historically released safety reports for every frontier AI model before it enters full production.


Barak also noted that Grok's AI companions "take the worst issues we currently have for emotional dependencies and try to amplify them." In recent years, there have been countless stories of unstable people developing troubling relationships with chatbots, and of how an AI's overly agreeable responses can push them over the edge of sanity.

Samuel Marks, an AI safety researcher at Anthropic, also called xAI's decision not to publish a safety report "reckless."

"Anthropic, OpenAI, and Google's release practices have issues," Marks wrote in a post on X. "But they at least do something, anything to assess safety pre-deployment and document findings. xAI does not."

The reality is that we don't know what xAI did to test Grok 4. In a widely shared online post, an anonymous researcher claims that Grok 4 has no meaningful safety guardrails, based on their testing.

Whether that is true or not, the world seems to be discovering Grok's shortcomings in real time. Several of xAI's safety issues have gone viral, and the company claims to have addressed them with tweaks to Grok's system prompt.

OpenAI, Anthropic, and xAI did not respond to TechCrunch's request for comment.

Dan Hendrycks, a safety adviser for xAI and director of the Center for AI Safety, posted on X that the company did "dangerous capability evaluations" on Grok 4. However, the results of those evaluations have not been shared publicly.

"The public deserves to know how AI companies are handling the risks of the very powerful systems they say they're building," said Steven Adler, an independent AI researcher who previously led safety work at OpenAI.

What makes xAI's questionable safety practices notable is that Musk has long been one of the AI safety industry's most prominent advocates. The billionaire has repeatedly warned about the potential for advanced AI systems to cause catastrophic outcomes for humanity, and he has praised an open approach to developing AI models.

And yet, AI researchers at competing labs claim xAI is veering from industry norms. In doing so, Musk's startup may inadvertently be making a strong case for state and federal lawmakers to set rules around publishing AI safety reports.

There are several attempts at the state level to do just that. California state Sen. Scott Wiener is pushing a bill that would require leading AI labs to publish safety reports, while New York Gov. Kathy Hochul is considering a similar bill. Advocates of these bills note that most AI labs publish this type of information anyway; evidently, though, not all of them do so consistently.

AI models today have yet to exhibit real-world scenarios in which they cause truly catastrophic harm, such as billions of dollars in damages. However, many AI researchers say this could become a problem in the near future given the rapid progress of frontier AI models, and the billions Silicon Valley is investing to improve AI further.

But even for skeptics of such catastrophic scenarios, there is a strong case that Grok's misbehavior makes the products it powers today significantly worse.

Grok spread antisemitism around X this week, just a few weeks after the chatbot raised other troubling claims in conversations with users. Musk has indicated that Grok will be more deeply integrated into Tesla vehicles, and xAI is trying to sell its AI models to the Pentagon and other enterprises. It is hard to imagine that people driving Musk's cars, federal workers protecting the U.S., or enterprise employees automating tasks will be any more receptive to these misbehaviors than users on X.

Some researchers argue that AI safety testing not only guards against the worst outcomes, but also protects against near-term behavioral problems.

At the very least, Grok's incidents tend to overshadow xAI's rapid progress toward frontier AI models that rival the technology of OpenAI and Google, just a couple of years after the startup was founded.