On Thursday, weeks after launching its most capable AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on detail, experts say, making it difficult to determine how much risk the model might pose.
Technical reports provide useful, and at times unflattering, information that companies don't always advertise widely about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.
Google takes a different approach to safety reporting than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from its experimental stage. The company also does not include the findings from all of its "dangerous capability" evaluations in these write-ups; it reserves those for a separate audit.
Several experts were nonetheless disappointed by the sparsity of the Gemini 2.5 Pro report, noting that it does not mention Google's Frontier Safety Framework (FSF). Google introduced the FSF last year in what it described as an effort to identify future AI capabilities that could cause "severe harm."
"This [report] is very sparse, contains minimal information, and came out weeks after the model was already available to the public," said Thomas Woodside, co-founder of the Secure AI Project. Woodside pointed out that the last time Google published the results of dangerous capability testing was in June, for a model announced in February of the same year.
Not inspiring much more confidence, Google has not released a report for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told TechCrunch that a report for Flash is "coming soon."
"I hope this is a promise from Google to start publishing more frequent updates," Woodside told TechCrunch. "Those updates should include the results of evaluations for models that have not yet been publicly deployed, since those models could also pose serious risks."
Google may have been one of the first AI labs to propose standardized reports for its models, but it is not the only one that has been accused of falling short on transparency lately. Meta released a similarly skimpy safety evaluation of its new Llama models, and OpenAI opted not to publish any report for its GPT-4.1 series.
Hanging over Google's head are assurances the tech giant made to regulators to maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for every "significant" public AI model. The company followed that promise with similar commitments to other countries, pledging to provide "public transparency" around AI products.
Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a "race to the bottom" on AI safety. "Combined with reports that competing labs have cut the time they spend on safety testing before release to mere days, this meager documentation for Google's top AI model tells a troubling story of a race to the bottom on AI safety," he said.
Google has said in statements that, while the details do not appear in its technical reports, it conducts safety testing for its models ahead of release.