California releases its AI safety report

By aispaceworld


Last September, all eyes were on SB 1047 as it made its way to Governor Gavin Newsom’s desk, and died there.

SB 1047 would have required developers of large AI models, particularly those costing $100 million or more to train, to test them for specific dangers. AI industry whistleblowers were unhappy with the veto; large technology companies mostly were not. But the story didn’t end there. Newsom, who felt the legislation was too strict, asked a group of leading AI researchers to help chart an alternative path for AI development in California, one that accounts for its risks.

On Tuesday, that report was published.

The authors of the 52-page “California Report on Frontier AI Policy” say that AI capabilities, including models’ “reasoning” abilities, have rapidly improved since the veto, and that their research supports the case for AI regulation.

The report is a collaboration led by Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence; Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; and Jennifer Tour Chayes, dean of the UC Berkeley College of Computing, Data Science, and Society. They conclude that frontier AI breakthroughs in California could shape fields from education to finance and transportation. The authors agree it is important that the work remain free of conflicts of interest and that the rules not be written by the organizations they are meant to govern.

But reducing risk is still paramount, they wrote: “Without proper safeguards … powerful AI could induce severe and, in some cases, potentially irreversible harms.”

The group published a draft of the report in March for public comment. Since then, they wrote in the final version, evidence that these models can contribute to “chemical, biological, radiological, and nuclear (CBRN) weapons risks” has grown. Leading companies, they added, have self-reported spikes in their models’ capabilities in those areas.

The authors changed the draft report quite a bit. They now note that California’s new AI policy will need to navigate “geopolitical realities.” They added more context about the risks that large models pose, and they took a harder line on using training compute as the trigger for regulation, saying it is not the best way to gauge risk.

AI’s training needs are changing all the time, the authors wrote, and a compute-based definition ignores how these models are actually adopted and used. Compute can serve as an “initial filter to cheaply screen” models for potential risk, but it is no substitute for fuller risk assessments.

That’s especially important because the AI industry remains far from transparent in key areas essential to safety, the authors wrote.

The report calls for whistleblower protections and third-party safety evaluations

One of the report’s lead writers, Scott Singer, told The Verge that AI policy conversations have “completely shifted at the federal level” since the draft report. He argued, however, that California can help lead a “harmonization effort” among states toward policies that many people across the country support. That stands in contrast to the patchwork of conflicting laws that supporters of an AI moratorium claim state regulation would create.

In an op-ed earlier this month, Anthropic CEO Dario Amodei called for a federal transparency standard that would require leading AI companies to publicly disclose how they test their models and plan to mitigate risks.

But even steps like those aren’t enough, the authors of Tuesday’s report wrote, because developers alone are simply not adequate to fully understand the technology and, especially, its risks and harms.

That’s why one of the key recommendations of Tuesday’s report is third-party risk assessment.

The authors concluded that such assessments would push companies like Anthropic and Google to make their models safer while giving the public a more precise picture of the risks. Currently, leading AI companies typically do their own evaluations or hire second-party contractors to do so. But third-party evaluation is vital, the authors say.

Not only are there “thousands of individuals … willing to participate” in risk assessment work; outside evaluators also bring backgrounds and perspectives that are often very different from those of the developers, including those of the people most adversely affected by AI.

But letting third-party evaluators test the risks and blind spots of your powerful AI models means giving them access, and for meaningful assessments, a deep level of access. That’s something companies are hesitant to do.

It’s not easy for even second-party evaluators to get that level of access. Metr, a company OpenAI partners with for safety testing of its models, wrote in a blog post that it wasn’t given as much time to test OpenAI’s o3 model as it had been with previous models. Those limitations, Metr wrote, “prevent us from making robust capability assessments.” OpenAI later said it is exploring ways to share more information with companies like Metr.

Even API access or disclosure of a model’s weights may not let third-party evaluators test for risks effectively, and companies could use their terms of service to threaten independent researchers who uncover safety flaws.

Last March, more than 350 AI industry researchers and others signed an open letter calling for a “safe harbor” for independent AI safety testing, similar to the protections that exist for third-party cybersecurity testers in other fields. Tuesday’s report cites that letter and calls for big changes, as well as reporting channels for people harmed by AI systems.

“Even perfectly designed safety policies cannot prevent 100 percent of substantial, adverse outcomes,” the authors wrote. “As foundation models are widely adopted, understanding harms that arise in practice is increasingly important.”