Issue Brief
March 2024

An Argument for Hybrid AI Incident Reporting
Lessons Learned from Other Incident Reporting Systems

Authors
Ren Bin Lee Dixon
Heather Frase

Executive Summary

AI incidents have been occurring with growing frequency since AI capabilities began advancing rapidly in the last decade. Despite the number of incidents that have emerged during the development and deployment of AI, there is not yet a concerted U.S. policy effort to monitor, document, and compile AI incidents and use the data to enhance our understanding of AI harm and inform AI safety policies in order to foster a robust AI safety ecosystem. In response to this critical gap, the objectives of this paper are to:

• Examine and assess existing AI incident reporting initiatives, both databases and government initiatives.
• Elicit lessons from incident reporting databases in other sectors.
• Provide recommendations based on our analysis.
• Propose a federated* and standardized hybrid reporting framework that consists of:
  • Mandatory reporting: Organizations must report certain incidents as directed by regulations, usually to a government agency.
  • Voluntary reporting: Individuals and groups are permitted and encouraged to report incidents, often with clear guidelines and policies, and usually to a government agency or professional group.
  • Citizen reporting: This is similar to voluntary reporting, but incidents are reported by the public, journalists, and organizations acting as watchdogs.

* For the purpose of this paper, we define a federated framework as a centralized framework prescribed by a singular authoritative government body or the federal government. The framework stipulates a set of minimum requirements that can be adapted and implemented across government agencies or non-governmental or