Ignore Safety Directions. Violate the CFAA?

Kendra Albert (Harvard Law School); Jonathon Penney (Osgoode Hall Law School / Harvard Berkman Klein Center); and Ram Shankar Siva Kumar (Harvard Berkman Klein Center)*

Introduction

In March, twenty-three artificial intelligence (AI) experts publicly released a working paper calling for legal and technical protections for researchers engaged in good faith evaluation and “red teaming” of AI systems.1 The co-authors, including experts from the Massachusetts Institute of Technology, Stanford, Georgetown, the University of Pennsylvania, and Brown, among others, argued that uncertainty in how existing laws like the Computer Fraud and Abuse Act (“CFAA”) apply to generative AI platforms like ChatGPT creates unreasonable legal risks for researchers2 that will have a chilling effect on AI safety and security research.3 In theory, the CFAA could allow AI firms to sue researchers for accessing AI platforms in unintended ways or uncovering previously unknown vulnerabilities, and enable federal prosecutors to launch criminal investigations for the same.4 Since the paper was released, more than 350 additional researchers

* Equal Contribution. Acknowledgements TK.

1 Shayne Longpre et al., A Safe Harbor for AI Evaluation and Red Teaming (2024), http://arxiv.org/abs/2403.04893 (last visited Jul 20, 2024). In security research, “red teams” refer to groups authorized to act as adversarial attackers against an organization’s computer systems; the practice is sometimes also referred to as “penetration testing.” “Red teams” can, in some contexts, include third-party hackers who test the security of publicly accessible systems without explicit consent from the developers. See id. at 2.

2 Id. at 7.

3 Alexander Gamero-Garrido et al., Quantifying the Pressure of Legal Risks on Third-Party Vulnerability Research, in PRO