We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be. To help address this gap, we are introducing our Preparedness Framework, a living document describing OpenAI's processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models.

December 18, 2023

Preparedness Framework (Beta)

Introduction

Our practical experience with […] has enabled us to […]. As our systems get closer to AGI, we are becoming even more careful about the development of our models, especially in the context of catastrophic risk. This Preparedness Framework is a living document that distills our latest learnings on how to best achieve safe development and deployment in practice. The processes laid out in each version of the Preparedness Framework will help us rapidly improve our understanding of the science and empirical texture of catastrophic risk, and establish the processes needed to protect against unsafe development.

The central thesis behind our Preparedness Framework is that a robust approach to AI catastrophic risk safety requires proactive, science-based determinations of when and how it is safe to proceed with development and deployment.
Our Preparedness Framework contains five key elements:

- Tracking catastrophic risk level via evaluations. We will be building and continually improving suites of evaluations and other monitoring solutions along several Tracked Risk Categories, and indicating our current levels of pre-mitigation and post-mitigation risk in a Scorecard. Importantly, we will also be forecasting the future development of risks, so that we can develop lead times on safety and security measures.
- Seeking out unknown-unknowns. We will continually run a process for identification and analysis (as well as tracking) of curre