2025-09-22

Stress Testing Deliberative Alignment for Anti-Scheming Training

Bronson Schoen, Evgenia Nitishinskaya, Mikita Balesni, Axel Højmark, Felix Hofstätter, Jérémy Scheurer, Alexander Meinke, Jason Wolfe, Teun van der Weij, Alex Lloyd, Nicholas Goldowsky-Dill, Angela Fan, Andrei Matveiakin, Rusheb Shah, Marcus Williams, Amelia Glaese, Boaz Barak, Wojciech Zaremba, Marius Hobbhahn

Apollo Research & OpenAI

ABSTRACT

Highly capable AI systems could secretly pursue misaligned goals, a behavior we call "scheming". Because a scheming AI would deliberately try to hide its misaligned goals and actions, measuring and mitigating scheming requires different strategies than are typically used in ML. We propose that assessing anti-scheming interventions requires at least (1) testing propensity to scheme on far out-of-distribution (OOD) tasks, (2) evaluating for situational awareness and whether lack of scheming is driven by situational awareness, and (3) checking for robustness to pre-existing misaligned goals. We use a broad category of "covert actions", such as secretly breaking rules or intentionally underperforming in tests, as a proxy for scheming, and design evaluations for covert actions. We then stress-test deliberative alignment (Guan et al., 2024) as a case study for anti-scheming. Across 26 OOD evaluations (180+ environments), deliberative alignment reduces covert action rates (OpenAI o3: 13% to 0.4%; OpenAI o4-mini: 8.7% to 0.3%) but does not fully eliminate them. Our mitigation is also able to largely stop agents from pursuing a hidden goal previously trained into the model, but we still find misbehavior after additional red-teaming. We find that models' chain-of-thought (CoT) often demonstrates awareness of being evaluated for alignment, and show causal evidence that this awareness decreases covert behavior, while unawareness increases it. Therefore, we cannot exclude that