HealthBench: Evaluating Large Language Models Towards Improved Human Health

Rahul K. Arora, Jason Wei, Rebecca Soskin Hicks, Preston Bowman, Joaquin Quiñonero-Candela, Foivos Tsimpourlas, Michael Sharman, Meghan Shah, Andrea Vallone, Alex Beutel, Johannes Heidecke, Karan Singhal
OpenAI

Abstract

We present HealthBench, an open-source benchmark measuring the performance and safety of large language models in healthcare. HealthBench consists of 5,000 multi-turn conversations between a model and an individual user or healthcare professional. Responses are evaluated using conversation-specific rubrics created by 262 physicians. Unlike previous multiple-choice or short-answer benchmarks, HealthBench enables realistic, open-ended evaluation through 48,562 unique rubric criteria spanning several health contexts (e.g., emergencies, transforming clinical data, global health) and behavioral dimensions (e.g., accuracy, instruction following, communication). HealthBench performance over the last two years reflects steady initial progress (compare GPT-3.5 Turbo's 16% to GPT-4o's 32%) and more rapid recent improvements (o3 scores 60%). Smaller models have especially improved: GPT-4.1 nano outperforms GPT-4o and is 25 times cheaper. We additionally release two HealthBench variations: HealthBench Consensus, which includes 34 particularly important dimensions of model behavior validated via physician consensus, and HealthBench Hard, where the current top score is 32%. We hope that HealthBench grounds progress towards model development and applications that benefit human health.
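To make the rubric-based evaluation concrete, the sketch below shows one plausible way a conversation-specific rubric could be scored: a grader judges whether each physician-written criterion is met, each criterion carries a point value (negative points penalize undesirable behavior), and the per-example score is the earned points divided by the maximum attainable positive points, clipped to [0, 1]. The data structures, function names, and the keyword-match grader stub are illustrative assumptions, not the released HealthBench implementation, which uses a model-based grader.

```python
from dataclasses import dataclass

@dataclass
class RubricCriterion:
    text: str    # physician-written criterion, e.g. "advises calling emergency services"
    points: int  # positive for desired behavior, negative for undesirable behavior

def grade_criterion(response: str, criterion: RubricCriterion) -> bool:
    """Placeholder grader: judges whether the response satisfies the criterion.
    A real evaluation would use a model-based grader; this stub just keyword-matches."""
    return criterion.text.lower() in response.lower()

def score_response(response: str, rubric: list[RubricCriterion]) -> float:
    """Score = earned points / maximum attainable positive points, clipped to [0, 1]."""
    earned = sum(c.points for c in rubric if grade_criterion(response, c))
    max_points = sum(c.points for c in rubric if c.points > 0)
    if max_points == 0:
        return 0.0
    return min(max(earned / max_points, 0.0), 1.0)

# Example: a two-criterion rubric for an emergency-care conversation.
rubric = [
    RubricCriterion("call emergency services", points=5),
    RubricCriterion("says the symptoms are nothing to worry about", points=-5),
]
print(score_response("You should call emergency services now.", rubric))  # 1.0
```

Benchmark-level numbers such as the percentages quoted above would then be averages of such per-conversation scores across the dataset.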
1 Introduction

Artificial intelligence (AI) systems are increasingly used in health, offering the potential to expand access to health information, support clinicians in delivering high-quality care, and help people make better health decisions (Esteva et al., 2017; Gulshan et al., 2016; Bea