Published in February 2025

Responsible AI Progress Report

Foreword

This report shares information from our latest research and practice on AI safety and responsibility topics. It details our methods for governing, mapping, measuring, and managing AI risks aligned to the NIST framework, as well as updates on how we're operationalizing responsible AI innovation across Google. We also provide more specific insights and best practices on topics ranging from our rigorous red teaming and evaluation processes to how we mitigate risk using techniques including better safety tuning and filters, security and privacy controls, provenance technology in our products, and broad AI literacy education.

Our approach to AI responsibility has evolved over the years to address the dynamic nature of our products, the external environment, and the needs of our global users. Since 2018, AI has evolved into a general-purpose technology used daily by billions of people and countless organizations and businesses. The broad establishment of responsibility frameworks has been an important part of this evolution. We've been encouraged by progress on AI governance coming from bodies like the G7 and the International Organization for Standardization, and also by frameworks emerging from other companies and academic institutions.

Our updated AI Principles, centered on bold innovation, responsible development, and collaborative partnership, reflect what we're learning as AI continues to advance rapidly. As AI technology and discussions about its development and uses continue to evolve, we will continue to learn from our research and users, and innovate new approaches to responsible development and deployment. As we do, we remain committed to sharing what we learn with the broader ecosystem through the publication of reports like this, and also through conti