Agentic AI Red Teaming Guide
AI Organizational Responsibilities Working Group

1. Background

Why Agentic AI Needs a New Red Teaming Approach:
- Traditional GenAI red teaming does not address the security risks of autonomous, goal-seeking, persistent AI agents.

Why Agentic AI Is Different:
- Combines planning, reasoning, and acting
- Operates across time, systems, and roles
- Introduces emergent behavior and expanded attack surfaces

Key Challenge: agentic AI agents can:
- Reassign goals
- Chain actions autonomously
- Interface with live APIs and tools
Result: unpredictable failure modes and cascading consequences.

GenAI: prompt in, output out. Agentic AI: goal → plan → action → feedback → adapt.

Purpose of This Guide:
- Provide practical, test-driven red teaming strategies tailored for agentic AI
- Developed through CSA & OWASP joint research and threat analysis

2. Scope and Audience

What This Guide Covers:
- Focus: practical, test-driven red teaming for agentic AI systems
- Core goal: identify vulnerabilities, not to model threats or define mitigations
- Approach: detailed attack-surface testing, not high-level frameworks

What's Out of Scope:
- Threat modeling (MAESTRO Framework)
- General GenAI red teaming (e.g., prompt injection only)
- Risk prioritization or treatment strategies
- Secure development practices
- Broad governance or ethical guidance

Intended Audience:
- Primary: red teamers, agentic AI developers, pen testers
- Secondary: security architects, AI governance teams, AI system designers

Assumptions:
- Audience understands core security topics (APIs, authN/Z, protocols)
- Teams performing tests likely have organizational or consulting support
- The guide is technical and operational, not academic or policy-based

3. Overview

Generative AI (GenAI):
- One-shot or short-context interaction
- Generates text, images, or code based on prompts
- Security focus: prompt injection, bias, data
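
The contrast drawn above between one-shot GenAI and the agentic goal → plan → action → feedback → adapt cycle can be made concrete with a minimal sketch. This is an illustrative toy, not code from the guide: the function names (`plan_step`, `run_tool`, `agent_loop`) and the fixed three-step plan are assumptions standing in for an LLM planner and live tool/API calls. The point is the loop structure that red teamers must probe: each iteration feeds observations back into planning, which is what lets an agent chain actions autonomously.

```python
# Minimal sketch of an agentic loop (goal -> plan -> action -> feedback -> adapt).
# All names and the canned plan are illustrative assumptions, not from the guide.

def plan_step(goal: str, history: list) -> str:
    """Stand-in planner: in a real agent this would be an LLM call that
    chooses the next action given the goal and the feedback so far."""
    steps = ["lookup", "summarize", "report"]
    return steps[len(history)] if len(history) < len(steps) else "done"

def run_tool(action: str) -> str:
    """Stand-in tool interface; in practice a live API or system call,
    which is exactly where the expanded attack surface lives."""
    return f"result-of-{action}"

def agent_loop(goal: str, max_steps: int = 5) -> list:
    """Iterate plan -> act -> observe -> adapt until done or out of budget."""
    history = []
    for _ in range(max_steps):
        action = plan_step(goal, history)
        if action == "done":
            break
        feedback = run_tool(action)          # act against a live tool, observe result
        history.append((action, feedback))   # adapt: feedback influences the next plan
    return history

trace = agent_loop("audit the billing API")
print(trace)
# Unlike a single prompt-in/output-out call, the trace is a multi-step chain,
# so an attacker who corrupts any one tool result can steer every later step.
```

This loop is why single-turn prompt-injection testing is insufficient here: a payload injected through one tool result persists in `history` and shapes all subsequent planning, which is the cascading-consequence failure mode the Background section describes.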