Automated Jailbreaks of Large Language Models

WHITE PAPER | EPAM.COM | 07/25

Contents

03  Introduction
04  Method 1: Tree-of-Attack with Pruning
      TAP Algorithm  04
      Test Setup  06
      Results  06
      Results After Evaluator Improvement  08
      Example of a Successful Attack  09
      Takeaways of Using TAP  10
11  Method 2: Using DSPy for Jailbreaking
      Example of Attack Development  13
      DSPy-Based Attacker vs. TAP  14
15  Conclusion

Introduction

One of the most challenging and resource-intensive tasks in large
language model (LLM) red teaming is identifying jailbreak prompts: inputs that bypass a model's safety mechanisms to produce harmful or prohibited responses. Finding these prompts is essential for testing the resilience of LLM-based systems, yet it often requires deep domain expertise and extensive manual effort.

Given these challenges, it's natural to ask: Can we use AI to help test AI? Specifically, can we automate the search for jailbreaks by having one AI system attack another in a controlled, repeatable way?

In this white paper, we evaluate two existing frameworks that approach this problem from different angles. The first, tree-of-attack with pruning (TAP), was designed specifically to automate prompt injection and jailbreak discovery. The second, Declarative Self-improving Python (DSPy), is a general-purpose prompt optimization framework that can be repurposed to craft adversarial inputs. We tested both frameworks in practical experiments to measure their effectiveness and identify their limitations. Following this experimentation, we assessed their value in real-world cybersecurity strategies.

Using the TAP Jailbr