Issue Brief
December 2023

Controlling Large Language Model Outputs: A Primer

Authors
Jessica Ji
Josh A. Goldstein
Andrew J. Lohn

Executive Summary

Concerns over risks from generative artificial intelligence (AI) systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models (LLMs). Many of these potential risks stem from these models producing undesirable outputs, from hate speech to information that could be put to malicious use. However, the inherent complexity of LLMs makes controlling or steering their outputs a considerable technical challenge. This issue brief presents three broad categories of potentially harmful outputs (inaccurate information, biased or toxic outputs, and outputs resulting from malicious use) that may motivate developers to control LLMs. It also explains four popular techniques that developers currently use to control LLM outputs, categorized along various stages of the LLM development life cycle: 1) editing pre-training data, 2) supervised fine-tuning, 3) reinforcement learning with human feedback and Constitutional AI, and 4) prompt and output controls. None of these techniques are perfect, and they are frequently used in concert with one another and with nontechnical controls such as content policies. Furthermore, the availability of open models, which anyone can download and modify for their own purposes, means that these controls or safeguards are unevenly distributed across various LLMs and AI-enabled products.
Ultimately, this is a complex and novel problem that presents challenges for both policymakers and AI developers. Today's techniques are more like sledgehammers than scalpels, and even the most cutting-edge controls cannot guarantee that an LLM will never produce an undesirable output.
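
To make the fourth category concrete, the sketch below shows the general shape of a prompt and output control: screening both the user's prompt and the model's response before anything is returned. The blocklist, the generate stub, and all function names are illustrative assumptions rather than any developer's actual implementation; deployed systems typically rely on trained moderation classifiers and far richer policies than a keyword list.

    # Illustrative sketch of prompt- and output-level controls (hypothetical names and policy).
    # A deployed system would typically call a trained moderation classifier, not a keyword list.

    BLOCKED_TERMS = {"build a weapon", "steal credentials"}  # placeholder policy terms

    SYSTEM_PROMPT = "You are a helpful assistant. Decline requests for harmful or illegal content."

    def violates_policy(text: str) -> bool:
        """Return True if the text matches the placeholder blocklist."""
        lowered = text.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    def generate(prompt: str) -> str:
        """Stand-in for an LLM call; a real system would invoke a model or API here."""
        return f"[model response to: {prompt}]"

    def guarded_completion(user_prompt: str) -> str:
        """Apply a prompt-level filter, a steering system prompt, and an output-level filter."""
        if violates_policy(user_prompt):
            return "Sorry, I can't help with that request."  # prompt-level control
        response = generate(f"{SYSTEM_PROMPT}\n\nUser: {user_prompt}")
        if violates_policy(response):
            return "The response was withheld by a content filter."  # output-level control
        return response

    if __name__ == "__main__":
        print(guarded_completion("Summarize the main points of this issue brief."))

The point of the sketch is the layering: the system prompt and prompt filter act before generation, while the output filter acts afterward, and none of them modify the model's weights, which is why this brief treats prompt and output controls separately from pre-training and fine-tuning interventions.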