How Generative AI Chatbots Responded to Questions and Fact-checks about the 2024 UK General Election

Authors: Felix M. Simon, Richard Fletcher, Rasmus Kleis Nielsen

Key Findings

In this factsheet, we test how well three chatbots respond to questions and fact-checks about the 2024 UK general election. Based on an analysis of 300 responses to 100 election-related questions collected from ChatGPT-4o, Google Gemini, and Perplexity.ai in the two weeks before the UK general election on 4 July 2024, we find that:

- Perplexity.ai and ChatGPT-4o generally provided answers, while Google's Gemini often refrained from answering election-related questions.
- Both ChatGPT-4o and Perplexity.ai were usually direct in their responses, with few instances of answers that were not straightforward.
- Our analysis of the data found that ChatGPT-4o provided correct answers 78% of the time, while Perplexity.ai was correct 83% of the time. The rest of the answers were deemed either partially correct or false. It is important to be clear that it can be difficult to assess the accuracy of a chatbot's output, with different approaches yielding different results.1
- ChatGPT-4o and Perplexity.ai frequently provided sources in their responses, including those from well-known and trusted news organisations, authorities, and fact-checkers. Both chatbots predominantly linked to news sources in replies where they provided sources, with official sources, fact-checkers, and others some way behind.
- Perplexity.ai consistently linked to specific sources, i.e. webpages clearly relating to a claim. For example, for a question about how to vote, a specific source would be a page such as www.bbc.co.uk/how-to-vote-in-the-general-election (please note that this website is a hypothetical example) rather than the generic source www.bbc.co.uk. Perplexity.ai did this often more than ChatGPT-4o, which som