The democratization of image-generating artificial intelligence (AI) tools without regulatory guardrails has amplified preexisting harms on the internet. The emergence of AI images on the internet began with generative adversarial networks (GANs), which are neural networks containing (1) a generator algorithm that creates an image and (2) a discriminator algorithm that assesses the image's quality and/or accuracy. Through several collaborative rounds between the generator and discriminator, a final AI image is generated (Alqahtani, Kavakli-Thorne, and Kumar, 2021).
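To make the generator-discriminator interplay concrete, the following is a minimal sketch in PyTorch. The layer sizes, learning rates, and random stand-in "training data" are illustrative assumptions only and do not reflect any production image model.

```python
# Minimal GAN sketch: a generator learns to produce small images while a
# discriminator learns to tell them apart from real ones; the two networks
# are trained in alternating rounds.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28   # illustrative sizes, not from the source

generator = nn.Sequential(           # maps random noise -> fake image
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(       # maps image -> probability "real"
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(512, IMG_DIM) * 2 - 1   # stand-in "training data"

for step in range(1000):
    real = real_images[torch.randint(0, 512, (32,))]
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator round: score real images as 1, generated images as 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator round: push the discriminator to score fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The alternating discriminator and generator updates are the "collaborative rounds" described above: each network improves against the other until the generator's output is difficult to distinguish from real images.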
ThisPersonDoesNotExist, a site created by an Uber engineer that generates GAN images of realistic people, launched in February 2019 to awestruck audiences (Paez, 2019), with serious implications for exploitation in such areas of abuse as widespread scams and social engineering. This was just the beginning for AI-generated images and their exploitation
on the internet. Over time, AI image generation advanced away from GANs and toward diffusion models, which produce higher-quality images and more image variety than GANs. Diffusion models work by adding Gaussian noise to original training data images through a forward diffusion process and then, through a reverse process, slowly removing the noise and resynthesizing the image to reveal a new, clean generated image (Ho, Jain, and Abbeel, 2020).
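A minimal sketch of these forward and reverse steps appears below. It assumes PyTorch, uses a common DDPM-style noise schedule, and substitutes a tiny untrained network for the trained noise-prediction model a real diffusion system would use, so its output is illustrative only.

```python
# Sketch of diffusion in the style of Ho, Jain, and Abbeel (2020): the forward
# process corrupts an image with Gaussian noise over T steps; the reverse
# process uses a noise-predicting network to strip that noise back out.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule (common default)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)    # cumulative signal-retention factor

def forward_diffuse(x0, t):
    """Forward process: sample x_t from q(x_t | x_0) by adding Gaussian noise."""
    eps = torch.randn_like(x0)
    xt = alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * eps
    return xt, eps

# Stand-in noise predictor; a real model would be a trained U-Net conditioned on t.
noise_predictor = nn.Sequential(nn.Linear(16 + 1, 64), nn.ReLU(), nn.Linear(64, 16))

@torch.no_grad()
def reverse_step(xt, t):
    """One reverse step: estimate the added noise, then compute the previous image."""
    t_embed = torch.full((xt.shape[0], 1), t / T)
    eps_hat = noise_predictor(torch.cat([xt, t_embed], dim=1))
    mean = (xt - betas[t] / (1 - alpha_bar[t]).sqrt() * eps_hat) / alphas[t].sqrt()
    if t == 0:
        return mean
    return mean + betas[t].sqrt() * torch.randn_like(xt)

x0 = torch.randn(4, 16)                      # toy "images" of 16 pixels each
xt, _ = forward_diffuse(x0, T - 1)           # fully noised samples
for t in reversed(range(T)):                 # resynthesize by removing noise stepwise
    xt = reverse_step(xt, t)
```

The reverse loop is the "slow removal" of noise described above: starting from nearly pure Gaussian noise, each step predicts and subtracts a small amount of noise until a clean image remains.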
Diffusion models are paired with neural network techniques that map text-to-image capabilities, known as text-image encoders (Contrastive Language-Image Pre-training, or CLIP, was a milestone in this space), to allow models to process visual concepts (Kim, Kwon, and Chul Ye, 2022). Thus, the commercialization of diffusion