As the use of AI continues to grow exponentially, it is becoming increasingly challenging to distinguish what is AI-generated from what is created by humans, especially when it comes to imagery.
This creates a plethora of reputational challenges, especially for us in the public relations and news journalism space.
Misinformation, defamation, deepfakes, copyright issues: you get the gist. Viral tweets about an explosion at the Pentagon, Donald Trump's arrest and the Pope wearing a puffy jacket are examples of how believable this fabricated imagery can be. Artists and photographers are another heavily affected group.
Leaders like OpenAI's Sam Altman and Google's Sundar Pichai have repeatedly called for the regulation of AI. However, regulating it is so complex that any legislation will likely take time to come into place.
The obvious flaws of AI-generated images
If you have tried using AI tools to generate imagery, you will have a decent idea of the flaws of AI-generated images.
Faces and objects are distorted or clearly look like drawings, fingers are weird, people have too many teeth, unnatural patterns creep in, and there is a general creepiness to the imagery: your spidey sense tells you something is off.
However, as the tools get better, the pictures they produce are becoming remarkably convincing; the latest version of Midjourney (v5) is a case in point.
What’s being done to help us know the difference between AI generated images and real ones?
2. Google’s release of ‘About this image’ (US only at the moment):
This is a pretty neat feature where you can upload a picture or a picture’s URL and it will show you where else the image has appeared and in what context, so you can better judge its authenticity.