In April 2023, German artist Boris Eldagsen won the creative category of the Sony World Photography Awards' open competition for his entry entitled Pseudomnesia: The Electrician. What confused the judges and the audience, however, was that he refused to accept the award. The reason: the photograph had been generated by an Artificial Intelligence (AI) tool. It was reported that Eldagsen "said he used the picture to test the competition and to create a discussion about the future of photography." Was it a shortcoming of the judges that they couldn't discern what was real and what was fake?
Generative AI blurs the line between what is real and what is artificially created. The same applies to textual content: the sheer confidence with which a chatbot delivers sometimes biased or falsified information gives unsuspecting users no hint at all about the content's reliability.
The threats posed by AI tools such as DALL-E, Bard, and, most notably, ChatGPT are not limited to biased or falsified information. From a privacy perspective, the technology raises serious concerns as well.