What the Met Gala Teaches Us About AI
The 2024 Met Gala, an annual red-carpet, eight-figure fundraiser for the Metropolitan Museum of Art, captured attention this past Monday with its theme of “The Garden of Time.” The theme was inspired by J.G. Ballard’s short story of the same name, and florals and other aesthetic interpretations abounded as celebrities such as Zendaya, Tyla, and Anna Wintour walked the carpet.
Despite not attending, Katy Perry, Selena Gomez, Rihanna, and even Donald Trump appeared in photos that circulated online, showing them clad in outfits at the 2024 Met Gala.
The AI-generated garments were so realistic, in fact, that even Katy Perry’s mother was fooled.
“What a gorgeous gown, you look like the Rose Parade, you are your own float lol,” Perry’s mom wrote in an iMessage conversation Katy Perry posted online.
Katy Perry retorted, “lol mom, the AI got you too, BEWARE!”
Instagram censored the image behind a blur, complete with the warning “Altered photo/video.”
Her warning is astute. We should all, indeed, beware of the proliferation of AI images passed off as the real thing, do our due diligence as online users to verify the authenticity of what we see, and avoid spreading misinformation.
This is not the first time that AI images have fooled the internet. Last year, an AI-generated image of Pope Francis circulated online, showing him sporting a white ankle-length puffer jacket. The image, created using Midjourney and posted to the eponymous subreddit under the title “The Pope Drip,” received 2,300+ upvotes before being permanently archived.
Although innocuous at first glance, the proliferation of AI images that capture the collective internet imagination has the potential to turn nefarious.
At best, these AI-generated images mislead us. At worst, they distort our sense of reality. AI-generated images have already been shown to be inaccurate, as when Google’s Gemini generated racially diverse Nazi soldiers, and in some cases outright biased against minorities and women.
Even with safeguards in place, the democratization of these technologies spells trouble when in the hands of bad actors.
A recent case saw pop star Taylor Swift fall victim, with a deepfake illicit image of her garnering tens of millions of views before being taken down on X. This is despite the platform’s own policy that “Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content.”
Deepfake technology is not slowing down, as shown by the recent teaser of OpenAI’s Sora, which allows text-to-video generation of complex, high-quality video. It won’t be long until we live in a world where it’s not just an AI-generated image of Katy Perry that fools us, but a video of her strutting across the red carpet.
As AI images and video continue to improve (the Katy Perry image, for example, lacked the obvious signs of AI generation, such as extra fingers), we must cultivate a culture that honors reality. Although it is fun to imagine what celebrities might look like on the red carpet, we must be cognizant that AI-generated imagery, while impressive, is ultimately fantasy, not reality.
Instagram’s intervention in the Katy Perry case, warning users before they clicked that the image was in fact altered, is a good first step. The swift takedown of obviously harmful AI-generated content is a must. Other measures include the creation, by users or social media companies, of digital provenance and authentication technologies that can verify the origins and integrity of media.
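One basic building block behind such provenance and authentication systems is cryptographic hashing: a publisher releases a digest alongside an image, and anyone can recompute it to confirm the file has not been altered since publication. The sketch below is a minimal illustration of that idea in Python, not a real provenance standard; the byte strings and the `verify_media` helper are hypothetical, and production systems (such as the C2PA content-credentials approach) layer digital signatures on top to also prove who published the file.

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 hex digest of the raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_media(data: bytes, published_digest: str) -> bool:
    """True if the media bytes match the digest the publisher released."""
    # compare_digest avoids leaking information via comparison timing.
    return hmac.compare_digest(sha256_hex(data), published_digest)

# Hypothetical example: an original image and a subtly altered copy.
original = b"red carpet photo bytes"
altered = b"red carpet photo bytes (AI-edited)"

digest = sha256_hex(original)          # published alongside the image
print(verify_media(original, digest))  # True: the file is unchanged
print(verify_media(altered, digest))   # False: the file was modified
```

Note the limitation: a matching hash proves the file is unmodified, but says nothing about whether the original was authentic in the first place, which is why signature-based schemes matter.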
Let’s exist in an internet landscape where truth and reality are valued over deception and fantasy.