Computational Anthropology and Exploring Identity through Artificial Synthesis
A series of personal reflections while using A.I. image generation
About a month ago, I was genuinely terrified to learn about the recent wave of hyper-photorealistic AI imagery generated from text prompts, and especially so when I learned of frameworks that allow anyone to take another person's identity and both reimagine and decontextualize it with a prompt of just a few words. These days, while I remain cautious and diligent in exploring ways that AI image (and other media) generation can be safely implemented, I also find myself fascinated by the ways I can reimagine my own identity. Here is the tale of that emotional rollercoaster: my concerns and also my excitement.
I started my AI image explorations with MidJourney, which I used to playfully reimagine the identities of celebrities. My prompts included: “Diana Ross in a decadent art nouveau advertisement for Fanta soda,” “Malcolm X as Drake in the Nothing Was the Same album cover,” “Megan Thee Stallion as Mona Lisa in the style of Basquiat,” “Obama in an Adidas tracksuit with an afro in pop art style,” “Sade in a Hype Williams music video,” “Lenny Kravitz in a 1960s Havana, Cuba poster,” “Frederick Douglass taking a selfie for Instagram,” and “Rihanna for President.”
This felt okay for a few reasons: 1. The personally established identity of a celebrity is typically well-known, and their images are everywhere; 2. Their identities are often meme-ified (and I’ve seen many celebrities share such memes online themselves); 3. The renderings of these images are low-quality and illustrative; they could not possibly be confused for reality or truth.
Then I learned of Stable Diffusion, and things got a bit murkier. I was able to render photorealistic imagery of just about anything I could describe in words. Deepfakes were already a societal concern, but this felt next-level. It has made the…