We need to have a serious chat about AI.
Yes, I'm talking to all you "spiritual" folks who keep reposting and sharing those AI-generated pictures you love, without considering the impact you might be having.
Now, before we dive in, let me be clear: I'm a big fan of AI and technology. I chat with ChatGPT every single day. I'm typing away on this sleek new computer with all the latest apps, and my trusty Apple Watch is right here on my wrist. AI is ingrained in our lives, and there's no denying its importance. But amid our enthusiasm, there are some crucial issues we need to address. Today, I want to talk about how we currently share and interact with AI, and why our intentions matter.
So, let's break it down.
It's a common misconception that technology, including AI, is inherently neutral and free from biases. Many people assume that because technology operates based on algorithms and data, it must be logical and objective. However, this overlooks the fact that the data used to train AI models and the algorithms themselves can reflect and perpetuate biases present in society.
The truth is, technology is created by humans, and as such, it can inherit the biases, prejudices, and discriminatory tendencies of its creators. When AI systems are trained on biased data or designed with inherent biases, they can produce biased outcomes, reinforcing existing inequalities and perpetuating discrimination.
Were you aware that medical technology exhibits racial bias? Take, for example, the pulse oximeter, a common device used during doctor's appointments and hospital visits to measure blood oxygen saturation. Because it works by shining light through the skin, it's known to give less accurate readings through darker pigmentation; this is why we're told to avoid wearing nail polish during surgery. For Black individuals and People of Color with darker skin tones, this presents a significant problem. Their melanated skin can lead to unreliable readings, potentially resulting in misdiagnosis and inadequate care plans or treatments.
Our beloved pocket assistants like Siri, Alexa, and Cortana, along with many others, tend to perpetuate inherent biases, particularly around gender and culture. Have you ever noticed that they're predominantly portrayed as female? It's a subtle reinforcement of the societal notion that women are primarily nurturers and caretakers, expected to cater to the needs of others, especially men and children. Despite individuals' ability to customize the voices, such as opting for a suave British accent, the default and prevailing choice remains a female voice. So, while individual preferences may vary, the underlying bias toward gendered expectations remains baked into these technologies.
Speaking of voices, the voice recognition technology in these tools was predominantly developed and tested by English-speaking men in Silicon Valley. Consequently, it often struggles to accurately understand higher-pitched voices and accents that deviate from that specific "standard." This includes accents from regions like the American South, which are unfairly stereotyped as signaling ignorance, racism, and laziness. There's also a prevalent societal bias, especially in the U.S., that people with accents from other countries, or those who struggle with English proficiency, are inherently less intelligent. As Gloria points out in Modern Family, "Do you even know how smart I am in Spanish?" These are the unjust assumptions and biases embedded in voice recognition technology, with real consequences for diverse linguistic and cultural communities.
Alright, I get it. You're probably thinking, "umm, Kris, none of these are AI." Fair point, let's refocus. But it's crucial to grasp that technology is created by humans, which means it's susceptible to the same flawed behavior humans exhibit.
By the way, if you're interested in delving deeper into this topic, I highly recommend reading Sex, Race, and Robots: How to Be Human in the Age of AI by Ayanna Howard. It's frighteningly enlightening, offering valuable insights into the intersection of technology and humanity.
AI learns through the analysis of extensive datasets. It's been fed a wealth of material over time, using that data to learn, adapt, engage, and evolve. Much of that data is gathered by continuously crawling the internet, absorbing information and expanding the knowledge base. This process is commonly called "scraping."
For AI art generators, scraping means collecting ideas and elements from countless pictures and artworks across the internet. When prompted, these generators draw on that vast digital archive to create compositions based on the input, with varying degrees of success.
Some AI-generated art is stunningly beautiful, capturing the essence of our era with remarkable precision. Artists who use generative AI for their creations invest countless hours perfecting their craft, ensuring that each piece they release is a testament to their skill and dedication. The potential of AI as a creative tool is immense, offering artists innovative avenues for expression and exploration.
But let's shift gears and focus on the myriad of images circulating online that aren't the result of intentional artistic collaboration with AI. These are the creations of individuals casually experimenting with AI tools, later shared across social media platforms. Often, it's automated bots reposting these images with mundane captions like "#happy," "nature is beautiful!" or simply "WOW." Consequently, more bots and users stumble upon these posts, flooding the comments with expressions like "this is gorgeous!" "OMG amazing!" "I love this!" "I hope this is real!" and the classic "This is AI." This surge in engagement amplifies the visibility of these images, often catching the attention of the spiritual community, who then further disseminate them. Before you know it, that weird image with too many fingers has gone viral.
Here's the thing, when we engage with, repost, and share these images, we're inadvertently providing data for AI to learn more about art, humanity, and our preferences. It's essential to be intentional about our sharing habits because we're not just sharing a picture that resonated with us emotionally; we're also instructing both machines and humans on the nuances of human experience and what it means to be human.
So, let's examine some of the most prevalent ones circulating and the potential lessons they are teaching.
To begin, there's a plethora of fake nature pictures making the rounds.
Remember the eclipse? A significant number of the images shared afterward, at least that I saw, were AI-generated pictures passed off as real photographs by amateur photographers. This sparked a trend of people dismissing virtually every eclipse picture as AI, frustrating professional photographers who meticulously prepared for the event and captured stunning images, only to have them written off as fake by numerous skeptics. We are really bad at telling real life from fake life... and it's only going to get worse.
Then, there are the baby animal ones.
Take, for instance, this peacock. The Facebook page "Nature" posted it with the caption "Beautiful It's not everyday you see a baby Peacock!" (I would like to add that today is not that day either, because this is not a baby peacock).
The comments under the post include: "Sweet- I’ve never seen one!" "OMG I felt instantly in love! Those eyes! Those colors! 🥰" "Awww so precious." "Wow! Beautiful & adorable!!!" "Thank you that’s the first one I’ve ever seen" "HE IS BEAUTIFUL."
It's harmless, right? That miniature peacock with the giant eyes is undeniably cute, and it's natural to want to share it with others. It almost looks like a stuffed animal!
Here's the catch. We haven't exactly excelled at taking care of our planet. So, when we share these pictures without providing context, or worse, pass them off as real, we're not just misleading people who may lack opportunities to interact with nature; we're also implicitly suggesting that real nature isn't as awe-inspiring as the AI fantasy. This can lead to a devaluation of nature, making us less inclined to prioritize learning about, protecting, and appreciating the wonders of our extraordinary planet.
Here is a real peacock baby. While not as colorful as the viral AI-generated "mini peacock," it has its own unique beauty. This downy peachick is camouflaged, allowing it to blend into its surroundings and avoid predators. Unlike the artificial image, this little one doesn't have giant eyes, a pouty expression, or extra fluff – just the simple, majestic glory of nature as it truly is.
Why not share this picture and some facts about real peacocks, their babies, and their natural habitat? While the AI-generated image may tug at our emotions, sharing actual nature from our planet can inspire us to appreciate and protect the world we live in. Let's support one another in falling in love with our planet, so that we begin to take care of it at the level required for ongoing life.
And if you must share the AI-generated "mini peacock" image because it's just so adorable, caption it as such and explain why it resonated with you. But also consider sharing the wonders of real nature, reminding us of the beauty we need to preserve.
Now let's get into some of the problematic pictures of people.
"Look at the baby cabbage patch kids 🥹 They are so cute!"
Yes, they appear adorable, perfectly illuminated, all smiles, miraculously standing without support as young babies, clad in outrageously elaborate outfits.
While this image may seem harmless, when shared as real, it degrades our ability to distinguish AI from real life. As AI becomes more prevalent, it's crucial for us to discern what is authentic and what is a "deep fake." If you accept this image as real, you're less likely to scrutinize similar pictures, merely assuming their authenticity. Remember, pictures speak volumes, but by allowing falsehoods to infiltrate our perceptions, we disregard the tangible consequences of unquestioningly believing everything we see.
If we can be manipulated by images of babies in cabbage, Trump in church, Biden and Obama in pink suits, then we're susceptible to being influenced by false depictions of violence, meetings between political leaders (think Russia and the USA), or any other scenarios that could sway our voting choices, lifestyles, or actions. It's a slippery slope. (By the way, yes, I'm 100% certain that all of these pictures are AI-generated.)
And finally, we arrive at the pictures that are inherently racist. These are the ones that unsettle me the most because I see "good, spiritual" people who would vehemently deny being racist sharing them. Currently, there are two primary ones I've seen circulating that are deeply disturbing.
Let's address why this is racist, classist, and problematic. The portrayal of the children's faces relies on antiquated racist stereotypes of Black bodies. It harks back to the demeaning imagery of old minstrel shows and blackface performances, which caricatured Black individuals instead of depicting them authentically in their diverse forms. It also veers into the territory of poverty porn, exploiting primarily Western audiences' emotional responses by depicting Black and brown individuals in "developing" countries as happy despite their lack of material wealth. Moreover, the depiction of their teeth as perfectly white and straight holds up Western beauty standards as the ideal. It's crucial to remember that AI draws upon content from the internet and lacks the ability to discern what is acceptable. So when generating images from prompts like "happy black boys," it can inadvertently draw on imagery like the pictures below without recognizing its problematic nature.
To a relatively new and young learning machine, these pictures simply appear as "happy black boys." Feeling uncomfortable? Good, me too. Because AI lacks the ability to discern appropriateness and can inadvertently rely on outdated racist stereotypes, it falls upon us to exercise discernment and judgment. We must not only refrain from engaging with these pictures but also teach our technology and AI why they're unacceptable. Instead, what I've observed is people sharing, reposting, and commenting on these images with remarks like, "Just totally in love with this work of art," "One needn't be rich to obtain happiness," "Really love it. Full and complete smile," as well as "so cute" and "#happy!"
Allowing such content to go viral perpetuates racist and classist tropes deeply ingrained in our societal systems.
And then there's this appalling creation. It's disturbing and racist to use the body parts of Black boys to construct the illusion of Jesus' face. Upon closer inspection, you'll realize that these are mutilated AI renderings, not actual children. Despite being horrifying depictions of Black children, the resulting Jesus figure still appears... white? It's perplexing. Yet this image has gained viral traction within the spiritual community, with comments like "Amazing, thank you for sharing ☺️," "I did have to move away I saw it almost immediately. Absolutely beautiful," "So Cool," and "Amazing talent!"
Thankfully, there have been more calls to attention regarding this picture's falseness, with close-ups revealing incomplete bodies, missing parts, added elements, and peculiar expressions. When I pointed out that not only was this AI-generated, but also creepy and racist, one friend who had shared it simply responded that she "liked the image" and "thought it was cool" aka "don't mess with my vibe."
Here's my point: Our real world is brimming with incredible beings doing remarkable things. However, as we increasingly rely on AI to evoke emotions, resonate with us, and prompt reactions, our ability to distinguish between reality and falsehood diminishes. This is already a pressing issue, and it will only worsen. As AI-generated images flood social media, they send a troubling message to both humanity and AI: that truth holds less value than emotional gratification, and that fabricated content overrides authenticity.
So, what's the takeaway from all of this?
I'm urging two things from you.
First, when you encounter content on social media that elicits an immediate emotional response, pause and ask yourself why. What triggers this reaction? Why do you feel drawn to it, repulsed by it, or anything else? Is the content genuine or fabricated? And regardless of its authenticity, what drives your intention to share it?
Second, if you're inclined to post something, take the time to conduct some research. Verify its authenticity. If it's an AI-generated image or any other fabricated content that you still wish to share, consider the potential implications of supporting it. Does it perpetuate undertones of marginalization, oppression, discrimination, or prejudice? Is the message conducive to what you'd want AI to learn?
In essence, think about your intention behind engaging with the content and the impact it might have. You could even apply the three Buddhist gates of speech: Is it true? Is it necessary? Is it kind?
Then, when you do decide to post, make sure you provide a proper description and context.
AI and technology are undoubtedly fixtures of our future, which isn't a bad thing. However, the responsibility to foster equitable and just technology doesn't solely rest on developers—it's a collective duty. So, before you post, comment, or even scroll, think critically about the content's implications and your role in shaping the digital landscape.
Scroll Responsibly.