Hot Girl Summer, Literally
Earth is in her hot-girl era … because AI is burning up the planet.
By Emily Sayre
Artificial intelligence, or AI, has become a hot term over the last few years, but it has actually been around far longer than the modern internet. The 1950s saw the beginning of what is loosely considered AI: digital computers and computer-controlled robots performing tasks most commonly associated with, and performed by, humans. During this time, Alan Turing, a British mathematician, published a landmark paper called "Computing Machinery and Intelligence," which is considered a foundational text for discussions surrounding AI, most famously posing the question: can a machine exhibit intelligent behavior indistinguishable from that of a human? Turing himself proposed a practical way to answer it, which we now know as the Turing test. Before we dive into the AI landscape of today, I’d like to draw attention to this timeline from IBM. It goes much deeper into the history of artificial intelligence and how it became what we know today, if you would like to learn more!
Now, let’s get into how AI has become so popular in recent years. More specifically: if AI has been around for decades, why are we so concerned with it now?
AI in Recent Years
The Turing test involves three participants: a computer, a human interrogator, and a human participant. The interrogator is tasked with determining which respondent is the computer by asking both the computer and the human participant a series of questions. When the test is run with multiple interrogators, the computer passes, and is judged to exhibit intelligent behavior, if a sufficient portion of those judges cannot tell which respondent is the computer and which is the human. If you’ve ever been prompted to complete a CAPTCHA, then you’ve participated in a kind of reverse Turing test; CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart” and is used to protect websites from bots, spam, and other automated cyberattacks. Many Turing tests have been conducted since the test’s creation, but it was not until 2024 that researchers reported a computer convincingly passing one. That computer was running ChatGPT.
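The pass criterion described above boils down to a fraction-of-judges threshold. Here is a toy sketch of that logic; the 50% threshold and the verdict data are illustrative assumptions, not taken from any real study:

```python
# Toy sketch of the Turing-test pass criterion described above.
# Each judge records a verdict: True if they mistook the machine for the human.
# The 50% threshold is an illustrative convention, not a universal standard.

def passes_turing_test(judge_fooled: list[bool], threshold: float = 0.5) -> bool:
    """Return True if the fraction of fooled judges meets the threshold."""
    if not judge_fooled:
        return False
    return sum(judge_fooled) / len(judge_fooled) >= threshold

# Hypothetical example: 3 of 5 judges could not identify the machine.
verdicts = [True, True, False, True, False]
print(passes_turing_test(verdicts))  # True (3/5 = 0.6 >= 0.5)
```

The point of the sketch is that "passing" is a statistical judgment across many judges, not a single yes-or-no exchange.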
Around this time, OpenAI also launched successive versions of DALL-E, an AI model that generates highly detailed images from text. In 2024, OpenAI announced Sora, an AI model that generates videos up to one minute long from text, and Apple announced Apple Intelligence, which integrates ChatGPT to let Siri speak more conversationally, complete more complex tasks, and execute more nuanced commands.
So what does all of this mean? Well, AI has been around for decades, but it has generally worked in assistance with humans, not by imitating humans. When we look at what is largely considered AI today, it is mostly programs that mimic humans, and most often we are talking about generative AI. Generative AI started gaining traction with the first iteration of ChatGPT in 2022. Since then, AI models have rapidly expanded, not only generating their own visuals but also generating their own language to communicate with one another more efficiently. Generative AI mimics the writing, creation, reasoning, and overall imagination of humans, and this has only been possible because of the amount of money, power, and resources that have gone into training these models.
Climate Effects of AI
The climate effects of AI have rapidly expanded alongside the expansion and training of AI models for a few main reasons: advances in computing power, the explosion of data, and the materials required to make all of this happen. Most large-scale AI deployments are housed in data centers, large groups of networked computer servers, which are spread across the United States. The computers alone require GPUs, TPUs, CPUs, RAM, SSD or HDD storage, networking components, and most importantly, power and cooling systems. To put into perspective how much power is required to run AI, researchers have estimated that within a few years AI servers could consume 85.4 terawatt-hours of electricity annually, more than many small countries use in a year. All of this energy consumption puts increased pressure on the power grid infrastructure of local communities. Keep in mind that this does not factor in the power it takes to build these data centers, manufacture the computer parts and chips, deploy those computers, and more.
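To make that scale concrete, here is a back-of-envelope sketch of how such an estimate can be assembled. The server count, per-server power draw, and overhead factor below are illustrative assumptions chosen for scale, not measured figures:

```python
# Back-of-envelope estimate of annual AI-server electricity use.
# All inputs below are illustrative assumptions, not real measurements.

SERVERS = 1_500_000          # assumed number of AI servers in operation
KW_PER_SERVER = 5.0          # assumed average draw per server (kW), incl. accelerators
PUE = 1.3                    # power usage effectiveness: overhead for cooling, etc.
HOURS_PER_YEAR = 8760

kwh_per_year = SERVERS * KW_PER_SERVER * PUE * HOURS_PER_YEAR
twh_per_year = kwh_per_year / 1e9  # 1 TWh = 1 billion kWh

print(f"{twh_per_year:.1f} TWh/year")  # 85.4 TWh/year with these assumptions
```

Even with modest per-server numbers, multiplying by millions of machines running around the clock lands in the tens of terawatt-hours, which is why grid planners take these deployments seriously.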
Data centers also create local air pollution when the fossil fuels burned to power them release greenhouse gases such as carbon dioxide. When greenhouse gases are released, they trap heat within Earth’s atmosphere, contributing to global warming. Data centers also affect local water resources, both by contributing to thermal water pollution through the heat they emit and through the water-based cooling systems meant to keep the onsite computers functional and generate offsite electricity. These cooling systems are one of the many ways data centers perform heat rejection, a process in which excess heat generated by equipment and infrastructure is moved to the outside environment. When this heat moves outside, it evaporates freshwater into the atmosphere, putting unnecessary stress on our already-limited freshwater resources. All of this negatively affects the water-stressed, rural, farmland areas where these data centers are built.
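Data-center water draw is often summarized with a metric called water usage effectiveness (WUE), measured in liters of water per kilowatt-hour of IT energy. A rough sketch of how that translates into freshwater consumed, using assumed values for illustration:

```python
# Rough sketch of cooling water use via WUE (water usage effectiveness,
# liters of water per kWh of IT energy).
# The WUE and energy figures below are illustrative assumptions.

WUE_L_PER_KWH = 1.8                  # assumed WUE; varies widely by site and climate
ANNUAL_IT_ENERGY_KWH = 100_000_000   # assumed 100 GWh/year facility

liters = WUE_L_PER_KWH * ANNUAL_IT_ENERGY_KWH
megaliters = liters / 1e6

print(f"{megaliters:.0f} million liters/year")  # 180 with these assumptions
```

The takeaway is that water use scales directly with energy use, so the same growth driving electricity demand drives freshwater demand in the communities hosting these facilities.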
Usage in the Book Community
So why should the book community care about the effects of AI? Well, we have seen an increased use of artificial intelligence within the book industry in many different ways:
AI-generated book covers
AI-generated character art
AI-generated profile pictures
AI-generated social media trends
And most recently:
AI-generated books for purchase
AI-generated summer reading lists
AI usage in major book publishers (see page 12-13)
AI-generated audiobooks through apps such as Audible, Google Play, ResembleAI, Respeecher, ElevenLabs, and more
AI audiobook narration has become so popular that there are over 60,000 AI-generated audiobooks available on Audible alone. HarperCollins, one of the Big Five publishers, has already teamed up with one of these AI audiobook apps: ElevenLabs, an app that not only lets you upload text to be AI-narrated but also lets you upload audio clips for an AI to mimic and clone. Read that again: an AI app that lets you upload any audio clip to be cloned through AI generation.
ElevenLabs also has a GenFM podcast feature wherein AI co-hosts formulate an AI-generated podcast based on the textual content you upload, which teaches the AI model what discussion topics you want to listen to. They also have a conversational AI feature, which helps you deploy realistic, extensive voice agents that can be integrated with a phone service like Twilio and can talk directly to customers as a support agent, sales rep, or any other customer-facing role. I mention this because both of these features directly train the AI model on what a natural voice sounds like, how to talk conversationally, what the proper, or at least current, grammar is, and how to speak with our current sentence structure and flow. Overall, it’s teaching artificial intelligence how to mimic a human.
There’s a whole ethical debate on whether AI-generated materials should be consumed or not (I know where I stand on this), but even if we ignore the ethics, the current and future AI usage within the book community is directly and negatively affecting local communities, climate change, and global warming. I encourage us all to be extra vigilant about the media we consume, especially now that a Big Five publisher, one that signs real-life human authors, is partnering with a generative AI company that is directly hurting and taking jobs away from the narrators who ARE the book industry.
Know what books you’re reading and spread awareness of AI usage within the book community. Let’s not forget that just this year, Meta (Facebook’s parent company) was accused of downloading massive troves of pirated texts from LibGen and Anna's Archive, both of which are huge digital shadow libraries filled with stolen intellectual property (mostly books, academic papers, and other texts), and using those texts to train its latest AI model, Llama 3. I can guarantee that one of your favorite authors is on that list.
Now is the time to educate yourself on the harmful effects of AI and speak out against its usage – if not for our increasingly hot-girl planet and its communities, then for our bookish authors, narrators, workers, and creators.