The Human Touch: Why Creatives and Scholars Say No to AI Storytelling

The novelist Ewan Morrison was both alarmed and amused to discover he had supposedly authored a book titled Nine Inches Pleases a Lady. Curious about the boundaries of generative artificial intelligence (AI), he asked ChatGPT to list the twelve novels he had written. “I’ve only written nine,” he shares. “Always eager to please, it decided to invent three.” The phrase “nine inches” from the fictitious title was borrowed from an explicit Robert Burns poem. “I find it hard to trust these systems concerning factual accuracy,” says Morrison, who humorously adds that he has yet to write Nine Inches—or its sequel, Eighteen Inches. His actual most recent work, For Emma, explores the implications of AI brain-implant chips, focusing on technology’s human costs.

Morrison closely follows developments in tools like OpenAI’s ChatGPT, but he chooses not to bring them into his life or his work. He is part of a growing number of people who resist AI: some are apprehensive about its dangers, others find it frustrating, and many simply prefer humans to robots. Online, proponents of AI often dismiss those who refuse to engage as ignorant Luddites—or worse, as self-satisfied hipsters. I might fit both descriptions, given my decidedly analog hobbies (board games, gardening, animal care) alongside my writing for the Guardian. Several friends swear by ChatGPT for parenting tips, and I know someone who leans on it throughout her workday. Yet I haven’t used it since experimenting shortly after its launch in 2022. ChatGPT might have improved my writing, but this article was crafted with genuine words from my artisanal writing space (okay, my bed). And although I could have synthesized my interviewees’ perspectives from their social media and research papers, much as ChatGPT might, talking with them directly, human to human, was far more rewarding. Two interviews were even interrupted by the interviewees’ pets, to everyone’s amusement (full disclosure: AI transcribed the recordings afterward, animal noises included).

On X, Morrison occasionally spars with AI supporters, who often mock him as a “decel” (decelerationist). He finds it amusing that people assume he is the one resisting progress. “Nothing halts accelerationism more than failure to deliver on promises. Hitting a brick wall effectively decelerates progress,” he asserts. He points to a recent study that found more than 60% of AI-generated responses to be inaccurate.

Morrison was drawn into this discourse by what he describes as “alarmist fears about the possibility of superintelligence and runaway AI.” The deeper he dug into the topic, the more he concluded that these notions are fabrications meant to entice global investors to pour billions—indeed, half a trillion dollars—into the quest for artificial superintelligence, which he regards as a fantasy driven by reckless venture capital.

Additionally, he notes copyright infringement issues present within generative AI, which draws from existing material and poses a threat to him as a writer, as well as to his wife, screenwriter Emily Ballou. He observes that in the entertainment industry, AI algorithms now influence which projects receive funding, leading to a stagnation where “the algorithms dictate ‘more of the same,’ as it’s all they know how to produce.”

Morrison has compiled a long list of grievances regarding AI’s influence over recent years. He is particularly worried about potential job losses (Bill Gates recently speculated that AI might usher in a two-day work week). Other concerns revolve around “tech addiction, ecological ramifications, and detrimental impacts on the education system—with 92% of students now utilizing AI.” He expresses concern over how tech companies monitor users to create personalized AI experiences and is disturbed by the use of AI-powered weapons in Ukraine, calling it “ethically revolting.”

Many share similar reservations about AI. April Doty, an audiobook narrator, is unsettled by its environmental implications: the immense computational power behind AI-driven searches consumes significant energy. “I’m frustrated that there’s no option to disable AI summaries in Google searches,” she says. “Whenever you look anything up now, it feels like we’re harming the planet.” She has begun using alternative search engines instead. “Yet we’re increasingly surrounded by AI with no off switch, which angers me.” Wherever possible, she opts out of AI altogether.

In her domain, she worries about the number of books being “read” by machines. Recently, Audible, Amazon’s audiobook provider, announced it would allow publishers to create audiobooks through its AI technology. “I don’t know anyone who wants a robot to narrate a story,” she explains. “I’m concerned this will degrade the experience to the point where people abandon audiobook services altogether.” Although Doty hasn’t personally lost jobs to AI, many colleagues have, and the threat looms larger. “AI models can’t truly ‘narrate’; narrators need to convey the emotions behind the words, a skill AI simply cannot replicate.”

Emily M. Bender, a linguistics professor at the University of Washington and co-author of the new book The AI Con, has multiple reasons for steering clear of large language models (LLMs) like ChatGPT. “Primarily, I’m uninterested in reading something that lacks a true author,” she states. “I read to grasp another person’s perspective, and synthetic text-generating machines offer no real ‘somebody’ behind the words.” She likens AI text to a papier-mâché assembly formed from countless other individuals’ writings.

Does she feel “left behind,” as advocates of AI often suggest? “Not at all. My response to that is, ‘Where is everyone headed?’” she laughs, implying that the destination is nowhere she wants to go.

Bender contends that turning to synthetic media instead of authentic communication diminishes human connection, both personally and within communities. She cites Chris Gilliard, a surveillance and privacy researcher, who emphasizes that this trend represents a technological movement by corporations to isolate individuals from one another, ensuring all interactions are mediated through their products. “We don’t require that—in our lives or our communities.”

Despite Bender’s long-standing critique of LLMs, she has unexpectedly encountered students submitting AI-generated work. “That’s quite disheartening.” She expresses no desire to police or blame her students. “My role is to help them understand that using an LLM denies them valuable learning experiences.”

Should people consider boycotting generative AI? “Boycotting suggests organized political action, and I think that’s worth exploring,” she remarks. “Additionally, individuals might be better off by avoiding these tools.”

Some people have held out against AI but are beginning to reckon with the possibility that they will eventually have to use it. Tom, a government IT worker, has refrained from using AI in his official duties, though he has watched colleagues employ it in other contexts. Promotion prospects depend in part on the annual appraisals that employees must write. Impressed by a manager’s appraisal, Tom asked how he had crafted it, assuming it represented considerable time and effort. “His response was, ‘I just spent ten minutes—I used ChatGPT,’” Tom recounts. “He suggested I should do the same, which…”

Tom is conflicted about AI: relying on it feels akin to cheating, yet not using it might put him at a disadvantage in his field. “I almost feel like I have no choice but to use it at this point. I might have to put morals aside,” he admits.

Others restrict their use of AI to particular tasks. Steve Royle, a professor of cell biology at the University of Warwick, employs ChatGPT for the mundane task of writing computer code needed for data analysis. “However, that’s the extent of it. I don’t want it to create code from scratch because it leads to far more time spent on debugging later. In my opinion, it’s a waste of time to let it handle too much,” he explains. Royle also fears that excessive reliance on AI could erode his own coding abilities. “Those who advocate for AI say that eventually no one will need to learn anything. I don’t agree with that perspective.”

His responsibilities include drafting research papers and grant proposals, for which he firmly states, “I absolutely will not use it to generate any text.” He values the process of writing, stating, “When writing, you refine your ideas, and through rewriting and editing, you clarify what you want to convey. Allowing a machine to do that is missing the point.”

Reflecting on the societal implications of generative AI, filmmaker and writer Justine Bateman calls it one of the worst ideas our society has devised, and she despises the way it persuades people to surrender capabilities they already possess. “They try to convince people they cannot do tasks they’ve easily handled for years, like writing emails or bedtime stories for their children. We risk becoming mere shells of our former selves, devoid of knowledge and reliant on technology for every decision, from choosing a vacation destination to processing grief.” This emotional hollowing out, Bateman warns, poses a significant threat to humanity.

She believes we are drifting toward a world that many would find undesirable, adding, “This is completely antithetical to my vision as a filmmaker and author.” She compares generative AI to a blender that churns countless examples into a jumbled output. “It’s theft and regurgitation. Nothing original emerges, and anyone who uses it while considering themselves an artist is stifling their own creativity.” While some studios, like Studio Ghibli, have committed to staying away from AI, others, such as DreamWorks, seem eager to exploit it. In 2023, DreamWorks co-founder Jeffrey Katzenberg claimed AI could reduce the costs of animated films by 90%. Bateman believes viewers will soon grow weary of AI-generated content, equating it to junk food that may appeal to some but fails to nourish the soul. To promote human-made artistry, she established Credo 23 and a film festival highlighting cinema created without AI, likening it to an “organic stamp” certifying no AI involvement.

In her personal life, Bateman aims to navigate a reality with minimal AI influence. She is not against technology; rather, she enjoys it responsibly. “I hold a computer science degree and love tech. However, I also love salt, yet I don’t overuse it.”

Interestingly, everyone consulted shares a fondness for technology in some form. Doty considers herself “very tech-forward,” but she values human connection, which she feels AI jeopardizes. “We’re moving towards a world that no one genuinely desires,” she warns. Royle, who codes and manages servers, also identifies as a “conscientious AI objector.” Bender, specializing in computational linguistics and recognized by Time as one of the top 100 people in AI in 2023, states, “I believe technology should serve the community’s needs rather than the goals of large corporations.” She humorously adds, “The Luddites were admirable! I’d wear that label with pride.” Similarly, Morrison remarks, “I admire the Luddites for their resistance in safeguarding jobs crucial for family and community well-being.”
