Intro: This is Citations Needed with Nima Shirazi and Adam Johnson.
Nima Shirazi: Welcome to Citations Needed, a podcast on the media, power, PR and the history of bullshit. I am Nima Shirazi.
Adam Johnson: I’m Adam Johnson.
Nima: You can follow the show on Twitter @CitationsPod, Facebook Citations Needed, and become a supporter of Citations Needed through Patreon.com/CitationsNeededPodcast. All your support through Patreon is so incredibly appreciated as we are 100 percent listener funded. We run no ads, we don’t have corporate sponsors. We do that on purpose to keep the show sustainable, but we do need the support of listeners like you.
Adam: Yes, if you can, please subscribe to us on Patreon, it helps keep the episodes themselves free and keeps the show sustainable.
Nima: “Is artificial intelligence advancing too quickly?” 60 Minutes warns. “BuzzFeed CEO says AI may revolutionize media, fears possible ‘dystopian’ path,” CBS News tells us. “TV and film writers are fighting to save their jobs from AI. They won’t be the last,” CNN reports.
Adam: Over and over, especially in recent months, we hear this line: AI is advancing so fast, growing so sophisticated, and becoming so transformative as to completely reshape the entire economy, to say nothing of our shaky media landscape. In some cases, those in the press deem this a good thing; in others, a bad thing but in terms that get the problem all wrong. But the vast majority of all media buy the basic line that something big and transformative isn’t just coming, but is in fact already here.
Nima: We’re not going to predict the future, but we can comment on the present. Yes, AI platforms can generate low-level marketing copy, pro forma emails, and shitty corporate art. But progress in these capacities does not, as such, portend a radical advancement into actual human intelligence and creativity.
Adam: Meanwhile, there’s little to no evidence to support the claim that AI, namely large language models like ChatGPT, actually can perform — or even intervene to save time performing — any type of high-level writing craft, journalism, fiction, screenwriting, and a host of other creative productions.
Nima: So why do we keep hearing otherwise? What purpose does this type of uncritical, providential thinking serve? And who stands to benefit from the vague sense of a future of AI-written essays, articles, and scripts, no matter how terrible they may be?
Adam: On today’s show, we will explore media’s current Inevitability Narrative, namely its credulous warning that ChatGPT is about to do the work of media and entertainment professionals, examining the ways in which this narrative, despite the evidence to the contrary, serves as a constant, implicit threat to workers and a convenient pretext for labor abuses such as wage reduction, layoffs, and union-busting, and how this media hype works to obscure the very real, banal harms of quote-unquote “AI,” such as racism, surveillance, over-policing, and lack of accountability for the powerful.
Nima: We’ll be speaking with Dr. Lauren M.E. Goodlad, a Distinguished Professor of English and Comparative Literature at Rutgers, as well as a faculty affiliate of the Center for Cultural Analysis and the Rutgers Center for Cognitive Science. Dr. Goodlad currently serves as chair of a new interdisciplinary initiative on Critical Artificial Intelligence and as Editor-in-Chief of the multidisciplinary journal Critical AI, published by Duke University Press.
Lauren M.E. Goodlad: The labor market badly in need of a new tech bubble now that metaverse has gone to hell, and a few other things along with that, is, you know, thinking about AI as a big wedge for squeezing more productivity out of people and particularly creative people who in the past were considered immune to automation.
Adam: This episode is a spiritual successor to Episode 92: The Responsibility-Erasing Catch-all of ‘Automation,’ in which we discussed how the abstract notion of automation is used as a built-in, evergreen excuse for labor abuses. We won’t be getting into the general history of the use of machines as a threat to labor for this episode, so if you’d like to learn more about that, check out episode 92.
Instead, what we’ll be talking about today is a more contemporary, very fashionable, very sexy, and specific incarnation of the media-driven, we don’t want to say scare or panic, we want to be generous here, but general fear, I guess.
Nima: Wizardry. Wonderment.
Adam: Increasingly for verticals such as media, entertainment, education, and other fields with kind of high level writing. We want to start off by saying that obviously, we are not software engineers, clearly, otherwise we wouldn’t have a media criticism podcast, we’d be making a quarter million dollars a year and be way happier. But what we do know is we know bullshit and we know what writing is, because obviously, you know, you and I are professional writers effectively and have been for some time.
Nima: I’ve been using ChatGPT to do all of my podcast work.
Adam: Oh, the whole time. Yeah. So, by the way, everyone who does the ChatGPT article, they always think they’re so clever by saying around the third or fourth paragraph.
Nima: That’s right.
Adam: ‘The first two paragraphs were written by ChatGPT.’ Whoa.
Nima: Citations Needed cold open was done by ChatGPT. It was not.
Adam: Yeah, like, I saw 25 different articles do that gimmick, and it’s like, you know, ‘His real name is Albert Einstein,’ it’s like, ‘No way! I can’t believe that reveal. That was fucking ChatGPT? I can’t believe it.’ But of course, it’s like, oh, yeah but they doctored it up, and they rewrote half of it. So it doesn’t really mean anything.
Nima: That it was sent to an editor who rewrote it.
Adam: Also it read like shit. Yeah, and I think anyone who is a professional writer by trade or does anything in that kind of line of work, even academics or screenwriters, what have you, I think there’s a tendency to not want to appear overly precious or territorial about what one does, and I think one of the reasons that a lot of people in media haven’t really criticized some of these underlying assumptions is because they don’t want to appear unhip. So we’re going to listen to a clip that I think kind of lives in the collective pop memory of so many people where Bill Gates, then the CEO of Microsoft, is on David Letterman in 1995 and they’re discussing the internet and I want to play that clip real quick and then I want to talk about it.
David Letterman: What about this internet thing? Do you know anything about that?
Bill Gates: Sure.
David Letterman: What the hell is that exactly?
Bill Gates: Well, it’s become a place where people are publishing information, so everybody can have their own homepage, companies are there, the latest information. It’s wild what’s going on. You can send electronic mail to people. It is the big new thing.
David Letterman: Yeah. But you know, it’s easy to criticize something you don’t fully understand, which is my position here.
Bill Gates: Go ahead.
David Letterman: But I can remember a couple of months ago, there was a big breakthrough announcement that on the internet or on some computer deal they were going to broadcast a baseball game, you could listen to a baseball game on your computer, and I just thought to myself, does radio ring a bell? You know what I mean?
Bill Gates: There’s a difference.
David Letterman: There is a difference.
Bill Gates: It’s not a huge difference.
David Letterman: What is the difference?
Bill Gates: You can listen to the baseball game whenever you want to.
David Letterman: Oh, I see. So it’s stored in one of your memory deals.
Bill Gates: Exactly.
David Letterman: And you can come back to it later.
Bill Gates: Exactly. It’s what we talked about earlier. Yeah.
David Letterman: Do tape recorders ring a bell? Yeah, I just don’t know. What can you, just knowing me, the little you know me now, what am I missing here? What do I need?
Bill Gates: Well, if you want to learn about the latest cigars or auto racing, statistics.
David Letterman: Well, you know I’ve got that covered. I subscribe to two British magazines devoted entirely to motorsports and I call the Quaker State Speed Line about two times a half hour. So now would a computer give me more than I’m getting that way?
Bill Gates: You can find other people who have the same unusual interests you do.
David Letterman: You mean the troubled loner chat room on the internet?
Bill Gates: Absolutely.
Adam: Yeah, this clip makes the rounds all the time. It’s sort of infamous, it lives in people’s collective memory, because it shows that nobody wants to be David Letterman in this clip.
Adam: It’s like the worst thing you can possibly be.
Nima: You don’t want to be the completely out-of-touch old man.
Adam: Unhip, luddite, confused.
Adam: Then there’s the opposite problem. The opposite problem is all the people who predicted in the late ’60s that we’d be playing golf on the moon by 1980. There is a kind of overhype and there is an underhype. What we want to try to do in this episode is be sober about the reality of the moment without falling into either of these two tropes, and to do so we have to soberly assess what actually exists. I’m going to say something at the top and try to be as generous as possible, which is that if someone asked, what impact have recent developments in LLMs or these other kinds of quote-unquote “AI” had on software engineering and coding, in a million years I would never speculate about what the impacts on that market would be, because I don’t understand software engineering. However, many people who don’t write professionally, many of whom are software engineers, feel entirely comfortable telling us how LLMs and other quote-unquote “AI” technologies will impact our industries. I think that because this technology has made certain impacts in certain areas, those kinds of one-to-one changes are being projected onto other industries where they don’t actually make sense, and this has led to a lot of vibes-type reporting and general kinds of social essays. It’s vibing hard right now, and we’re just trying to get a sense of what’s the vibe and what’s actually reality.
Nima: And part of this, Adam, also has to do with accepting the premise that everything will fundamentally change, and that it’s just about how to build fairer, better technology from the outset. But resistance to the idea that, you know, everything is going to be AI-driven isn’t really there. So the premise is accepted; the framework of what is supposed to then be addressed, either by criticizing the AI hype or leaning into it, is being accepted at face value. And I think what we’re discussing today is: how about we start a little earlier in the conversation, without accepting the premise, and then figure out what actually needs to be done. Because advances in tech are very real, they have very real implications for our economy, our communities, our families, our work, but it’s about assessing what the threats or opportunities actually are, and not just accepting this very much big-tech-driven idea that, well, you know, look, our world is fundamentally changing, it’s just what we do now.
Adam: Yeah, I mean, look, when it was developed in the 1950s, nuclear energy was a game-changer, but it didn’t write episodes of I Love Lucy. Not everything that’s changing has to change everything all at once. And I think that when you’re in the media criticism game, and you’re trying to assess the media’s perception of things like crime, terrorism, you know, really complex, very loaded things, and you say, ‘Well, maybe they’re overemphasizing this or that,’ people have a tendency to say, ‘Oh, well, you think it’s not real at all,’ and it’s like, no, no, no, it’s real, it’s just we need to have an accurate and sober interpretation of these things, because there are cynical people in power who have incentive to heighten things in a certain way, and I think it’s important that we assess the reality of the situation rather than just sort of accepting the most maximalist interpretation of events.
Nima: Right. Because then we’re just shadow boxing against what they want us to shadow box against and not having a different interpretation. So let’s dig into this and give a bit of background on the development of AI itself.
The through line of artificial intelligence research really originated after World War II, starting with English mathematician Alan Turing. Turing gave a lecture on the subject in 1947 and published an influential paper on it in 1950 entitled, “Computing Machinery and Intelligence.” In that paper, he introduced the quote-unquote “Imitation Game,” which we now know as the famous Turing Test.
In this research test, an interrogator is in a room separated from another person and a machine. The object of the game is for the interrogator, through a series of questions, to determine which of the other two is the real human person, and which is the machine. Turing predicted that machines could be programmed to perform increasingly well at the game, in other words, to get better and better, to learn how to convince the interrogator that they are in fact a person.
Though he recognized that machines could be programmed to achieve this, Turing didn’t have delusions about the sentience of those machines. He argued that the question of whether machines could think was quote-unquote “too meaningless” to merit discussion. Turing’s work would of course become deeply influential in this field, inspiring much more research on the topic throughout the 1950s.
That decade, the term AI, or artificial intelligence, was coined, in 1955 to be precise, by emeritus Stanford Professor John McCarthy. McCarthy defined AI as, quote, “the science and engineering of making intelligent machines” and defined “intelligence” as, quote, “the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.”
Stanford’s Institute for Human-Centered Artificial Intelligence adds this, quote, “Much research has humans program machines to behave in a clever way, like playing chess, but, today, we emphasize machines that can learn, at least somewhat like human beings do.”
Adam: Now, let’s jump ahead to the mid-2010s, where the story really begins. By then, AI and automation as concepts, and as vague threats to workers in industries like fast food and grocery stores, were seeping into the mainstream, as were fears about AI sentience and the prospect of letting AI run amok and dominate our lives. It was at this time that multiple major, slick, corporate- and billionaire-funded “AI ethics” organizations emerged, allowing businesses like Google and Amazon and billionaire figures like Elon Musk and Peter Thiel to continue to exert control over uses, development, and, most importantly of all for our podcast, public perceptions of AI.
For example, the Future of Life Institute, which was formed in 2014 with funding almost entirely from Elon Musk, with a stated mission to, quote, “steer transformative technologies away from extreme, large-scale risks and towards benefiting life.”
And in December 2015, we saw the launch of OpenAI, the quote-unquote “research laboratory” whose products are behind so much of the current AI narrative. At the time of its founding, OpenAI had a murderer’s row of tech investors in its top ranks.
Its funders, which initially committed $1 billion, included Sam Altman, whose startup accelerator Y Combinator catapulted companies like Airbnb and Instacart to fame, it’s probably the most famous accelerator in Silicon Valley; Elon Musk, of course, who needs no introduction; Reid Hoffman, co-founder of LinkedIn and frequent visitor to Jeffrey Epstein’s island and mansion in New York; Peter Thiel of very right-wing ghoul funding fame; and Amazon Web Services of Jeff Bezos fame.
So, originally classified as a nonprofit, OpenAI officially became a for-profit company in 2019. Since its founding, OpenAI has released multiple so-called “generative AI” products. So let’s define what “generative AI” is. Again, we’ll cite Stanford’s Institute for Human-Centered Artificial Intelligence, which defines generative AI as, quote, “a subset of artificial intelligence that, based on a textual prompt, generates novel content.” I’m sure everyone’s seen and played with one of these at this point.
In January 2021, OpenAI introduced DALL-E, which generates images based on a text prompt, for example: “An astronaut riding a horse in photorealistic style.” How does it work? Put simply, it picks apart images on the internet and assembles fragments of them into a new image — usually, one that looks distorted but still pretty impressive. OpenAI has since released an updated and much-improved version, DALL-E 2.
But the type of generative AI that’s really caused a stir in recent months is the large language model, abbreviated as LLM. LLMs perform tasks — or attempt to perform tasks — like answering questions, summarizing text, and generating lines of code, again in response to a text prompt. Their prose, harvested and pieced together from a large dataset of text, is meant to mimic that which is written by human beings.
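To make the "prediction machine" framing concrete, here is a toy sketch in Python. This is emphatically not any real model's code: actual LLMs use neural networks trained on enormous corpora, but the basic premise of "predict a likely next word from patterns in training text" can be illustrated with simple bigram counts.

```python
from collections import Counter, defaultdict

# Toy illustration only: real LLMs learn statistical patterns with
# neural networks over billions of documents. This bigram counter
# shows the core idea of "predict the most likely next word."

def train_bigrams(corpus):
    """Count which word tends to follow which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5):
    """Greedily emit the most frequent continuation of each word."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat sat on the hat"
model = train_bigrams(corpus)
print(generate(model, "the"))  # prints "the cat sat on the cat"
```

Note that the output is fluent-looking but derivative by construction: every word comes straight from the training text, which is essentially the point the hosts make about LLM prose throughout this episode.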
OpenAI and Google both introduced their own versions in 2018. OpenAI has created multiple versions of its LLM, which is called GPT, short for generative pre-trained transformer. OpenAI’s chatbot ChatGPT, introduced in November 2022, is built on the company’s GPT-3.5 and GPT-4 models.
ChatGPT, as I’m sure you know by now, became wildly popular for its kind of fun, impressive, very spooky ability to mimic not just what somebody would write but the tone in which they would write it, largely for stunt-y and recreational purposes. Right on cue, every major Silicon Valley company — Google, Meta, which is Facebook, Amazon, you name it — has released its own LLM chatbot, or at least announced plans to soon release one.
Nima: Now, these technologies, especially LLMs, have garnered breathless, largely uncritical coverage in news media. In March 2023, The Future of Life Institute, the Elon Musk-associated collective we mentioned earlier, published a high-profile open letter calling for a pause for at least six months on the, quote, “training of AI systems more powerful than GPT-4.” Among its signatories were Musk, Apple co-founder Steve Wozniak, and former presidential candidate Andrew Yang. The letter, in part, stated this, quote:
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
The media was captivated. CBS ran the headline, “Elon Musk among experts urging a halt to AI training in open letter,” and Science Magazine followed up in April with, “Alarmed tech leaders call for AI research pause.”
Adam: Now, of course, it’s very fair and reasonable to express concerns about quote-unquote “AI” and the way it will be deployed. But this letter was doing so in a very particular way that’s worth interrogating. It made no mention of power dynamics and no identification of bad actors. It focuses a lot, as you’ll see in the rest of the show, on hypothetical harms of AI, without proper evidence, based on a vague sense that it’s this ubiquitous threat when in many ways it really isn’t. And it’s rooted in an ideology called longtermism, an ill-defined philosophy very popular in Silicon Valley that appoints the ultra-wealthy as benevolent influencers of humanity. Longtermism falls under the umbrella of Effective Altruism, a bizarre, increasingly cult-like ideology that has taken over a lot of Silicon Valley, whose most famous adherent is the now-disgraced Sam Bankman-Fried of FTX.
Now, this is a criticism I’ve been making for probably eight years now: the AI panic is itself a form of AI marketing in many ways. Even when it’s done in this kind of Terminator 2, it’s-going-to-take-over-the-whole-world register, this is effectively marketing copy for people invested in the AI sector. This is why every six months, Bill Gates and Elon Musk and people who are heavily, heavily invested in this sector issue these grand warnings about the way AI is going to take over the world, because that’s effectively a way of hyping up their own products. Brian Merchant, a tech reporter for the LA Times and one of the very few really critical writers working on this right now, wrote, quote:
Something to keep in mind as we think through this AI hearing — the more Sam Altman conjures the specter of an ultra-powerful AI, the more he and OpenAI stand to profit both materially and reputationally.
In response to a similar Elon Musk warning about AI turning our universe into a world of Terminators, I wrote in July of 2015, quote:
Note the same parties warning about AI have billions invested in it. Warnings are thinly veiled investment hyping.
So it’s kind of a similar argument I’ve been making, which is to say, if they really thought that it was going to unleash all this horrible stuff, then clearly they wouldn’t be spending all their money on it. It’s a similar marketing strategy to, I don’t know if you ever see those gimmicky weight-loss infomercials where they say, ‘Look, if you’re interested in losing 5–10 pounds, this is not the product for you. This is extreme, hardcore stuff. Warning: only use this if you plan on losing 50 pounds or more.’ The pitch is that this is so powerful that you’re not ready for it. It’s a classic carnival barker tactic, and what makes it effective is that it does veer into and merge with the real problems of quote-unquote “AI,” much of which we’ll get into with our guest. There are actual problems around surveillance, policing, consolidation of power, widespread institutionalized plagiarism; there are real threats from quote-unquote “AI.” But it’s not the sexy, it’s-going-to-completely-reshape-our-entire-world-economy-in-a-matter-of-months-perhaps-years story, which is why organizations like the Future of Life Institute are given billions of dollars from people like Elon Musk, because they know that with this technology there’s going to be pushback and there are going to be critics, and they want to control that criticism.
Nima: Right, and so they kind of lean into this hand-wringing, which is what we saw with that open letter. If this were such an existential threat, a mere six-month pause of investment into even more powerful AI platforms would not nearly be enough. But what that letter does is work as a marketing press release for this technology, while ignoring the decades of work, and the decades of similar warnings, by technologists who are not mentioned in that letter. So the letter elicited quite a bit of rightful criticism, particularly from those very AI researchers and scholars who had been working on this for years. The authors of the seminal 2021 paper “On the Dangers of Stochastic Parrots,” which, among other things, addressed the lack of public-interest design within large language models, noted in a response to the Elon Musk letter, quote:
The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities.
But still, in April, CBS’ 60 Minutes contributed to this very hype, broadcasting a laughably credulous report about Google’s ChatGPT competitor, Bard.
Adam: Now, when Nima says credulous, he means credulous. You have to watch this thing in its entirety. It was roasted on Twitter for a good three days because it looks, feels and reads exactly like marketing copy. I don’t know what the corporate partnership is, but there has to be one, or else 60 Minutes is run by a bunch of toddlers, because it is the most slackjawed marketing hype-up copy you’ve ever seen.
Nima: So without playing the entire segment, which will explode your brains, we’re going to now listen to just a short clip where 60 Minutes host Scott Pelley sits down with Google executives to test this Bard system.
Scott Pelley: Our experience was unsettling, confounding. Absolutely confounding. Bard appeared to possess the sum of human knowledge with microchips more than 100,000 times faster than the human brain. We asked Bard to summarize the New Testament and it did it in five seconds, in 17 words. We asked for it in Latin. That took another four seconds. Then we played with a famous six-word short story often attributed to Hemingway, “For sale: baby shoes, never worn.” The only prompt we gave was “finish this story.” In five seconds, holy cow. “The shoes were a gift from my wife, but we never had a baby.” From the six-word prompt, Bard created a deeply human tale with characters it invented, including a man whose wife could not conceive, and a stranger, grieving after a miscarriage and longing for closure. I am rarely speechless. I don’t know what to make of this.
Adam: It’s just plagiarizing other people who’ve commented on the most famous short story ever. It’s bad and it’s rote and it’s just completing the thought that everyone else has had millions of times.
Nima: I love that the examples they give, the gee-whiz, holy-cow examples, aren’t even good. This tech can actually do really amazing stuff. Let’s be clear that the technology is incredibly impressive and startling, but even the examples they give are from someone who’s like 186 years old, being like, ‘What? How can it understand the most famous short story of all time, that millions of people have commented on?’
Adam: Yeah, it’s using predictive text, probably assembling or splicing together, plagiarizing, other people’s responses to this prompt, the most cliche prompt that every creative writing class gives you. So it’s a prediction machine that can mimic human speech, and that’s impressive as far as it goes, but the gimmick falls apart past a certain point, which we’ll get into.
So now let’s move to the kind of current media reaction to this type of technology, which is said to be replacing everybody from journalists to screenwriters to poets, and, of course, most perniciously of all, as always, academics, educators, teachers, right? Because they’ve been trying to replace teachers with iPads for 20 years, and so this is just another iteration of that, and people are excited to kind of push this narrative, especially those in management.
In February of 2023, Mathias Doepfner, the CEO of German media group Axel Springer, claimed that AI could replace journalists. Axel Springer owns multiple US media companies, including Politico and Insider (formerly Business Insider). Doepfner wrote in an internal memo, quote, “Artificial intelligence has the potential to make independent journalism better than it ever was — or simply replace it.” He added that tools like ChatGPT promise a “revolution” in information and would soon be better at the “aggregation of information” than human journalists.
The next day, media ran reports on Doepfner’s prognostications, releasing vague warnings about the potential of AI to take media jobs. The Guardian ran a piece headlined, “German publisher Axel Springer says journalists could be replaced by AI,” while CNN warned, “The owner of Insider and Politico tells journalists: AI is coming for your jobs.”
Politico, which again is owned by Axel Springer, urged journalists to embrace AI a few weeks before, on Feb. 6, with the mega-bait-y, Kent Brockman-influenced headline, “Why I Welcome Our Future AI Overlords.”
Now, interestingly, Axel Springer had been planning layoffs anyway.
Nima: Purely coincidental, I’m sure.
Adam: The company announced, on the same day of Doepfner’s memo, February 28, that it would cut around 300 jobs at its German publications and soon laid off a bunch of people at Insider the next month.
So again, this sort of ‘we’re going to have AI replace a bunch of people’ line dovetails nicely with layoffs that were planned anyway; there is no indication that anyone has actually been replaced by AI. One assumption is that somehow AI can save on labor. Proponents of AI claim it can reduce labor costs. This is true for very formulaic and rote forms of writing, say, for example, pro forma emails. Let’s say you run a sales operation and you need to push out a bunch of emails to your clients reminding them that they need to pay a bill or asking them how their product is working or whatever. Obviously, AI can reduce that labor quite a bit. But for any high-level writing, it’s useless. It doesn’t cut any of the net labor that goes into the process of writing.
This was kind of acknowledged by Insider editor-in-chief Nich Carlson, whose publication Axel Springer owns. In April 2023, Axios reported that Insider was, quote, “experimenting with ways to leverage AI in its journalism,” unquote. Then there was a memo released by Carlson. The memo is very long, so we can’t read the whole thing, but we’ll put it in our show notes. Basically, it’s a really strange kind of concession. It starts off with things like, ‘Oh, you know, basically, the corporate suits are making me write this memo. We need to somehow integrate AI because it’s the next big thing and they think it can cut labor costs from our unionized and mouthy and annoying journalists and writers and editors.’ But then it goes on to say: don’t use the actual text that comes out of it, use it as a tool, whatever that means, right? So use this tool that’s supposedly going to be more efficient, but don’t actually use the text that comes out. And by the way, it can casually libel people because it makes things up. The editors have to fact-check every single little thing it says because it makes shit up, and oftentimes the writing is horrible, generic, cliche, can’t integrate sources, can’t use context, can’t do any kind of critical race theory analysis, can’t do gender analysis, can’t do homophobia analysis, can’t do political analysis. So you basically have to rewrite everything anyway. But go ahead, use it, because it’s an exciting new technology.
Nima: That’s right, it’s the new thing, and we can’t fall behind.
Adam: And this gets to a key kind of talking point I want to bring up and we’ll discuss with our guest, which is that if you push any of these chatbot Millerites who think it’s going to replace journalism, screenwriting, novel writing, almost all of whom are selling some kind of consultancy or some kind of product that’s supposed to be this new sexy tool that’s going to streamline this or that, they’ll always concede that it cannot, as such, write anything that’s any good. But what they’ll say is it can write something that, by the way, you have to take their class or their buddy’s class on how to write the prompts for. There’s a whole new cottage industry of people teaching you to write the prompts, supposedly to save you time. So that’s more money and more effort that goes into it. The claim is that it’ll give you something that you can then go and rewrite, with the implication that this saves labor. Now, this is something said by someone who’s never been a writer or an editor, because being an editor is an incredibly nuanced and complex and political and ideologically difficult job, even for something that is not really seemingly ideological or political, even writing a novel, right? Because you have to make sure that you’re balancing many considerations at once. But I think a lot of people think editing is just fixing a comma, which is what I’ve realized in this whole discourse. One thing that has become brutally clear to me is that a lot of people don’t actually know what goes into writing, don’t know what goes into high-level writing, and don’t know what goes into editing, which is why maybe I am being a little bit territorial and defensive. Anyone who’s ever edited a day knows that quote-unquote “rewriting” bad writing is harder than just writing it well the first time. You’re not saving any net labor.
In fact, you’re creating superfluous labor, which is what Nich Carlson effectively says in this memo in a very passive-aggressive way: you’re creating more labor. Because, again, you can do this now. I can hand you a random word generator, a monkey on a typewriter, and say, ‘Go rewrite this.’ Have I saved you labor? No, it’s purely an ontological trick. You haven’t actually saved anything, you’ve done the seven-minute abs bit; you haven’t saved any labor, you’ve just redefined labor, right? You’ve redefined the terms. And so much of this discourse is predicated on that key point. It’s like, ‘Okay, well, it’s not going to actually do the writing we need it to do, but it can, like, be a tool.’ Well, what does that mean? It can be a tool how? Because the writing it generates is so bad, so cliche, so derivative, and it plagiarizes and gets facts wrong. ‘Oh, well, it’s going to in six months or a year.’ Well, okay, then call me when that happens. Until then, what the fuck are you talking about? Because the thing it actually produces is so bad that it creates more labor than it saves, and what it really does is justify C-suite jobs. It’s labor discipline, yes, it has a huge psychological aspect to it.
But really, what it does is justify what are almost always superfluous middle managers and executives. When you work in a creative field, you know, with the WGA strikes, which we’ll get into, creative writing, screenwriting, when you’re a studio executive or a middle manager at a publication who doesn’t actually do day-to-day writing, you need to create the appearance that you’re saving labor, and this is why AI has been such a boon to that class. Because they are largely superfluous, and they need to look like they’re creating some kind of savings, because they can’t write, they can’t edit for the most part, and so they need to look busy, like they’re doing something. And these things, like the NFT trend from a year and a half, two years ago, or the metaverse trend around that same time and a little before, it’s the next big thing. It gives them something to put in a memo to investors and to tell their own bosses: ‘Look at this new thing, we’re leveraging AI,’ blah, blah. And it’s like, okay, well, how? No writer’s going to say, ‘Oh, I’ve just been handed a bunch of garbage and I’m going to rewrite it so now they can call me a rewriter and pay me a third.’ It’s like, no, that doesn’t work. That’s not going to pass the first sniff test of any labor negotiation.
Nima: Don’t be too unionized. Don’t be too demanding, because machines are coming for your jobs, and therefore, if you want to stick around, you have to be pretty compliant. We see this in the entertainment industry a lot and most notably currently with the ongoing Writers Guild of America or WGA strikes.
Now, on March 20, 2023, the WGA entered into contract discussions with the Alliance of Motion Picture and Television Producers, or AMPTP, which represents Hollywood studios and streaming services. The two parties failed to reach an agreement by the May 1 deadline.
At midnight on May 2, the WGA began a strike in response to deteriorating working conditions for unionized TV writers. As WGA member Josh Gondelman explained in The Nation, quote:
The conditions that caused this strike (the first by the WGA since the 2007–08 walkout) have been percolating for years. While overall production budgets have risen sharply, writer pay has declined by 4 percent over the past decade — 23 percent when adjusted for inflation. The shift of film and television to streaming has meant lower residuals (the money writers get paid when their shows are re-aired) and shorter seasons. The proliferation of so-called “mini-rooms” — where small writing staffs often work on a show before it is green-lighted — has many writers taking short-term jobs for less than their established rate.
Gondelman added that companies have long been able to increase pay and improve other working conditions for writers, but simply refuse to do so, quote:
Netflix, Paramount, Comcast, Disney, Fox, and Warner Bros reported a total of $28–30 billion in operating profit each year from 2017–21. In 2021 alone, eight Hollywood CEOs pocketed nearly $780 million between them.
Adam: It’s important to emphasize that these concerns, like Gondelman said, had been brewing for years, long before the introduction of ChatGPT. Yet in recent coverage of the strike, media have been portraying ChatGPT and other LLMs as the chief threat, or at least one of the major ones.
For some context, the WGA included one clause about future AI use in contract negotiations, seeking to, quote, “regulate use of material produced using artificial intelligence or similar technologies.” Yet media are portraying this as evidence that all writers are worried first and foremost about AI, rather than corporate greed.
Repeatedly, media warn of quote-unquote “AI scabs.” The New York Times ran an article in April 2023, “Will a Chatbot Write the Next ‘Succession’?” Rolling Stone in May, “Why Striking Hollywood Writers Fear an AI Future.” Hollywood Reporter May 3, “As Writers Strike, AI Could Covertly Cross the Picket Line.” CNN May 5, “TV and film writers are fighting to save their jobs from AI. They won’t be the last.” Wired May 5, “AI, the WGA Strike, and What Luddites Got Right.” AP May 5, “Could AI pen ‘Casablanca’? Screenwriters take aim at ChatGPT.” May 6, Washington Post, quote, “Could AI help script a sitcom? Some striking writers fear so.”
So this became an overarching theme: that the threat of AI was the reason for the labor dispute, rather than, you know, something important to some, less important to others, certainly something they may want to get ahead of or hedge their bets on. But the reason they feel ground down to a nub is not some genuine threat that a fucking chatbot is going to write their scripts. It’s that the corporate CEOs are just greedy, and those CEOs then have an incentive to push this idea that writers are going to be replaced by AI scabs to further diminish and trivialize their labor. There’s a psychological component to it. This is something that WGA board member Adam Conover, the creator of Adam Ruins Everything and a former Citations Needed guest, has made clear. He states that AI-written scripts are, quote, “a fad that’s going to disappear in a year and a half.” Conover added that industry heads would likely try to use AI to generate scripts, then abandon it when it stops benefiting them, stating, “It’s not easier to replace us with AI than it is to find someone to write the scripts, and that’s not possible for them to do because it’s an extremely skilled profession.”
And so what Conover argues is that the real threat is not that they can genuinely replace writers, at any point in the process, or lessen net labor at all with AI; it’s that they may try to do it anyway, just like they tried Quibi, right? And that this could have real-world harms, which is why the union has an incentive to protect against it. But we’re not accepting the premise that the actual technology can really do their job. At this point, that’s just evidence-free speculation. This strikes me as a really good reason why we shouldn’t simply accept the premise that these AI scabs are a genuine threat. Accepting that is not the same thing as hedging your risks in a contract, which I think is perfectly sensible, because, again, who knows what the technology is going to look like in 10 or 20 years. And, more to Conover’s point, who knows what they’re going to try, which is not at all the same thing as what works. Because, again, we know these are people whose job is to create the appearance of cost-saving for their investors and corporate bosses and to pump up stock prices.
But you don’t need to accept that it’s an actual thing. One such piece, written for Rolling Stone, quoted a programmer and AI consultant, Dylan Budnick, who told Rolling Stone, without any kind of pushback, quote:
In the tech world, some see machines as a genuine threat to creatives. ‘The writers are very right to be spooked by this,’ says programmer and AI consultant Dylan Budnick. ‘Studios can save a buck, wrangle creative control away from the writers to please advertisers-funders, and focus on editing a prewritten script instead of dealing with a range of voices and takes from a writers room.’
Most elements of a screenplay ‘can all be easily spit out by the models used by OpenAI,’ Budnick claims. ‘The job then becomes reading and editing, which is easily done by whoever has creative control.’ Given a prompt such as: ‘Write me a movie about Spider-Man meeting Batman, include stage directions, suggest actors, soundtrack, etc. Write it in the style of a detective noir film,’ the model can generate a roughly 50-page script.
Budnick implies that AI will streamline the writing process, reducing the human labor required to write a script. But this isn’t true. Nothing he said here was true. So I emailed Budnick asking him to clarify what he meant when he claimed it can create a 50-page AI-generated script. What does that mean? What does that look like? To which he responded, quote, “it’s a matter of opinion on whether or not the script would be ‘good enough’ on first pass.” Then he said, quote, “You can modify the initial prompt to be whatever you’d like.” But this is simply not true. And I said, can you provide me with this 50-page script you’re talking about? Can you send me something a writer could riff off of, something that’s sort of good enough? And of course he couldn’t, because he can’t, because it can’t actually do this. And there’s a lot of this: whoo, there’s a vibe, there’s a vibe that it can do it, because it can do a really interesting impression of writing for five or ten pages. But it’s not very good. It’s very banal, and anyone who has sat down to adapt that or quote-unquote “rewrite” that or punch that up would just get rid of it and start over again, thus not creating any net labor savings, but in fact creating more work.
Nima: To discuss this more, we’re now going to be joined by Dr. Lauren M.E. Goodlad, Distinguished Professor of English and Comparative Literature at Rutgers, as well as a faculty affiliate of the Center for Cultural Analysis and the Rutgers Center for Cognitive Science. Dr. Goodlad currently serves as chair of a new interdisciplinary initiative on Critical Artificial Intelligence and as Editor-in-Chief of the multidisciplinary journal Critical AI, published by Duke University Press. Dr. Goodlad will join us in just a moment. Stay with us.
Nima: We are joined now by Dr. Lauren M.E. Goodlad. Dr. Goodlad, thank you so much for joining us today on Citations Needed.
Lauren M.E. Goodlad: Thank you.
Adam: So to start off, we’ve been kind of skeptical of some of the ways in which the predictions and assertions and media narratives are getting ahead of the evidence with respect to what is generally called “AI,” which we’ll put in scare quotes for the purposes of this interview. Many on the sort of left, center-left, liberal side are conceding that inevitably it will come for us all, in a way that may or may not be true, but seems to prematurely concede this premise based on what is a very, very impressive large language model, but isn’t necessarily, as you make clear, intelligence in any kind of meaningful sense. So I want to start off by kind of edifying our listeners. Would you mind giving a recap of generally how the technology works, and what you view as the ceiling of this technology in the short term? Obviously, we don’t want to predict, you know, 10, 20, 30, 100 years from now, we’re not that arrogant, although we are kind of swimming upstream a little bit here. If you could tell us the extent to which you feel like the vibes, the media vibes, are getting ahead of what we’re actually seeing.
Lauren M.E. Goodlad: Okay, so that’s two great questions in one. Can what is being called AI replace high-level writing? And how do these text-generating technologies work? And, like you, I tend to put “AI” in scare quotes, because it’s a lot of different things, and in this case we’re talking about text generation. So let me say that the chance of something called AI writing journalism of any quality, screenplays, or academic research in the near future is zero, and that’s because of how these models work. Again, AI is a baggy term, and right now it’s really functioning as a marketing term that most people in the public associate with science fiction; empirically minded people still refer to it as data-driven machine learning, or perhaps data-driven deep learning, which in the domain of writing involves large language models. Now, these are disembodied statistical models. They do not feel or think or substantively know anything, but they are trained on massive amounts of data, most of which has been scraped from the internet without any compensation or consent or knowledge of the person who generated it. By scraping that data at massive scale, and then training these complex, highly parameterized models on it, you end up with a technology that is really doing predictive analytics, or pattern finding, at a complex level, and whose output looks to us like something a human might have written. So what we’re essentially talking about is probabilistic mimicry. You put an input, which could be a question or a few lines of text, into the interface, and the model predicts a few paragraphs that might plausibly come next. But while it’s plausible, and it might provide some information, it could also be completely wrong.
The models are unreliable because language is huge and seemingly random, what the field calls stochastic, and so they sometimes quote-unquote “hallucinate,” a term the field uses for when they just make stuff up and fit it into a familiar template, a little bit like playing Mad Libs. So if I want a chatbot to write a bio for me, I prompt it, write a bio for Lauren Goodlad, and it will get some things right, but it will also include books that I never wrote and awards that I never won. Or, to take a different example, it will generate a Wikipedia page on the health benefits of feeding porcelain to babies. Another outcome of these being, in effect, statistical models that predict plausible completions is that the outputs are typically kind of boring. In order to get an interesting output, you have to be a really good prompter, which some people are, and put in some original writing of your own. But even then, you’re not going to get something out that is really profound or original or striking or hilarious. You might want to think of the outputs as something like the middle of a bell curve for that particular prompt. That’s what text generators encourage you to do, write in the middle of the curve, and so if you’re a creative person, you’re going to have to work really hard to prompt or tweak or edit to bring the model up to your level. Now, in theory, some might want to work that way, and some people claim that they do, but if you’re not a creative person, these models are not likely to be much help. You could end up with something, well, you would end up with something that is grammatical, but it would be more like filler than a passage of great writing or thoughtful analysis.
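[The “probabilistic mimicry” Goodlad describes can be caricatured in a few lines of Python. This is a deliberately tiny sketch, a bigram word model trained on a toy corpus, nothing like the scale or transformer architecture of systems such as ChatGPT, but it illustrates the same underlying principle: the next word is chosen by sampling from frequency statistics of the training text, with no model of truth or meaning behind it.]

```python
import random
from collections import Counter, defaultdict

# Toy training text. A real large language model is trained on
# hundreds of billions of words; the principle is the same.
corpus = (
    "the model predicts the next word . "
    "the model makes things up . "
    "the output sounds plausible ."
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=6, seed=0):
    """Sample a continuation word by word.

    The result is always grammatical-looking and statistically
    plausible, but nothing guarantees it is true or interesting.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```

[Every word the sketch emits comes straight out of its training distribution, which is why, scaled up enormously, the output of such systems reads as fluent “middle of the bell curve” text: plausible continuations of the prompt, whether or not they happen to be accurate.]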
Adam: Then there’s the compromise position. I think when you push some of the AI boosters, they say, well, okay. For example, in one quote we cite, a guy says it can currently produce a 50-page script based on the prompt of making a detective noir film with Spider-Man and Batman. He gave this quote in Rolling Stone and received no pushback, and I emailed him and said, well, okay, can you show me the script to see if it’s any good? And he said, ‘Well, no, it’s sort of a matter of opinion.’ I’m like, but it’s not really, though, is it? And so the compromise position, when you push back, is they say, ‘You’re right, there will always be a need for writers,’ but what they’re going to say is that they’ll pump out some script and then just pay people to rewrite it. But anyone who’s ever had to rewrite anything knows there’s another word for rewriting, which is called writing, and rewriting bad writing always takes longer than just writing in the first place. Thus, no net labor is saved. And I do feel like there’s this vague vibe, and, like you said, some people can perhaps do a bunch of front-loading on the prompt itself, but that claim, that somehow we’re going to just go along and tweak things the computer does, also strikes me as missing what high-level writing does, which is, like you said, humor, nuance, context, and in the case of, say, screenwriting, fidelity to structure, none of which it can really do. Everyone’s sort of scared to sound like a curmudgeon or kind of a Luddite, right? That’s the ultimate scarlet letter, the dreaded L-word.
If you can, can you talk about that? It seems like we sort of need this thing to be a thing, in a way that strikes me as not really aligned with what it’s actually producing and what it maybe axiomatically can produce. Because it’s a very spooky, impressive kind of, I don’t want to say trick, because maybe that’s too belittling of the underlying engineering that’s gone into it.
Lauren M.E. Goodlad: Yeah, I mean, people have called it a parlor trick or a magic show. There’s certainly a lot of illusion involved, and that illusion comes from people. I’m not sure if you’ve ever heard of the ELIZA effect.
Lauren M.E. Goodlad: So back in 1966, I think, a computer scientist named Joseph Weizenbaum wrote a chat program called ELIZA. Doing this kind of thing was a very different technology then; it was more like writing a flexible script. And he came up with the great conceit that it would take the guise of a Rogerian psychotherapist. You know, you would say, ‘Well, I had an argument with my boyfriend,’ and it would say, ‘Tell me more about your boyfriend,’ that sort of thing. And it was very cleverly done, though it was completely simple, and what he discovered, much to his shock and, you know, chagrin, is that people wanted to talk to it, and they got nervous when he told them, ‘You know, I can see everything that you type in there, it’s my system,’ and he realized that this was a very bad idea. And later on, in the ’90s, Douglas Hofstadter called this the ELIZA effect, which is basically people’s seemingly ingrained tendency to consider anything that is speaking in sensible language to be real and incarnate, and even to make excuses. You know, back in the early days of these programs, people might say, ‘Well, maybe they’re drunk,’ or they would make excuses for how it could be that the language was not quite there. So that’s one thing to think about, which is that the ELIZA effect is something we’ve known about for decades, and these much more sophisticated bots, of course, can exploit it at a much more effective level. But what you’re also pointing to is that the market, badly in need of a new tech bubble now that the metaverse has gone to hell, and a few other things along with it, is, you know, treating AI as a big wedge for squeezing more productivity out of people, and particularly creative people who in the past were considered immune to automation.
In the Writers Guild strike that’s going on now, you have exactly the kind of situation you just described. These writers, for the most part, know that their work can’t be replaced. But the studios are using the models to generate really bad content so that they can pay writers less to supposedly revise a script, which costs less than paying someone to write one, even though, as you say, revising a bad script can be much more work than simply writing one from scratch. So, you know, that’s the dream, or for some the promise, of AI systems: that it’s going to make things frictionless and efficient and cheap, that commercial art will just sort of create itself, that chatbots will replace psychotherapists, that all kinds of cheap solutions are going to be available even if they don’t actually work. They will work well enough, perhaps, that you can at least squeeze more productivity out of someone or demand a lower wage, because you’ve got the cover of technology for renegotiating the conditions of labor. And that’s quite apart from the fact that there are huge questions of copyright infringement. We know for certain that the datasets used to train these models are full of texts and images scraped without any consent, including, in many instances, material that is under copyright. So one possible good outcome would be that the lawsuits now pending on behalf of artists and some coders, for what is in essence data theft, will work out for them.
There’s a letter circulating now for artists that calls the scraping of creative data the greatest art heist ever, and I think this is simply true, whether you’re a laborer or just a person. I mean, think of something like Wikipedia, which was created out of the collective, largely volunteer labor of hundreds of thousands, maybe millions, of people over decades. That labor has made Wikipedia reasonably reliable, and now chatbots are turning it into privatized training data that produces really fuzzy, sometimes inaccurate, sometimes crazily plagiarized chunks of information that are not as good as Wikipedia, and putting it behind a paywall, all to make a tiny group of people richer and more powerful.
Nima: Yeah, you’ve really identified this distinction between two questions being asked, right? Does this quote-unquote “technology” work in any meaningful sense? Versus, I think, a far more interesting and accurate question: what are the politics behind it, and where do rights and regulation and policy and law come into question here? So it’s not just that, you know, studio executives or university administrators are going to try to implement higher-level or broader use of AI; it’s not just the implementation and whether that’s good or bad. Rather, and this kind of gets back to the vibes question: how is this endless hype we see, especially through media, the hype about AI and what it can do in our workplaces, leading to decisions, as you’ve been saying, about cutting labor and other profit-driven strategic choices, just based on these assumptions of the inevitability of this tech? Where are you seeing this in other industries? Where do you see this going even further in the future?
Lauren M.E. Goodlad: Well, honestly, I’m not sure how it will play out in education, in terms of classroom practice, because these technologies have a long history of totally failing to teach people anything. I’m not sure if you remember MOOCs, which were these like, sort of fancy online classes?
Nima: Yeah, of course.
Lauren M.E. Goodlad: Which universities, and the companies plugging them, spent billions on about a decade ago, and very quickly discovered that extremely few people ever finished them, because people are by and large not good autodidacts. Interestingly enough, professors are pretty good autodidacts, but most people are not; they need a community to help them along and to make the work meaningful for them. Or think even more recently about Zoom. Now, you may not know that right at the very beginning of the pandemic, Eric Schmidt, the ex-CEO of Google, thought the pandemic had created an opportunity to completely rebuild New York City’s K-through-12 education around Zoom, and he was on a commission with Andrew Cuomo plugging that as the brave new future of K through 12. And of course, within months, it was clear that most students really hate Zoom, that most parents can’t arrange their work lives around it, they were in effect picking up extra teaching, and that motivating people to learn on Zoom is really, really hard. Obviously Zoom can be good for some things; it just can’t replace the in-person classroom. But, you know, putting aside this sector-by-sector analysis, and the targeted things we can do, like trying to stop data theft, I think something we need to be very aware of is that AI, which we have been encouraged to think of as a technology, is really a political economy. It’s a political economy that favors a very small number of huge companies and hurts almost everybody else, and these same companies got huge and powerful through almost wholly unregulated, unaccountable data surveillance, which they use to sell ads and to sell data. And that concentration of data, as much as compute power itself, is what makes what is now called AI possible.
Adam: I want to talk a bit about what we call the inevitability narrative, which always has a kind of air of bullying to it, sort of, ‘Get on board, this is going to happen,’ again from people who have an incentive to boost it. You know, all these doomsday statements are very much self-serving, right? ‘I want to issue a statement warning of the brilliant intelligence of Adam Johnson, reports Adam Johnson.’ Right? ‘He’s going to be so brilliant, he’s going to destroy the world.’ But a lot of it seems to be based on kind of ontological tricks, and how we define important concepts seems to be very fuzzy. You write that, quote:
The profound anthropomorphisms that characterize today’s AI discourse, conflating predictive analytics with intelligence, and massive datasets with knowledge and experience are primarily the result of marketing hype, technological obscurantism and public ignorance.
You even mentioned the idea of hallucinating, which is a great marketing buzzword for making shit up, right? Everything comes along with its own new words that kind of do the thinking for us. And even the idea of, ‘Oh, it’s not writing, it’s just rewriting, because they’re going to write you a script.’ But again, if it’s gibberish, that’s not rewriting, that’s just writing, because today, five years ago, ten years ago, thirty years ago, a production company could hand a writer, you know, fake Latin and say, rewrite this. I mean, this is an ontological trick. This is like the There’s Something About Mary seven-minute abs bit, you know? You’re just changing the terms, you’re not actually creating anything meaningful.
Lauren M.E. Goodlad: Right, right.
Adam: And I want to talk about this, because, again, it’s not like you’re going to fool the writer. The writer’s going to say, ‘Well, I’m doing the same amount of work, you can’t just redefine my labor.’ It doesn’t really make any sense, right? Maybe in some contractual way you can, but in the long term you can’t really do that. So I want to talk a bit about how language is used to push what we call, it’s a little bit of woo-woo, kind of like the way Deepak Chopra talks about quantum physics. It’s a bastardization of a concept. Can we talk about that language and how those sleights of hand happen so often?
Lauren M.E. Goodlad: Absolutely. So I mean, this is built right into the words “artificial intelligence.” “Artificial,” you know, is somewhat understood to refer to machines and the synthetic practices of the engineers who build something, even though in this instance there is a lot of human labor and human data involved. And with “intelligence,” really, nobody agrees on what intelligence is, and some of the definitions out there are very narrow; for example, John McCarthy defined it as the ability to achieve goals in the world. Which, I guess, on a good day, I like to do. So AI has never been properly defined, and tech entrepreneurs of a certain cast assume that no one actually knows what it is and how it works, and this suits them just fine. And again, the same Eric Schmidt, who just loves to be a talking head on these topics, often, just by accident, says the most revealing things. He was on Meet the Press on Sunday telling the world that the only people who can understand the tech industry work for the tech industry, that no one in government can get it right. Which is unbelievably arrogant and wrongheaded. As you point out, the strange thing is that people like Sam Altman, CEO of OpenAI, and Elon Musk are both AI boosters and doomers, and that wouldn’t make sense until you stop to think about how the narrative of the really super smart, super autonomous AI system that they built, as if they’re kind of like Victor Frankenstein, makes them look really important and makes their products look really powerful, while also encouraging the kind of regulation that would only matter in a science fiction novel. And also, of course, they don’t understand the plot of Frankenstein.
But you know, even among the more scientific people in industry who are not doomers, there has recently been a real doubling down on the idea that these models are approaching something once invoked with some reserve: AGI, standing for artificial general intelligence. For at least some people, like, say, the researchers Judea Pearl or Gary Marcus, that term used to stand for something we do not have, might have someday, but that statistical models could never achieve, because it’s something done by the human brain, which evolved over 500 million years and is not a computer, even though many of my students are told in certain classes that their brains are computers. And on top of the brain not being a computer, being intelligent in this generalizable way, which allows you to transpose something you learn in one context to another and work through what the differences might be, involves a very immersed sense of living in the world with other people and animals and objects and making sense of them on an ongoing basis. It also involves something called meta-reflection, which is absolutely crucial to critical thinking, one goal of which, of course, is to think about the difference between what’s fair and not fair, or true and not true. Meta-reflection is the ability to step back from your own actions, think about the consequences, think about the mistakes you might have made, or what you want to try to do better next time. Now, the models we have now are sophisticated enough that you can prompt them to simulate something like this kind of language, but they won’t actually be doing it, and the quality of the meta-reflection will be really low and derivative and predictable. So I absolutely agree that we are moving into a kind of woo-woo.
We have papers coming out with titles like “Sparks of AGI,” which, to its shame, the New York Times covered. And when these papers, which are not really peer-reviewed papers, make these claims, they completely ignore the fact that no one has access to the data or the architectures or the tremendous amount of human reinforcement that has been done to make these models work better. Which means you don’t actually know why they work the way they work unless you have access to that proprietary information, which makes it truly like a magic trick. But we know through investigative reporting that to make ChatGPT less toxic, you had to have teams of very low-paid workers in Kenya reading violent and disturbing content at industrial scale; they get a couple of seconds per item, and it just goes on and on all day. And we know that OpenAI has been using human labelers to improve results, again through labeling and canned answers. Now, this isn’t AI, much less AGI, right? It’s just using low-paid humans to make something look more impressive and be less toxic. Some people say, you know, it’s not really a technological breakthrough, it’s good engineering.
Nima: Yeah, there’s this whole element of like, what’s actually behind the curtain here and I, you know, I love that you mentioned the Eric Schmidt line about, you know, ‘Only tech people can understand this tech,’ because there’s something else embedded in that, of course, which you kind of hinted at, which is, it means government and regulation and policy have to stay out of this, right? That this can’t be legislated, because there’s no way that anyone outside the tech world is going to really understand this, which obviously fits into, as we’ve been saying, this idea of inevitability, but also carries this idea that, ultimately, this is about the ongoing consolidation and maintenance of power, driven by profit, and that any suggestion that the technology should be designed from its inception in a different way, and then regulated in, you know, any kind of specific ways, goes out the window when you put forth the idea or kind of overall narrative of, ‘Hey, look, tech is just a tool, it’s going to keep evolving, yeah, sure, we just need to keep getting it better and better and then eventually, it’ll replace all of the drudgery and we can all live in this weird tech utopia.’ Dr. Goodlad, can you tell us how maybe that’s not true, and talk about what comes before the release of these new technological bombshells: what needs to be inherent in the design, and then what needs to happen in terms of regulation? I’m thinking about what has been written about in, say, papers like “Stochastic Parrots” by Timnit Gebru and Emily Bender, and others. Like, what is behind this idea of, ‘Hey, look, it’s just a tool, and eventually we’ll get it right, and until then, let’s just enjoy it because it’s, quote unquote, free to use’?
Lauren M.E. Goodlad: Right, which of course, it’s not free at all. I mean, for one thing, its environmental footprint is huge. It’s been estimated that just prompting ChatGPT, not training it, which is a much bigger enterprise, but just asking it, say, 25 questions, uses up 500 milliliters of fresh water to cool the servers, and there are sort of anecdotal stories that people at Microsoft who work on other projects can’t even do their work, because they have to make sure that, you know, all the compute is being used to build enthusiasm for and get people to use chatbots for every damn thing. You know, if these things just sort of came without any of these external costs, like concentration of power, incredible inequality, and embedded racism, and environmental problems, I wouldn’t say that it is never useful for some kinds of jobs to automate writing. There was an interesting study, cited in an article I read recently, that said that the average worker spends five hours a day on email, and I thought about that, because I’m thinking to myself, one explanation for that is that the person writes the same kind of email all day long, and that certainly could be helped with automation, doesn’t necessarily have to be a chatbot, but why not. Another reason for it is that the person doesn’t have very much to do. That’s not a problem that automation can fix. And a third reason for it, which I think is probably true of a lot of people like consultants and lawyers, is that these are actually very substantive emails, and they have to pay attention to them, and they’re giving some kind of advice, some sort of professional expertise is going into them, and there’s not that much that automation can really do in that situation, except possibly embroil you in accidentally writing something that you shouldn’t have. So, I think that we really need to make those distinctions. 
But you know, as far as the inevitability narrative, I agree that you’re putting it exactly right when you say that it is a narrative that’s being pushed, and it’s sort of interesting, the way that different people become complicit. There are media outlets that have wonderful tech journalists who do a fantastic job, and I’ve spoken to quite a few of these terrific people, but then there are outlets that seem to need to have at least a certain number of people just pander or write for clicks, or maybe it’s even a kind of access journalism, you know, maybe they want to be invited to the OpenAI Christmas party. And I find this kind of oscillation between okay journalism and, really, just ‘How could they have published that?’ to be true, oddly enough, of the New York Times, which, you know, I have said many times on social media, I think they need to get a tech editor to rein in some of the bunk and hype, because in some instances, people just may be getting attention for it and not even realize that, you know, they’re doing a very poor job. I don’t want to say that everyone who writes for The Times is doing that.
Adam: I think one of the reasons why this narrative has become difficult to sort of contextualize and make sober is that both of the dominant narratives fail us. There’s the one about it being this great thing that’s going to solve everything, you know, I think Adam D’Angelo said it was going to solve politics, like each politician is going to get an AI, something totally nonsensical, he was roasted on social media for it. But then there are also the people who do the doomer stuff, which appears to be criticism, right?
Lauren M.E. Goodlad: Right.
Adam: All this sort of T2, Terminator 2 doomerism is kind of an ersatz form of criticism. It sort of ostensibly looks like criticism, but as you say, it’s not, it’s just more bullshit marketing hype. The counter-narrative lane is being filled by that which is ostensibly critical, but is actually really just a more sinister form of boosterism.
Lauren M.E. Goodlad: Yeah, I think that’s completely true. I would not underestimate the degree to which everybody likes the idea of a new tech bubble, you know, it’s good for the stock market, and everybody sort of buys in and comes together over it, because it just seems to me as if there’s no end to the amount of leeway. I mean, let’s put it this way: if I as a professor spent one billionth of the amount of money that has been wasted on NFTs, say, on some kind of research project, I would be lampooned, and I would never get another grant, and, you know, I would just be thought of as catastrophically inefficient. But what we see is people just throwing money at all kinds of things. Now, I do think that data-driven machine learning is more useful than, you know, the metaverse.
And I think part of the reason why it can be sort of talked up the way it is, is that it’s being, you know, marketed as something that could disrupt, which, of course, is a word that works sort of like Pavlov’s bell: you hear the word disrupt, the bell rings, and people start to invest. In reality, it would be great to see people using machine learning more often to do research on things like climate change or cancer or creating new drugs. These are all things that machine learning might be very good at, and it is being done by some, you know, pharmaceutical companies, but tech companies per se don’t get involved in boosting that, because those are highly regulated domains, and they have a lot of big companies already in them, and so they can’t just sort of create this narrative, whether it’s wholly boosting, wholly dooming, or some mix of the two, and get this great stock market thing.
Adam: Well, it’s good that you brought that up, because I want to talk about what it actually can do and is doing. I do think, as we mentioned earlier, that unlike the NFT bubble or the metaverse bubble, it actually does things that are valuable from a capitalist perspective in terms of efficiency, or even a non-capitalist one, I mean, you know, if we lived in a socialist country tomorrow, we’d still want to get rid of pointless labor. The question is, who’s bearing the fruits of these efficiency savings, right? I think that’s the issue. It’s not going to be the worker, for the most part.
Lauren M.E. Goodlad: Right, such as they are.
Adam: Such as they are, right. And I want to talk about that, because it can do, like you said, sort of pro forma writing, a lot of useless kind of white-collar labor. I know a lot of software engineers have said it can do kind of low-level coding, although I think there’s some debate about how impactful that will actually be, and to some extent, I think people in those domains are beginning to project a comparable amount of labor savings onto the more rarefied or creative domains. It can do sort of shitty corporate art for, let’s say, a brochure, right? Or instead of paying a designer, you can just plagiarize seven different designers, synthesize it, and call it AI. So talk about the workers it will hurt, if you would, and then to follow up on that, I do want to ask a similar question about impacts and harms. One thing you touched on is that others, Naomi Klein in The Guardian, for instance, have argued that all the Terminator 2 stuff really does distract from the actual harms it will do, in terms of both automating certain jobs and perpetuating racism, surveillance, etcetera. So can you talk about that as well?
Lauren M.E. Goodlad: Sure. So, before I start, I just want to say a word about self-driving cars, which are like a zombie technology, right? It’s an idea that just won’t go away, and again, you know, billions have been thrown at it, and it’s on its eleventh life. There are towns, including San Francisco, and of course it’s San Francisco, that just won’t pull the plug by saying to Waymo, ‘No, your cars are causing massive traffic jams, they’re running over fire hoses.’ Nobody wants to point out that Teslas occasionally kill people. So, this is a perfect example of a technology that nominally is useful, right? But look how hard it is to achieve, and at what cost, and how many settlements are being made, and how many town centers are suffering, when what we all know we should really be doing is having a different kind of transportation entirely: we should be looking at mass transportation that is really green. So there are lost opportunity costs too, and that is part of the harm. As I think I already mentioned, and this may be something that Klein said in The Guardian piece as well, one of the payoffs of the doomer narrative, apart from the fact that people on Fox News then have something to tell their viewers, is that it encourages the kind of regulation that is very toothless, right? So, you know, if it was sort of like, ‘Well, every time that somebody is libeled by a chatbot that you designed, you’re on the hook, and we’re going to have an investigative body,’ as Gary Marcus suggested to the Senate, I believe it should be something like the FDA, and that idea doesn’t seem to, well, it’s early days, maybe somebody will return to it. But the doomer narrative distracts, and also acts as though these other problems are just going to go away, we’re going to mitigate, we’re going to improve, it’s early days, you know, but let’s make sure that we have, you know, staunch regulation to make sure that there’s never a robot takeover of the paper clip factory. 
So I just find it extremely hard to imagine that we’re going to get to some sort of happy set of uses for these things really quickly. The main thing that people should want out of a technology is that the kinds of efficiencies you get from it, such as they are, are broadly distributed, so that the rewards actually generate efficiency and/or profit for the entire society and everybody can get something.
Nima: One of the things we talk about a lot on this show is how so many of these narratives just lead to the kind of ‘house always wins’ endgame, right? That if you fret about the AI dilemma, as a technologist, your solution is more investment in that stuff, to then solve the destruction that you’ve said it’s going to, you know, bring down upon humanity, and so it’s just kind of an endless cycle of just keep investing in the tech.
Adam: Right. Well, look at what they relied on, which they always rely on, during the congressional committee hearings: ‘We have to do it, otherwise big bad China is going to do it,’ and this, of course, just makes everyone turn their brains off. So now there’s this AI arms race, it’s the missile gap 4.0, I don’t know what iteration we’re on now. Basically, Ted Lieu, a supposedly liberal Democrat, said this: we basically have to, like, hand over the reins to Silicon Valley lest the big, bad Chinese beat us in this space race, which is a recipe for despotism.
Nima: Which also plays into narratives about the lethargy and glacial pace of government versus this competing, sunny Silicon Valley narrative of speed and innovation, right? So we can’t trust government, because it’s unresponsive and too slow, and this is all moving so fast, so we have to rely on the tech industry to fight this AI cold war with China. It’s just an endless, endless cycle, and we all know what that says about where power is going to continue to sit and where money is gonna continue to flow.
Lauren M.E. Goodlad: So I have two things that I want to say. One involves chatbots themselves as writing tools, which is what we’ve been talking about a lot, and I do think that some people will adopt them, they’ll just enjoy them, and I think that adopting them for certain writing tasks is not necessarily the worst thing in the world, especially for people who have to write these boring things all the time. But then there’s the sort of bait and switch from Bing in particular, Microsoft’s very abject search engine, which up until this point, I think, most people would get wrong on a multiple-choice exam. What was the name of Microsoft’s search engine?
Nima: Ask Jeeves?
Lauren M.E. Goodlad: Right, exactly. I think that the idea that these models are going to replace search is a problematic idea for a number of reasons. They’re just not accurate enough, and search is a more active tool that forces you to make decisions, evaluate where the source material is coming from, and so forth. So unless and until the so-called hallucination problem is fixed, it’s actually a very bad idea to be using chatbots for search, and I hope that that eventually becomes clear. I also am kind of hoping that there will be public protest among teachers and others about doing things like putting the full model into, say, Word or Google Docs. In actuality, it probably wouldn’t be the full model, because I think that would be extremely expensive, but even if it’s, you know, certainly bigger than the little bit of autocompletion that you get now in your emails and your documents, if it’s substantially more than that, if you can just sort of switch it on because it’s there, that’s something that people could protest.
Adam: Yeah, no, it has tremendous political consequences for five people in Silicon Valley to choose our thoughts for us.
Lauren M.E. Goodlad: Exactly.
Adam: And people seem to be casually accepting this, and it’s literally the words choosing the meaning, not the meaning choosing the words.
Lauren M.E. Goodlad: Yes. And the truth is that these companies are not always good at predicting their own public relations disasters. So I actually think that they all rely upon the goodwill of the public. There are, let’s face it, there are other choices out there that we could be making, you know, so I think public pressure could work.
Nima: Thank you so much for joining us today. We’ve been speaking with Dr. Lauren M.E. Goodlad, Distinguished Professor of English and Comparative Literature at Rutgers, as well as a faculty affiliate of the Center for Cultural Analysis and the Rutgers Center for Cognitive Science. Dr. Goodlad currently serves as chair of a new interdisciplinary initiative on Critical Artificial Intelligence and as Editor-in-Chief of the multidisciplinary journal Critical AI, published by Duke University Press. Dr. Goodlad, thank you, again, for joining us today on Citations Needed.
Lauren M.E. Goodlad: Thank you very much.
Adam: One thing we were concerned about when we were working on the show is that we didn’t want to belittle or diminish the risk. There are jobs that will be replaced, and we don’t want to dismiss that as a problem; labor needs to stand up for those people and make sure that, to the extent there’s a transition, it’s a just one, not just, you know, ‘go learn to code’ or whatever. I think, however, one needs to be sober about what the reality is. I mean, Adam D’Angelo, one of the cofounders of Facebook, who now runs one of these, he’s the CEO of Quora, the website where people ask and answer questions, he had a very long, rather unlettered and incoherent thread about how AI was going to run politics, and everyone was going to have their individual AI, and we were going to elect AIs on platforms, and this guy’s not nobody, right? He’s a cofounder of Facebook, is probably worth a couple hundred million, and I’m thinking, holy shit, people really think this is magic, and it’s not, and there’s no reason to think it will be. You know, we can kind of laugh at it and mock it for being a little bit absurd, but I do think there’s an implication behind the people who run these systems, because the thing they’ll shoot for will be this sort of dystopian magical thing, but what they’ll land on is usually just consolidating power, monopoly power, surveillance power. The harms are much less sexy.
Nima: Yeah, and a massive amount of intellectual theft.
Adam: Yes, right. I think that’s really the annoying part about all this is that the criticism lane has been occupied by what is effectively a faux criticism.
Nima: Well, right. And then you have a credulous media basically just doing marketing for these platforms. Again, we hear, ‘Oh, but this democratizes knowledge,’ or whatever, when it just consistently, like it always has, puts more money into the pockets of the already exorbitantly wealthy and, you know, boosts stock prices for these massive tech companies that then own this stuff. But what they own just draws on other people’s work, right, work that has been put on the internet or published digitally, that you can find on the internet, you can scrape all this data, but it’s people’s work. It’s being taken from the internet, the last thing that was going to democratize and change everything, right? We’ve seen how that’s gone. Not that it didn’t change everything in a certain way.
Adam: Bitcoin was going to democratize, NFTs were going to democratize, blockchain was going to democratize. I think we’re on our seventh iteration of things that are going to democratize such and such.
Nima: I think that if there’s a way to, you know, reframe these conversations, it is not to just credulously accept the AI dominated, automated future, but rather to interrogate what power systems are behind these things and how to build power within community, within your workplace to actually push back on this and not just accept at face value that, you know, we have to accept our AI overlords.
But that will do it for this episode of Citations Needed. Of course, you can follow the show on Twitter @CitationsPod, Facebook Citations Needed, and become a supporter of the show through Patreon.com/CitationsNeededPodcast. This show is 100 percent listener funded so all your support is incredibly appreciated. And as always a very special shout out goes to our critic level supporters through Patreon. I am Nima Shirazi.
Adam: I’m Adam Johnson.
Nima: Our senior producer is Florence Barrau-Adams. Producer is Julianne Tveten. Production assistant is Trendel Lightburn. Newsletter by Marco Cartolano. Transcriptions are by Morgan McAslan. The music is by Grandaddy. We’ll catch you next time.
This Citations Needed episode was released on Wednesday, May 31, 2023.
Transcription by Morgan McAslan.