Episode 217: A.I. Mysticism as Responsibility-Evasion PR Tactic

Citations Needed | March 26, 2025 | Transcript


[Music]

Intro: This is Citations Needed with Nima Shirazi and Adam Johnson.

Nima Shirazi: Welcome to Citations Needed, a podcast on the media, power, PR, and the history of bullshit. I’m Nima Shirazi.

Adam Johnson: I’m Adam Johnson.

Nima: You can follow the show on Twitter and Bluesky @citationspod, Facebook Citations Needed, and become a supporter of the show through Patreon.com/CitationsNeededPodcast. All your support through Patreon is so incredibly appreciated, as we are 100% listener funded. We have no corporate sponsors, no philanthropic grants, nothing of the like. We are supported solely by the generosity of our amazing listeners, and that is you. So if you have not signed up, we urge you to. It really does help.

Adam: Yes, it does keep the show sustainable and the episodes free for all.

Nima: “Israel built an ‘AI factory’ for war. It unleashed it in Gaza,” laments the Washington Post. “Hospitals Are Reporting More Insurance Denials. Is AI Driving Them?,” reports Newsweek. “AI Raising the Rent? San Francisco Could Be the First City to Ban the Practice,” announces San Francisco’s KQED radio.

Adam: Within the last few years, and particularly the last few months, we’ve heard this refrain time and time again: AI is the reason for an abuse committed by a corporation, military, or other entity. All of a sudden, the argument goes, the adoption of “faulty” or “overly simplified” AI caused a breakdown of normal operations: spikes in health insurance claims denials, the skyrocketing of consumer prices, the deaths of tens of thousands of civilians. If not for AI, it follows, these industries and militaries, in all likelihood, would have implemented fairer policies and better killing protocols.

Nima: Look, we’ll admit it, the narrative does seem compelling at first glance. There are major issues in incorporating AI into corporate and military procedures, but in these cases, the AI or machine learning isn’t really the culprit. The people actually making the decisions are. UnitedHealthcare would deny claims regardless of the tools at its disposal. Landlords would raise rents with or without automated software. The Israeli military would kill civilians no matter what technology was or wasn’t available to do so. So why do we keep hearing that AI is the problem? What’s the point of this frame, and why is it becoming so common as a responsibility-avoidance framework?

Adam: On today’s episode, we’ll dissect the genre of investigative reporting on the dangers of AI, examining how it oftentimes serves as a limited hangout, offering controlled criticism while ultimately shifting responsibility toward faceless technologies and away from the powerful people and moral agents who make decisions.

Nima: Later on the show, we’ll be joined by Steven Renderos, Executive Director of Media Justice, a national racial-justice organization that advances the media and technology rights of people of color. He’s also the creator and co-host with Brandi Collins-Dexter of Bring Receipts, a politics and pop-culture podcast, and is executive producer of Revolutionary Spirits, a four-part audio series on the life and martyrdom of Mexican revolutionary leader Francisco Madero.

[Begin clip]

Steven Renderos: You know, we’ve been sold a myth that tech can inject fairness in an unfair system. It can eliminate racism. It can undo censorship, you know, holy moly, it can end war, or at least the cost of war. We’re sold a sort of, like, false utopia. And shoot, I would love to live in that world if it were actually true, but in practice, it doesn’t work that way.

[End clip]

Adam: Yeah, this episode is a Spiritual Sequel™ to Episode 92: The Responsibility-Erasing Catch-all of ‘Automation’ and Episode 183: AI Hype and the Disciplining of “Creative,” Academic, and Journalistic Labor. In both episodes, we detail how AI and other kinds of vague notions of machine learning, LLMs, automation, have been used as pretext for layoffs, union-busting, and other anti-labor practices. It’s also increasingly, as we’ll discuss in this episode, a way of offsetting moral responsibility. We are not arguing that the use of these technologies is somehow not real, or that there aren’t abuses tied to it. What we’re arguing is that it is becoming increasingly a way for those at the top to do their favorite thing in the world. Because what’s the thing those in power want more than anything? We’ve talked about this on the show a lot, whether it’s electeds or CEOs: what’s the first thing they teach you in Crisis PR 101? You want to have power, but not accountability, which is to say you want the fun parts of being in charge with none of the bad parts of being in charge, which is to say being held accountable or having to own up to things you do that are bad or evil or unpopular. One of the increasingly popular ways that those in power, whether it be militaries, elected leaders, or CEOs, evade that accountability is to blame bad things on AI, and anything with AI in it will get a sexy, credulous media report these days, because that’s just what’s in right now. You can add AI to any story, it doesn’t matter what it is, and it instantly gets taken on its face as a legitimate reason why something happens. And what we’re arguing in this episode, as we did in Episode 92, is that in many ways, like automation, it is pretextual, a way to avoid human responsibility. And the avoidance of human responsibility is the number one, if not top five, animating cause of public relations for those in power: they want to avoid responsibility and they want to offset it onto someone else.

Nima: It’s borne out by the data, and we’ve been seeing this time and again, especially with the rise of AI in the last few years. Now, all sorts of industries, alongside quote-unquote “modernizing” militaries, have been adopting automation technologies for decades, if not longer. For instance, the insurance industry began to incorporate automation software for cost-cutting purposes back in the 1980s, as PCs, personal computers, became more widely available. Meanwhile, the American military’s use of so-called AI dates back to at least the early 1990s for largely logistical purposes, like scheduling transportation of supplies.

But in more recent years, so-called AI, artificial intelligence, has of course gone mainstream as a concept, as have concerns about its use and abuse. Some of these concerns are disingenuous, though. Tech billionaires like Sam Altman of OpenAI and Elon Musk, Congress members like Chris Murphy of Connecticut, and other major public figures have disingenuously cautioned the public again and again about what apocalyptic damage AI could be capable of. Now, as many critics have pointed out, these warnings of ultra-powerful technology often function more as an advertising strategy for AI companies and products than anything else. Even bad news winds up being good news for them.

But there are plenty of legitimate concerns as well about the harms of AI in industries and militaries and how its use will affect the public, and this is what we’re going to get into today. In this climate of both genuine and not-so-genuine consternation about AI, we’ve seen major media outlets conduct quote-unquote “investigative reporting” on the perils of artificial intelligence. While this work often sheds light on an important issue and offers criticism of worthy targets, it’s also kind of a limited hangout, meaning it singles out AI as the leading cause of cruelty, thus deliberately minimizing, or completely erasing, the culpability of the people who are actually in charge.

Adam: The first industry we’ll discuss where this is a popular responsibility-avoidance narrative is insurance. The insurance industry, namely health insurance, is very, very happy to talk about its use of AI when making decisions about who lives and who dies. Stat News, for example, released a series of investigative reports throughout 2023 entitled, quote, “Denied by AI: How Medicare Advantage plans use algorithms to cut off care for seniors in need.” Unquote. And since late 2024, this coverage has broadened significantly, largely for two reasons. One, in recent years, multiple class action lawsuits have been filed against health insurance companies, including Cigna and UnitedHealthcare, alleging that they knowingly use, quote, “faulty algorithms,” unquote, to review and overwhelmingly deny patients’ claims. And two, of course, in December of 2024, UnitedHealthcare CEO Brian Thompson was assassinated, allegedly by Luigi Mangione.

But here are some examples about insurance companies denying care for people that is blamed on this kind of agency-free technology. Newsweek, November of 2024, quote, “Hospitals Are Reporting More Insurance Denials. Is AI Driving Them?” CBS News, same month, “UnitedHealth uses faulty AI to deny elderly patients medically necessary coverage, lawsuit claims.” Quartz magazine, December of 2024, “How UnitedHealthcare and other insurers use AI to deny claims.” Fox 5 New York, also December of 2024, “UnitedHealthcare under fire for using AI to deny Medicare claims.” WBUR, Boston’s NPR station, the same month, “How insurance companies use AI to deny claims.” NPR’s Marketplace, also the same month, “AI has a growing role in processing health insurance claims.” Unquote.

Now, to be clear, it’s useful to know that AI is being used, supposedly, by health insurance companies. The stakes of faulty algorithms denying claims, and thus denying healthcare, are extremely high. But what’s lost in this coverage is the framing itself, which blames AI or says the AI is faulty, the implication being that somehow there was some mistake in the numbers. The executives ultimately make these decisions, right? Garbage in, garbage out. They are deliberately using technology as a way of offsetting their own responsibility, knowing the outputs will lead to denying more claims.

Nima: Right. These technologies are actually beneficial to the companies.

Adam: Well, right? Because the technologies themselves aren’t making the decision. Technology can’t make decisions. It can only do what you put into it. Right? These claim denials are essential to the insurance company’s profitability, so they’re going to happen regardless of whether or not they have software. It’s not as if, in an alternative universe, if they didn’t have this AI software, they would have approved more claims.

Nima: Everyone just starts getting payouts, right. We already knew this was bad.

Adam: There’s just a set number of legitimate claims they need to deny every year to be profitable, and they hit that no matter what. Indeed, high claim-denial rates existed long before companies incorporated so-called AI into their claims-evaluation process. It’s not exactly known when insurance companies began to implement AI for these purposes, but there are some indications. According to a lawsuit against UnitedHealthcare, the corporation started using its automation software, developed by a company it acquired called NaviHealth, in at least November of 2019, and it continues to use it. And according to the aforementioned 2024 Newsweek report, claim denials for Medicare Advantage began to accelerate between 2019 and 2020, likely when companies began to automate claim reviews more aggressively. Newsweek specified that, quote,

In 2019 — the year before Optum acquired tech company NaviHealth — UnitedHealthcare denied 1.4 percent of Medicare Advantage beneficiaries’ claims for admission to skilled nursing facilities, according to the report. In 2022 — the first full year that NaviHealth was managing Medicare Advantage claims for UnitedHealthcare — the denial rate was reportedly 12.6 percent, or nine times higher than before the company acquired NaviHealth.

So what’s more likely going on here? That they were otherwise going to approve more claims, or that they are using AI as a pretext to justify denying claims? Because they can throw their hands up and say, Wow, that’s just what the algorithm told us. Because again, ultimately the decisions are made by humans on the back end. It’s not as if this is just automated. They know what they’re denying. A human is involved in that chain of decision-making, but algorithms and AI and these kinds of automation technologies provide the perfect excuse to deny claims that they very much want to deny anyway, because then when they get sued or they get PR backlash, they can do what they like to do, which is center the technology as the responsible agent.

Nima: Even before 2019 when AI started really rolling out into the healthcare industry, claim denial rates among insurers were already extraordinarily high. A 2019 study from the Kaiser Family Foundation found that in 2017, Healthcare.gov insurance companies denied 19% of claims, nearly one in five, for in-network services. The study also showed that some insurance companies had a denial rate as high as 45%. And a 2017 survey conducted by the Doctor-Patient Rights Project found that US insurance companies denied coverage for nearly 25% of people with chronic conditions or persistent illnesses. In a third of those cases, these patients said their conditions worsened after being rejected by insurance. Beyond this, much reporting that strictly investigates AI-driven claim denials doesn’t really interrogate the very morality of the claim denial itself, that a company gets to and will determine whether someone can have healthcare or not in the first place, or whether someone will go broke trying to pay for it. This, one can argue, is the primary concern, not the AI technologies behind the denials, but the fact that these things can be denied at all. Which tools a company uses to do so are kind of a secondary concern.

Adam: From a PR standpoint, if I’m UnitedHealth, which headline would I rather have? The CBS headline we read before, “UnitedHealth uses faulty AI to deny elderly patients medically necessary coverage, lawsuit claims,” or “United denies elderly patients medical coverage”? One removes the moral decision-making from UnitedHealthcare executives, or at least obscures it. It abstracts it out, uses AI, which again has become this kind of mystical catch-all that assumes there’s no human agency in the process, as the thing to blame. Because, again, the thing being blamed here is AI, when that’s obviously just the pretense, not the people responsible. Ultimately, the CEOs are making these decisions to kill people for profit.

Nima: Now, health insurance is just one of the industries that uses this PR tactic. There are plenty of other examples. Take real estate. Last year, the Justice Department filed a lawsuit against the company RealPage, which produces algorithmic rent-pricing software. The suit alleges that the company enabled price-fixing among large residential landlords. In light of this, in recent months, we’ve been told that landlords are, what else? Quote, “using AI to raise their rents.” Here are some examples from the press. From radio station KQED, July 16, 2024, quote, “AI Raising the Rent? San Francisco Could Be the First City to Ban the Practice.” End quote. In October of 2024, from NBC San Diego, quote, “Could price-fixing AI be a reason for high rents in San Diego? Some think so.” End quote. December 2, 2024, from The Markup, quote, “Landlords Are Using AI To Raise Rents — and Cities Are Starting To Push Back.” End quote. And just earlier this year, January 8, 2025, in Futurism, quote, “Rent Too High? Blame AI, New Report Finds.” End quote.

Adam: Now, clearly, a platform or technology that allows landlords to price-fix and collude is undoubtedly going to contribute to higher housing prices, but this framing isolates and identifies the software, or the quote-unquote “AI,” as the problem, not the landlords themselves. In other words, it’s not that AI is raising rent. Landlords are raising rent and using AI to offset their moral responsibility for doing so. Landlords would raise rent regardless of whether any of this technology existed, because they will raise rent to the extent to which they can efficiently and sufficiently collude with other landlords to do so, and they will go as high as they can until they’re sued or regulated into not doing it. Because the limiting factor is not a lack of technology. The limiting factor is political pressure from tenants’ rights organizations and from lawmakers, and, you know, the asymmetry of information with respect to other landlords, thus the collusion.

Nima: Now the Justice Department itself exceptionalized the software as the sole culprit, writing this in the press release announcing its lawsuit against RealPage. Quote,

The complaint further alleges that in a free market, these landlords would otherwise be competing independently to attract renters based on pricing, discounts, concessions, lease terms, and other dimensions of apartment leasing.

End quote. But landlords historically have not done these things and wouldn’t just all of a sudden decide to do them if they didn’t have RealPage’s algorithms at their disposal. It’s been well documented that many landlords would rather let an apartment or other housing unit sit empty than offer quote-unquote “discounts” or “concessions,” as the Justice Department says, lest their property lose its — what else? — quote-unquote, “value.”

For example, in March of 2021, a year into the COVID-19 pandemic, the Wall Street Journal reported that New York City landlords were taking apartments off the market, while demand and rents were relatively low, betting that the quote, “market would rebound in the spring,” end quote. And amid the catastrophic wreckage of the January 2025 fires in Los Angeles County, real estate agents and brokers told the Los Angeles Times that, quote, “property owners [were] making fewer properties available because of a state law barring new listings from charging more than $10,000 a month during the state of emergency,” end quote.

And there have been numerous documented cases of landlords taking rent-controlled and rent-stabilized units off the market so they cannot be leased. According to an internal state housing agency memo obtained by the publication The City, in New York City, the number of rent-stabilized homes reported vacant on annual apartment registrations increased from fewer than 34,000 to over 61,000 in 2021, again, during the COVID-19 pandemic.

Adam: Another popular genre where we see this is the military, especially in the context of Israel’s destruction of Gaza, the quote-unquote “war” on Gaza, the genocide in Gaza. We’ve seen a lot of articles talking about AI’s role in, and responsibility for, the indiscriminate killing of civilians, articles which have the air of kind of subversive journalism, and some of the information itself is useful, but the way it’s framed and the way it’s contextualized gives a whiff of responsibility avoidance. The notion that AI alone is making these decisions has appeared repeatedly in reporting on the IDF since October 7, 2023. Many media outlets have pushed supposed exposés on the IDF’s use of AI as it kills tens of thousands of people in Gaza.

The New York Times exemplified this with an investigation published on December 26, 2024 headlined, “Israel Loosened Its Rules to Bomb Hamas Fighters, Killing Many More Civilians,” unquote. The Times stated that, after reviewing dozens of military records and interviewing more than 100 soldiers and officials, it had found that, quote, “Israel weakened safeguards meant to protect noncombatants, allowing officers to endanger up to 20 people in each airstrike,” unquote.

This article, if you read it, was kind of a textbook definition of a limited hangout. First, the headline, “Israel Loosened Its Rules to Bomb Hamas Fighters.” So the assumption is that they have any rules at all, which they really don’t. We have dozens of testimonies that they pretty much could shoot and kill whoever they wanted while in Gaza; it didn’t really matter. Then there’s the idea that they’re targeting Hamas fighters, which is not really true, especially when, if you listen to Israeli officials, they pretty much think everybody is Hamas. They’ve made several comments to that effect: they think the UN is Hamas. They think Doctors Without Borders is Hamas. They think the Red Cross and Red Crescent are Hamas. So this idea that they’re targeting Hamas fighters is a total liberal myth. That’s not really true at all.

The Times then goes on to describe this process as being co-opted by this runaway AI technology, writing, quote,

The military struck at a pace that made it harder to confirm it was hitting legitimate targets.

Again, assuming that this is something they care about, which we have no evidence to support. Quote,

It burned through much of a prewar database of vetted targets within days and adopted an unproven system for finding new targets that used artificial intelligence at a vast scale.

Unquote. But contrary to the Times’s assertions, Israel did not loosen its rules. Rules can’t be loosened when there are no rules to begin with. Since the start of the conflict in October of 2023, Israeli officials have been very clear that their intention was to kill as many civilians in Gaza as possible. Deputy Knesset speaker Nissim Vaturi called to, quote, “wipe Gaza off the face of the earth,” unquote, and added, quote, “Gaza must be burned,” unquote. Several members of parliament, including Avigdor Lieberman, former Minister of Defense, have publicly stated that there are, quote, “no innocents,” unquote, or anyone, quote, “uninvolved,” unquote, in Gaza. Former Minister of Defense Yoav Gallant, a few days after October 7, said Israel was, quote, “fighting human animals” when announcing a complete siege of Gaza, saying there would be no water, no fuel, no food, completely cut off. As of mid-January 2025, Israel has officially killed over 60,000 civilians in Gaza. The number is probably somewhere closer to 200,000.

Nima: This idea of the Israeli military loosening rules of engagement, and just where the responsibility lies for that, has been the subject of media reports for decades. In March of 2009, the Israeli daily paper Haaretz had this headline, quote, “IDF Killed Civilians in Gaza Under Loose Rules of Engagement.” In 2015, The Washington Post ran an article saying, quote, “Israeli veterans say permissive rules of engagement fueled Gaza carnage.” Remember, these are many, many years before 2023, 2024. This has been a common refrain. We’re just getting the added offset of responsibility, onto new technology, for something we already know is happening, something that has been happening for decades and decades.

Adam: Well, it also paints the image of this handwringing, deeply worried, mopey Israeli official who kind of doesn’t want to kill civilians, when literally every fucking piece of evidence, from Instagram posts to Telegram to testimonies by soldiers themselves, says they are absolutely 100% wired and conditioned to just fucking kill everybody. And that’s why Israeli officials are heavily sourcing these pieces, because this is now the new narrative, right? People understood that this was going to be something investigated by, you know, other governments, by the ICC. Israeli soldiers can’t travel now to several countries in Europe and South America. There needed to be a manslaughter charge rather than murder. Everybody can see the destruction. You can look at the fucking dark side of the moon that Gaza looks like. So what we needed was to sort of begin to ameliorate our souls. We needed to begin to soothe the liberal conscience into thinking that actually this was kind of a combination of hot-headedness post-October 7, which both the Washington Post and New York Times articles we’re going to discuss refer to, that they kind of loosened rules, that they supposedly had these rules, and then in a moment of panic and fury and revenge, they loosened them, but they still feel really bad about it.

Nima: Anyone who knows anything about what the Israeli military and government has done for 70 or 80 years straight knows that this is bullshit.

Adam: And the Washington Post chimed in, also sourced primarily from Israeli officials, with their own exposé, headlined, quote, “Israel built an AI factory for war. It unleashed it in Gaza,” unquote. The piece reported that, for decades, Israel had been developing a program to, quote, “place advanced AI tools at the center of the IDF intelligence operations.” The Post suggested that automation had essentially led Israel to kill more civilians than it really wanted to, and to adjust standards to allow for this. The paper wrote, quote,

People familiar with the IDF practices, including soldiers who have served in the war, say Israel’s military has significantly expanded the number of acceptable civilian casualties from historic norms. Some argue the shift is enabled by automation, which has made it easier to speedily generate large quantities of targets, including low-level militants who participated in the October 7 attacks.

Unquote. This seems to imply that automation was what spurred Israel’s disregard for Gazan civilians. Left unmentioned in both the Washington Post and the New York Times articles are the high-level government officials, including Benjamin Netanyahu, who made biblical references to genocide, and Yoav Gallant, who we already fucking referenced. Again, this was laid out by Amnesty International in their claims of genocide. It was laid out by Ken Roth in his claims of genocide. It was laid out by Doctors Without Borders in their claim of genocide. It was laid out by the ICJ in their finding of plausible genocide. These officials made repeated genocidal comments in October and November of 2023 that clearly indicated that they did not think anyone in Gaza was innocent, that they were collectively guilty. There are dozens of examples of high-ranking officials making these claims, making these threats, making genocidal statements. None of them are mentioned in either the New York Times or Washington Post article about their supposed concern for civilians.

Nima: Because it also, Adam, connects Israel to this startup nation propaganda line as well. The idea that Israel is such a leader in technology across the world, that it’s also deploying these AI technologies that soon are going to help us all. But first, they test them out on Palestinian civilians. And sometimes, you know, they go too far, but I guess that’s just part of the testing. That’s just part of how they figure this out, so that they continue to be the most technologically advanced and innovative and inventive nation-state in the world. And so this is the price, I guess, that is paid for Israel being the startup nation, again, you know, registered trademark. This is something that they have been using in their own propaganda for years and years. This is the price that must be paid in blood so that we can all have, you know, better iPhones down the line. This really connects AI tech to the genocidal killing machine to further absolve Israeli military and government leaders of doing the things that they said they were going to do and that they have been doing for decades already.

But what we see across media is this absolution granted by artificial intelligence, repeated again and again, while sidelining what Israeli officials literally admitted they were going to do. So this Washington Post piece that we just mentioned also tells a story of multiple Israeli occupation soldiers who were shocked, shocked, aghast, at the military’s wanton use of AI to kill Palestinians. The paper itself noted that one of the IDF’s AI projects, called Lavender, used facial recognition tools to identify whether a person was a member of Hamas, and therefore, of course, a legitimate target. Later, the Washington Post, desperate to portray these genocide-committing soldiers as just a bunch of, you know, good apples in maybe a rotten war, relayed an IDF soldier’s testimony this way, quote,

…some soldiers grew concerned that the military was relying solely on the technology without corroboration that the people were still active members of the terrorist organization.

Concerns about proportionality also took a back seat: Some people captured in the photographs might have been family members, and IDF commanders accepted that those people also would be killed in an attack, the soldier said.

End quote. The Post goes on to write this, quote,

At one point, the soldier’s unit was ordered to use a software program to estimate civilian casualties for a bombing campaign targeting about 50 buildings in northern Gaza. The unit’s analysts were given a simple formula: divide the number of people in a district by the number of people estimated to live there — deriving the former figure by counting the cell phones connecting to a nearby cell tower.

Using a red-yellow-green traffic light, the system would flash green if a building had an occupancy rate of 25 percent or less — a threshold considered sufficient to pass to a Commander to make the call about whether to bomb.

The soldier said he was stunned by what he considered an overly simplified analysis. It took no account of whether a cell phone might be turned off or had run out of power or of children who wouldn’t have a cell phone. Without AI, the military might have called people to see if they were home, the soldier said, a manual effort that would have been more accurate, but taken far longer.

End quote.
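[To make the arithmetic the Post describes concrete, here is a minimal sketch in Python, written for illustration only. The function names, the example numbers, and the traffic-light behavior are assumptions based solely on the quoted passage, not on any knowledge of the actual system.]

```python
# Illustrative sketch of the occupancy estimate described in the quoted
# Washington Post passage. All names and inputs here are hypothetical.

def occupancy_rate(phones_detected: int, estimated_residents: int) -> float:
    """Divide phones connected to a nearby cell tower by the estimated population."""
    if estimated_residents <= 0:
        raise ValueError("estimated_residents must be positive")
    return phones_detected / estimated_residents

def traffic_light(rate: float, green_threshold: float = 0.25) -> str:
    """Flash 'green' at or below the 25 percent threshold the soldier describes."""
    return "green" if rate <= green_threshold else "yellow/red"

# Hypothetical example: 20 phones detected in a district estimated to house 100 people.
rate = occupancy_rate(phones_detected=20, estimated_residents=100)
print(rate, traffic_light(rate))  # 0.2 green
```

[As the soldier quoted above points out, a phone count like this misses phones that are turned off or out of power, and children who carry no phone at all, so a “green” result can badly understate how many people are actually in a building.]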

Adam: If it wasn’t for that dastardly AI, they would have called, I guess, and not bombed? I mean.

Nima: That’s right. Notably, Doctors Without Borders wrote the following in a December 2024 report, quote,

During the one-year period covered by the report — from October 2023 to October 2024 — [Doctors Without Borders] staff alone have endured 41 attacks and violent incidents, including airstrikes, shelling, and violent incursions in health facilities; direct fire on the organization’s shelters and convoys; and arbitrary detention of colleagues by Israeli forces.

End quote.

Adam: The vast, vast majority of them, they didn’t even claim Hamas was anywhere near them. So how does this factor into the Washington Post’s read of events, or the New York Times’s read of events? Was this an algorithm that told them to kill doctors and blow up hospitals? No, they’re just blowing up anything that sustains life. And we know this because they said they were going to do it, because high-level officials said they were going to turn it into a, quote, “tent city,” and that they were fighting, quote-unquote, “human animals.”

So again, there’s a reason why Israeli officials are curating these ‘AI made us do it’ stories. It is now soothing of the liberal conscience, and to some extent, creating a kind of firing squad where there’s five people shooting and no one knows whose gun has the bullet. The market isn’t just American liberals and American liberal Zionists. It is also the, like, 19-year-olds they have pushing these buttons killing people. They want to sort of make them feel better about wasting entire city blocks and three generations of Palestinian families with the pushing of a button. So the AI helps offset that internally, because presumably, afterwards, you know, some of them will probably feel kind of bad about it on some level, and offsetting that responsibility onto the technology creates this firing squad dynamic where no one’s really responsible.

Nima: It’s like you get a PTSD support group, and you can just say, AI made me do it.

Adam: Yeah, and this is the role, because there is a buyer’s market for responsibility-avoidance narratives, and AI fits that role perfectly. It may not have been why it was created originally, or certainly not its sole purpose, right? It still serves other functions. But its main public relations and internal propaganda feature is that it avoids responsibility and makes it look like, Oh, well, we would have killed 30,000 Palestinians, not 60,000, if it wasn’t for that dastardly AI technology. This is why Israeli officials are curating this narrative. These are not, like, leaks. They were not meeting in a parking garage in Foggy Bottom and recovering secret documents. This is a curated narrative. It’s why five different outlets did it at the same time. It’s a way of offsetting their own moral responsibility for deciding to do a genocide, which, again, they explicitly said they were going to do.

Nima: Even more recently, we’ve seen this ‘AI can fix all’ line used as an excuse to just move forward the ideological priors of those who are now leaning on AI to say, Hey, I’m just following the algorithm and seeing where it leads. And that’s what we’re doing. But actually, the ideology is driving everything. We see this most recently with Elon Musk and the Department of Government Efficiency, or quote-unquote, “DOGE.” Hilarious. So we have this article from the New York Times, from February 3, 2025, with the headline, quote, “Musk Allies Discuss Deploying A.I. to Find Budget Savings,” end quote. And then this article was reported on by other outlets as well, including Axios, which ran its own headline. Quote, “Musk’s DOGE crew wants to go all-in on AI,” end quote.

Adam: Yeah. And it’s like these sort of buzzy headlines. And you read The New York Times article, and they’re very vague on specifics, but when you do drill down on some specifics, or try to get answers on specifics, what they mean by employing AI is, first and foremost, using automated software which isn’t even really AI, or even sort of LLM or anything like that. It’s just to find DEI words; they kind of just mass brute-force look for certain keywords. So its primary function is basically a machine for scaling racism. And the second thing is, it’s just, again, a pretext to fire a bunch of humans he doesn’t like, whom Musk and his rightwing ideological allies perceive as being lazy or superfluous, minoritized labor, gendered labor, and using AI as this kind of hand-wave-y magical wand to say, Oh, we’re going to replace these functions, don’t worry, with AI. But that’s not really possible. I mean, that’s really the key point here. As Kyle Chayka at the New Yorker notes in his article, “Elon Musk’s A.I.-Fuelled War on Human Agency,” from February 12, 2025, and it’s kind of buried, but it’s really a salient point here. He notes that, quote, “One of the alarming aspects of this approach is that A.I., in its current form, is simply not effective enough to replace human knowledge or reasoning.” Unquote.

Clearly, that’s kind of the key point, which is, like, AI can’t actually do the things it’s claimed to do, and on some level, of course, Musk knows that. The point is just to fire people and to throw out this AI buzzword to make it look like, oh, because, again, people have this perception, like at the IRS or the DMV or whatever kind of government office people perceive as being inefficient and evil–

Nima: They’re like, Oh, a computer could just do that. Why do we have? Yeah.

Adam: But they can’t. Because if they could, they would have already implemented it. And again, to some extent, LLM-type technologies can reduce overall net labor in some senses. They can sort of automate routine emails, etc. They can do kind of search functions. But they’re not replacing humans, because a) that technology just isn’t the magical device people keep making it out to be, as we’ve noted on the show many times. But b) ultimately, these are decisions that have to be made by humans because they are value-based. They are abstract. They require critical thinking. They require data that isn’t limited to the year 2023, and other kinds of information siloed off because of various IP conflicts. The whole thing is obviously absurd, but again, people like Musk can just throw out AI, and it’s repeated by the media in this kind of uncritical way, because the goal is to use this magical, buzzy word to launder the explicitly racist ideology, or in this case, the desire to just fire people because they want total control over the so-called bureaucracy. And AI provides this kind of catch-all buzzword to obscure that this is, in this instance, simply a rightwing takeover of government. It kind of gives the impression that the actual underlying labor is still being done, and it’s just not.

Nima: And to the point about the media reporting on this uncritically and just taking at face value whatever Musk and his allies say they’re using AI for, without the critical thinking of, like, what is the ideology behind this? What are the actual effects? Here you have the subheadline from the New York Times piece that I referenced earlier, and it’s this, quote, “A top official at the General Services Administration said artificial intelligence could be used to identify waste and redundancies in federal contracts,” end quote. And early in the article itself, again, this is from February 3, 2025, in the New York Times, there’s this, quote, “Musk allies who have taken on roles inside government agencies are evaluating how to harness A.I. to identify budget cuts and detect waste and abuse, according to people familiar with internal conversations, who spoke on the condition of anonymity out of fear of retaliation,” end quote. There’s a lot packed into that one sentence.

Adam: Right. So they’re finding waste and all these kind of post-ideological efficiencies. And again, it’s not really what they’re doing, as we’ve established many times on the show, in previous News Briefs and elsewhere, they just use the fog of AI to carry out an ideological agenda, and that ideological agenda is just not mentioned by the New York Times, because God forbid that’s the thing that’s centered. It’s all kind of post-ideological technobabble.

Nima: We’ll now be joined by Steven Renderos, Executive Director of Media Justice, a national racial-justice organization that advances the media and technology rights of people of color. He is the creator and co-host with Brandi Collins-Dexter of Bring Receipts, a politics and pop-culture podcast, and executive producer of Revolutionary Spirits, a four-part audio series on the life and martyrdom of Mexican revolutionary leader Francisco Madero. Steven will join us in just a moment. Stay with us.

[Music]

Nima: We are joined now by Steven Renderos. Steven, thank you so much for joining us again on Citations Needed. It’s great to have you back. We haven’t had you on for, I think, five years.

Steven Renderos: It’s all good. I spent those five years becoming an executive director. So here I am.

Adam: All right. I want to begin by discussing our kind of broader theme, and then we’ll drill down into some of the related points, which is this trend we see of those in power, when doing kind of Objectively Evil Stuff, capital O, capital E, capital S, Objectively Evil Stuff, appealing to mystical technology as this catch-all, responsibility-avoidance PR thingamajigger, because it’s not them, right? There was some algorithm or some AI or some formula that told them that they had to close the Walgreens or bomb an apartment building in Gaza or raise rents, and it’s not really their fault. There’s some sort of output that came down through the machine, and they’re simply agents of that output. So I want to begin by talking about the rise of this: people in power as middle management for technology. You see this a lot, especially with racist facial-recognition software. A lot of police tech does this. Predictive policing. Well, it’s not us. You know, we didn’t want to go to 90% Black neighborhoods, but you know, that’s just where the algorithm sent us. Talk about how technology can double as a moral laundromat.


Steven Renderos: The thing that it makes me think about is, a couple years ago, Meta settled with the Department of Justice over a lawsuit where they had been violating the Fair Housing Act by encouraging, enabling, and causing housing discrimination through its advertising platform on Facebook. What was interesting is, specifically, what advertisers could do is set their ads to avoid being seen by certain people based on certain characteristics, like race, religion, gender. I mean, take any protected class in the Civil Rights Act, like, you could target an ad to not reach those folks. So when I saw those headlines come out and the stories emerge, part of what I thought was missing from that story is just the role of the advertisers themselves. I’d spent years actually organizing around affordable housing. That was my first job out of college, working at a tenants’ union, and the things I would see for-profit developers do, the tactics they would try out, were just horrendous. It didn’t matter that there was a Fair Housing Act. They found different ways to discriminate. But I think the technology today, what it does, is it creates an opportunity for bad actors to do the bad things that they want to do.

And I think the reason they’re able to do that is because I think technology delivers on two false promises. And the first is techno-solutionism, the idea that technology can somehow fix society’s problems. You know, we’ve been sold a myth that tech can inject fairness in an unfair system. It can eliminate racism. It can undo censorship. You know, holy moly, it can end war, or at least the cost of war. We’re sold a sort of false utopia. And shoot, I would love to live in that world, if it were actually true. But in practice, it doesn’t work that way. And I think the other false promise of technology is this idea of modernity.

And you see this really pronounced in places where it allows authoritarians to be authoritarian without necessarily needing to face the costs of acting in those ways. You know, my family’s from El Salvador, and so I think about this a lot with our current populist dictator, who calls himself the coolest dictator in the world, Nayib Bukele. He’s very strategically utilized technology, particularly crypto, as a way to demonstrate to the populace, like, We are a modern country, while in effect, also inoculating himself and creating space to do really terrible things. Same thing happens in Saudi Arabia, you know, inviting in tech money that helps to kind of obscure and hide human rights abuses. And I think the reason for that is because people conflate technology with things being better, with the idea that we’re in a better place now, that because of it, we are modern. And I think that’s, like, for the rich and for the people in power, what technology does. It just enables the capacity to distract while still being able to do many of the horrible things they were doing before.

Nima: Yeah, the Nazis made many technological advancements. That may not be the same thing as making the world better. And I think we can see that with the panoply of tech billionaires behind Trump during the inauguration. I mean, we’re just seeing how tech is kind of a stand-in for achievement, for advancement, rather than actually kind of reckoning with the toll that it actually takes. And this actually gets to my question here, Steven, which is that what we want to do on this episode is kind of hold two ideas at the same time. One, that offsetting human responsibility onto tech is itself a grift, as we’ve been talking about, and what the purposes of that responsibility laundering are. But also, we don’t want to pretend that the threats that certain technologies pose aren’t real in and of themselves. They very much are. And as you well know, tech most often reflects the biases, the ideological preferences, the racism, the dehumanization of its designers and of the data that it is working off of, right? So garbage in, garbage out. This has been the focus of activist pushback for years, and certainly since the latest LLM, or large language model, and AI craze began in earnest just a couple years ago, really in late 2022.

So to obscure this and the broader issue of wide-scale plagiarism that these large language models deploy and are built off of, Big Tech-funded groups have created an alternative universe of, I guess, what we’re calling, like, fake critique, right? Like, focused on apocalyptic claims of superintelligence and what that portends for our world. But even the threat and kind of hype is itself a distraction, and of course it does the work of marketing the same technology anyway, right? It does the same thing. Whether it’s a threat or this kind of boosterism, it’s still constantly talking about tech, which is part of the marketing puffery, really playing on, you know, dystopian pop culture references all the time, from 2001 and War Games to The Matrix and I, Robot. So, Steven, how does this, like, T-2: Judgment Day schlock, how does that serve to kind of crowd out genuine everyday threats being addressed by movements and communities? How does it sort of crowd out those very real and harmful issues in favor of both kind of tech hucksterism and boosterism and hype?

Steven Renderos: Well, first, let me just say thank you for using a cultural reference that’s near and dear to my heart, Terminator 2. I mean, it’s actually a really apt reference here, because, you know, there’s that scene, the first scene, where Schwarzenegger’s character runs into Sarah Connor in Terminator 2, and he, like, reaches out and is like, Come with me if you want to live. And that’s essentially what the terminators of our time, Sam Altman, you know, at OpenAI, Elon Musk and Mark Zuckerberg, that’s what they’re telling us. You know, We hold the keys to humanity’s salvation. Now, we also created the problem, but we are the ones that can fix it.

And for me, I’ve been thinking about artificial intelligence, particularly since the release of ChatGPT, and some of the public discourse since then, including the fake criticism that you’re referencing. I’ve been thinking about it as a moral panic, which is a concept popularized by cultural theorist Stuart Hall, where you take an internal problem of a society and you externalize it onto something else. And we can talk some more about, like, how that was playing out in Stuart Hall’s time, in post-colonial Britain in the 1970s, but essentially, he was focusing a lot on race, and the problems of a society that was dealing with a stagnated economy, a struggling working class, and other societal challenges got externalized to be about the Black people that are here. They’re the reason that all these things are bad. Which, you know, those of us paying attention to what’s happening today, and how immigrants are being scapegoated, I think can recognize that playbook playing out as well.

But I relate this to artificial intelligence, because as a humanity, we are facing the very real possibility of extinction as the planet kind of descends into climate disaster, and our extinction is a very real problem, like, it’s not a foregone conclusion that the next generation and the generation after that, will have an actual planet to inherit. And so it’s not that killer robots may come in the near future and kill us. It’s that we are careening towards that climate disaster. And the hype and hysteria around artificial intelligence, what it hides, is that AI itself is actually precipitating that fall. AI and the technological infrastructure that powers it is built on extraction. You know, you need minerals like lithium and copper, you need water to cool down computers, you need electricity to power massive data centers. And the demands on all of those things are compounding in this very moment at massive scales. It’s why you had the parade of tech bros at the inauguration, and shortly after that, you had an announcement of a $500 billion data center that’s going to get built in Abilene, Texas, the size of, like, 800 football fields, is because they need this massive amount of infrastructure in order to power their LLM dreams that they have.

Adam: And to be clear, this is a real-world thing that’s happening now. The Washington Post did a longform piece back in December of 2024 about how many members of the Navajo Nation are living in the dark because of nearby data centers in Arizona fueled by so-called AI. So this is something that’s happening now. It’s not a far-off prediction; we are seeing inequities in power manifest, especially in places like Arizona, where they just gave AI a blank check to do whatever they want.

Steven Renderos: Absolutely. I mean, and I think that’s the thing. I think you’re starting to see the consequences, as always, hitting the most vulnerable, the most marginalized populations first, but these data centers are being built all over the country. First of all, they’re not in urban centers. They tend to be mostly in rural places, because that’s where there’s availability of land. But you are going to get to a place where the demands of data centers, of water and power, are going to put local jurisdictions at a choice point. Do we keep feeding this data center that we’re told is so critical to our local economic base and the economic base of this country? Or do we send that water and electricity to the homes of people? You could see this playing out, for example, in Eastern Oregon, where a bunch of data centers are being built. It also happens to be the same power grid that powers Portland. You could see a very real possibility in the near future where people are going without power in order to power this data center nearby.

Adam: This reminds me so much of, you know how Alex Jones or rightwing conspiracies are always about projecting onto white populations dystopian futures that already exist for poor and non-white populations? Like how they talk about FEMA camps, and then it’s like, Well, we have those at the border. They’re for migrants. And in a similar way, we are projecting this sort of AI dystopia of killer robots destroying our planet, when AI is itself fueling a very real thing that’s killing our planet, which is climate change, which is actually a thing that exists. It’s not very sexy. Doesn’t make a good sci-fi film, Roland Emmerich’s film The Day After Tomorrow notwithstanding.

So then we have this kind of shadow boxing, and as we discussed at the top of the show, a lot of these sort of Terminator 2-type organizations are funded by the AI industry themselves, because they obviously double as marketing hype. It’s so powerful that it’s going to, you know, it reminds me of those weight-loss infomercials where they say, This drug is not for people who just want to lose five to 10 pounds. This is for people who need to lose 50 pounds or more. Like, it’s so powerful. And that’s the same kind of snake-oil salesman, carnival barker stuff you’re getting from this discourse. And that’s been pushed back against by a lot of groups. But ultimately, what we get in the mainstream, so-called AI discourse, is this kind of facile, Oh, you know, it’s going to be taking all of our jobs in five years. It’s like, Well, it’s going to be boiling the Earth in five years and vomiting out slop. But sure.

I want to talk a little bit about facial-recognition software, something near and dear to your heart, because that is being pumped into the slop in many ways, creating a lot of these so-called AI systems and the kind of broader regime of eroding privacy. Specifically, I want you to talk about the lawsuit you and others brought against Clearview AI, which is a facial-recognition company, and how this kind of emerging technological tyranny can be pushed back against. Because, again, I know it may seem inevitable, especially with the bro-tocracy taking over in Washington, but there are ways in which people are pushing back.

Steven Renderos: Absolutely. Yeah. First off, I want to shout out Just Futures Law. They’re an organization that does litigation and other legal work focused on this kind of intersection of technology, immigration, and criminalization, and they were really instrumental in trying to test out a lawsuit against Clearview AI. For folks that are not as familiar, Clearview AI is the world’s largest facial-recognition company. They’ve gotten that claim to fame for a couple of reasons. They have the software, which they sell to law enforcement, to governments, to all sorts of different players, that can be used to run facial recognition searches, but they also have a massive, massive database with billions of photos. Because to make facial recognition software work, you need to pair the input, the photo that you’re using, with a data set that you’re running that photo against.

And so what they’ve been able to do is create a pretty massive database of photographs that they also sell to law enforcement agencies and a bunch of government players, mostly. Now, where they got that database from was by scraping the internet of our photographs. It’s very likely, I think, in some estimates, they say that probably up to, like, two-thirds of people who are on social media may have a photograph of theirs scraped up into Clearview AI’s database, and it’s billions and billions of photos that they illegally, essentially, extracted from the web by going on platforms like Instagram and Facebook and going to Flickr accounts and just extracting photos.

So they built up this massive database. In California, where the lawsuit was filed, it was done on behalf of organizers and activists like myself and other folks, particularly in the immigrant rights sector there in the Bay Area and throughout California, who were essentially taking a look at state privacy law in California, because we felt like our privacy rights had been violated by this company, which stole our data in order to build a business model for itself. And so the lawsuit was really attempting to see, could we win some concessions? Could we pressure Clearview AI to essentially get out of the business of facial recognition, even if just for folks in California? And this kind of follows the model of other litigation that has taken place in other states, where you’re leveraging a state law on the books to go after a tech company. I’m an organizer. I’m not a lawyer. And I think for me, it was interesting to try a different tool in the toolbox to go after these companies, besides, like, corporate accountability.

Nima: Yeah. The Al Capone strategy.

Steven Renderos: [Laughs] Right.

Nima: Tax evasion, not racketeering.

Steven Renderos: We haven’t tried that yet. I mean, that might be next. The lawsuit, you know, we got pretty far. I think the obstacle we ran up against in the lawsuit was that there were other lawsuits out there. There was a similar one in California as well, from a private attorney. And once you get into the courts, it sort of creates this situation where they had settled very early on with Clearview AI, so they sort of set a ceiling for what we could potentially gain from a lawsuit. But there’s other versions of this that have popped up in other parts of the country targeting Clearview AI as well. So I think it’s a really interesting intervention, and in general, I think a really interesting tool in the toolbox to have, especially in this moment where the federal government is not really a site of struggle, because the federal government is the tech industry at this point.

But I think, to the broader question on how to fight back, for me, it’s recognizing that technology is a terrain of struggle. It’s not a secondary fight anymore. It used to be. For many years, with many of the folks that I organized with on campaigns, to try to get net neutrality through at the federal level, to fight the way Facebook was moderating its content, or to go after police departments that were using facial recognition, for the most part, those kinds of struggles were secondary struggles for the communities that we were organizing with. They’re fighting for abolition, or they’re fighting to stop the construction of a jail, and so tech always felt like the secondary thing. But we’re living in a moment right now where I just don’t think that that’s true anymore. You can see this in the big, massive deportation operation that’s taking place in this country right now. Tech is at the very core of it. The militarization that’s happening at the border: tech continues to be at the very core of it. The mainstreaming of fascism: tech is at the very core of that. So we have to recognize this as terrain that we have to struggle over.

And I want to pick up on a point that you were raising, which is the inevitability of these things. Nothing is inevitable about the technologies that are in front of us right now. And the industry, as big and as massive as it is, is not immune from accountability. These companies do have weaknesses. You look at their business model: it relies on data to keep it going, it relies on natural resources, it relies on labor, or the elimination of labor, and those are all sites of struggle, in my view, places where we can be mucking up the business model, making it very difficult for them to continue expanding their profits. The lawsuit, for me, what it taught me is just, again, finding more tools in the toolbox to push back against this tech bro-ligarchy that we see emerging. State-level policy, the California privacy law, there’s a biometric information law in Illinois that’s very interesting. It’s been used in interesting ways to go after companies. So litigation is a very interesting tool to have as part of the toolbox. Then the other thing that I’ve been paying a lot more attention to lately is just that there are different layers of government that we can be contending with, engaging with. You know, your water boards, your public utility commissions, your counties, these other places that tech needs in order to advance its agenda and where we have influence.

Nima: Yeah, Steven, I mean, I think you really summarized everything incredibly well there. Before we let you go, though, can you tell our listeners about some current campaigns, some current work that Media Justice is involved in? Something that really intrigues me about your work is the idea of who technology is built for. We hear about the kind of democratizing nature of tech: it’s just a tool that can be wielded for good or evil, depending on, you know, the user. And so yes, we have movements to fight for our rights, to kind of curb the worst intentions or the worst sensibilities, but ultimately it’s just kind of neutral. And I think that so much of your work is focused on identifying who this technology, and who our media as well, is built to serve. And surprise, surprise, it’s oftentimes not us. If you would, please tell us about the work that Media Justice is currently up to in that regard.

Steven Renderos: Yeah, absolutely. I think, you know, we see this moment right now, especially as the tech industry becomes the state, as one of really needing to build a much bigger field that’s capable of fighting and taking on the tyranny that we’re seeing. We’ll be kicking off, actually, to that end, a political education series in a couple weeks. It’s aptly named WTF, because there’s just a lot of, like, what the fuck? We’re going to do a series, every two weeks, kind of paralleling Trump’s 100 days, so that we can really make sense of what’s happening in this moment. What are the moves that the tech industry is making? Who are the key players inside of that industry? What are they pushing for? With an eye towards helping to absorb people in this moment who are waking up to the reality that Elon Musk has taken over the federal government. And so that’s one of the things that I’m excited to do. I think it’s a way for us to start connecting with new organizations, with new groups. I’m also going to be sending our organizers, our movement-building team, out into communities later on this year. We’re doing a listening tour, just connecting with groups on the ground, particularly folks in climate and labor, folks organizing workers at different layers of the kind of economic base of society, because we are going to need just a much bigger set of folks fighting these companies. And I’m excited for those two things. I would point people to check out Media Justice on the many different social medias that are out there nowadays, including Bluesky, where we’ll be promoting the political education series that folks can check out. I think it’ll be a great space to just get our learning on in this moment where we might have limited means to actually win some stuff.

Nima: Well, I think that’s a great place to leave it. Everyone, yes, please check out Media Justice. We’ve been speaking with Steven Renderos, Executive Director of Media Justice, a national racial-justice organization that advances the media and technology rights of people of color. He also spins records as DJ Run, is the creator and co-host, with the great Brandi Collins-Dexter, of Bring Receipts, a politics and pop-culture podcast, and is executive producer of Revolutionary Spirits, a four-part audio series on the life and martyrdom of Mexican revolutionary leader Francisco Madero. Steven, as always, thank you so much for joining us today on Citations Needed.

Steven Renderos: Thanks so much for having me, Nima. It’s always a pleasure.

[Music]

Adam: Yeah, I think the fundamental ontological problem is that people think AI makes decisions, or that it’s actually intelligent, such as it is, right? Quote-unquote “AI,” which can be algorithms, LLMs, it kind of fits into a broad category. And it’s not intelligent. It’s simply guessing at what it thinks the thing you want is, based on a formula, based on what’s come before. And it really just hit the market at the perfect time, when people were doing really evil shit, again, raising rents, denying claims coverage, blowing up toddlers in Gaza. And it really became this kind of mystical thing that you could use to offset responsibility at just the right time. Because in some way, I think even those who think they’re being critical of power, like reporters who think they’re being critical or unveiling some sinister new trend in society, they don’t want the people in charge to be bad in some fundamental way. So it kind of gets everybody off the hook. It kind of soothes everyone’s conscience.

Nima: Much in the way predictive policing does as well. You know, right at a time when maybe there’s a Black Lives Matter movement, or there’s a racial-reckoning uprising, there’s an appeal to the data and the evidence-based algorithms and AI as being part of why cops are, you know, busting down doors and shooting people while they’re asleep. This continues to kind of absolve these powerful structures of their own agency, their own responsibility, because of how powerful technology now is. And so technology writ large, quote-unquote “technology,” becomes the villain, if there even is a villain; it may just be kind of like an, Aw, shucks, we have to do this better. Or, you know, this is maybe a problem, time to investigate further. But it offsets responsibility. So it’s not the cops breaking down doors, it’s the algorithm that told them that that was the right address to go to. And so, hey, what are they supposed to do? They’re just following orders. So it’s just another way of saying, Hey, we’re just following orders. This isn’t our fault.

Adam: No, it’s perfect. It’s the perfect responsibility-avoidance thingamajig. Again, if you just put AI in a story, it sounds sexy, and people don’t really question, you know, is this serving a dual purpose? Is it there to make it look like the machines kind of took over and did all these bad things?

Nima: Right? I mean, remember, in Terminator, Adam, the thing that they have to do is they have to go back and address the people who made the decisions to take the future of the planet down the road of having machines in control. It’s not like, Oh, we need to unplug the server. It’s like, We need to kill the people, right? [Laughs] And there’s a reason for that. There’s a reason for that. There’s a reason why the people behind the decisions are actually the ones with the agency, and with the moral compass to appeal to here, not the machines themselves. And I think that that’s what we’re seeing in these media reports again and again, very self-serving absolution, saying that technology did it. It’s not really us. Therefore, hey, let me just go about my day.

But that will do it for this episode of Citations Needed. Thank you all for listening. Of course, you can follow the show on Twitter and Bluesky @citationspod, Facebook Citations Needed, and become a supporter of the show through Patreon.com/CitationsNeededPodcast. We are 100% listener-funded, so all your support is so incredibly appreciated. I am Nima Shirazi.

Adam: I’m Adam Johnson.

Nima: Citations Needed’s senior producer is Florence Barrau-Adams. Producer is Julianne Tveten. Production assistant is Trendel Lightburn. The newsletter is by Marco Cartolano. The music is by Grandaddy. Thanks again for listening, everyone. We’ll catch you next time.

[Music]

This Citations Needed episode was released on Wednesday, March 26, 2025.
