An AI loop that ensnares consumers and critics alike

Gabriel Nicholas

THE WASHINGTON POST – The backlash against tech began in books. In the 2010s, when Google’s motto “Don’t be evil” felt unironic and TED talks lauding the Internet as the great social equaliser still drew credulous audiences, books such as Eli Pariser’s The Filter Bubble and Shoshana Zuboff’s The Age of Surveillance Capitalism were defining terms that would massively shift public opinion. Over the next decade, technology, or more specifically software, would come to be seen less as an innovative convenience and more as a harbinger of societal turmoil.

In 2022, anti-tech is mainstream. Ideas that originated in books about how platforms and their undergirding artificial-intelligence (AI) algorithms threaten society have made their way into Netflix documentaries, op-eds, bipartisan legislation and even the latest Space Jam (with supervillain Al-G Rhythm, played by Don Cheadle). Technology scholars such as Lina Khan and Meredith Whittaker, once considered fringe for their criticisms of technology’s structural harms, have found themselves with prominent appointments in the Biden administration. The world is finally listening to technology critics. So now the question is: What should they write about next?

The easy answer is to ride the wave of tech’s new unpopularity, and that is the option NBC News tech correspondent Jacob Ward chose in writing The Loop: How Technology Is Creating a World Without Choices and How to Fight Back. The book argues that capitalistic AI technologies “prey on our psychological frailties” and threaten to create “a world in which our choices are narrowed, human agency is limited, and our worst unconscious impulses dominate society”. More than telling readers anything new about the dangers of technology, though, The Loop provides evidence that tech criticism itself is calcifying into a mainstream genre.

The titular “loop” that Ward warns his readers about is rooted in the power, predictability and stupidity of our unconscious minds. When humans make decisions, our brains are quick to take shortcuts. In doing so, we make predictable, systematic errors such as miscalculating risk and overtrusting authority.

Technology companies, Ward argues, use algorithms to hijack these unconscious patterns for profit. The “loop” is Ward’s speculation that our ever-increasing dependence on AI products – Spotify for music recommendations, social media algorithms for news, automated weapons for waging war – drives our thoughtlessness, which in turn makes us more dependent on AI, and so on. “In a generation or two,” Ward posits, “we’ll be an entirely different species – distracted, obedient, helpless to resist the technologies we use to make our choices for us, even when they’re the wrong choices”.

Throughout the book, Ward interviews technologists, academics and everyday users to understand how different AI products have become inextricably woven into people’s lives.

In one section, he talks to people addicted to “social casino games,” free-to-play gambling simulators that lull users, often poor and in dark places in their lives, into spending tens of thousands of real dollars on in-game currency. In another, Ward rides along with police officers as they patrol a beat dictated by PredPol (now Geolitica), an infamously racially biased algorithm that predicts where crime will occur based on past incidents. Ward asks, “What happens when budgets and schedules for policing are built on the assumption that a software subscription can replace the need to pay overtime for detectives?” For people and institutions alike, once AI is introduced, it’s hard to remove.

More often than not, though, Ward’s examples don’t fit into his neat conception of AI as a force for evil. Take his discussion of the 2017 incident aboard United Airlines Flight 3411 when security officers physically dragged physician David Dao off an overbooked plane; he had refused to leave after being selected more or less randomly for involuntary removal.

Ward describes Dao not as a victim of violently bad customer service but as an enlightened freedom fighter who stood strong against a dictatorial algorithm. “Everyone – from the flight attendants who insisted Dao change flights despite the consequences for his patients, to the young couple that got off when asked, to the officers summoned to remove Dao – was acting under the direction of a larger, mysterious machine.” Ward could have used the United incident to engage with hard questions about when algorithms should be used to make decisions and how those decisions should be communicated. Instead, he uses it as another opportunity to shake his fist at a computer that ruined everything.

Ward rails with equal vigour against AI that demonstrably improves people’s lives. He interviews Yacqueline, a divorced woman who has trouble managing her hostility toward her ex as they negotiate time with their five-year-old son.

A judge orders that she and her former husband communicate through coParenter, a messaging app for divorced parents that uses AI to detect and mitigate hostile language.

Yacqueline describes the app as a godsend, but Ward sees only danger. “Are Yacqueline and her ex modeling anything for (their son) at all, except the bland collegiality they’re reading line for line from a series of AI-driven prompts?… Is this training? Or are these training wheels that never come off?” Ward ignores any evidence to the contrary, such as the fact that Yacqueline never learned such conflict-management skills from her own, AI-less divorced parents.

The Loop has the right anecdotes to wrestle with the ethical ambiguity of AI, but instead it tries to prove that AI is, almost without exception, bad. Ward seems to have drawn the wrong lessons from the techlash authors who came before him: Rather than following their method of considering issues from all angles and questioning the status quo, he rides the new status quo they helped establish.