
Algorithmic Intelligence Has Gotten So Smart, It's Easy To Forget It's Artificial

Linguist Geoff Nunberg considers the word 'algorithm.'



Transcript

DAVID BIANCULLI, HOST:

This is FRESH AIR. Algorithms, that's the headline word for all the decision-making we've handed over to computers, from assigning credit scores to recommending YouTube videos to diagnosing cancer. The more we rely on them, the more of a hash they seem to make of things. Our linguist Geoff Nunberg has these thoughts on a word that has come to stand in for the power technology wields in our lives.

GEOFF NUNBERG, BYLINE: Algorithms were around for a very long time before the public paid them any notice. The word itself is derived from the name of a 9th-century Persian mathematician. And the notion is simple enough. An algorithm's just any step-by-step procedure for accomplishing some task, from making the morning coffee to performing cardiac surgery. Computers use algorithms for pretty much everything they do, adding up a column of figures, resizing a window, saving a file to a disk. But all those things usually just happen the way they're supposed to. We don't have to think about what's going on under the hood.
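To make the step-by-step idea concrete, here is one of the everyday computer tasks Nunberg mentions, adding up a column of figures, written out as a short Python sketch (the figures themselves are invented for illustration):

```python
# "Adding up a column of figures" as an explicit step-by-step procedure.
def column_total(figures):
    total = 0              # step 1: start the running total at zero
    for value in figures:  # step 2: take each figure in turn
        total += value     #         and add it to the running total
    return total           # step 3: report the result

print(column_total([12.50, 3.75, 8.25]))  # 24.5
```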

But algorithms got harder to ignore when they started taking over tasks that used to require human judgment - deciding which criminal defendants get bail, winnowing job applications, prioritizing stories in a news feed. All at once, the media are full of disquieting headlines like, "How To Manage Our Algorithmic Overlords" and "The Algorithmification Of The Human Experience." Ordinary muggles may not know exactly how an algorithm works its magic, and a lot of people use the word just as a tech-inflected abracadabra.

But we're reminded every day how unreliable these algorithms can be. Ads for vitamin supplements show up in our mail feed, while wedding invitations are buried in the junk file. An app sends us off a crowded highway and lands us bumper to bumper in local streets. OK, these are mostly just inconveniences. But they shake our confidence in the algorithms that are doing more important work. How can I trust Facebook's algorithms to get hate speech right when they've got other algorithms telling advertisers that my interests include "The Celebrity Apprentice," beauty pageants and the World Wrestling Entertainment Hall of Fame?

It's hard to resist anthropomorphizing these algorithms. We endow them with insight and intellect or with human frailties, like bad taste and bias. Disney actually personified the algorithm literally in their 2018 animated movie, "Ralph Breaks The Internet," in the form of a character who has the title of head algorithm at a video-sharing site. She's an imperious fashionista who recalls Meryl Streep in "The Devil Wears Prada" as she sits at a desk swiping through cat videos and saying, no, no, yes.

Tech companies tend to foster that anthropomorphic illusion when they tout their algorithms as artificial intelligence, or just AI. To most people, that term evokes the efforts to create self-aware beings capable of reasoning and explaining themselves, like Commander Data of "Star Trek" or HAL in "2001." That was the aim of what computer scientists call good old-fashioned AI. But AI now connotes what's called second-wave AI or narrow AI. That's a very different project focused on machine learning.

The idea is to build systems that can mimic human behavior without having to understand it. You train an algorithm in something like the way psychologists have trained pigeons to distinguish pictures of Charlie Brown from pictures of Lucy. You give it a pile of data, posts that Facebook users have engaged with, comments that human reviewers have classified as toxic or benign, messages tagged as spam or not spam and so on. The algorithm chews over thousands or millions of factors until it can figure out for itself how to tell the categories apart or predict which posts or videos somebody will click on. At that point, you can set it loose in the world.
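Here is a minimal sketch of that train-then-deploy pattern, not any company's actual system: a toy spam classifier built with scikit-learn, where the messages and labels are invented for illustration. The model learns word statistics from human-tagged examples without any concept of what spam is.

```python
# A toy version of "train on labeled examples, then set it loose":
# the model learns which words co-occur with which label, nothing more.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data: messages humans have already tagged.
messages = [
    "win a free prize now", "cheap meds online",
    "lunch tomorrow?", "here are the meeting notes",
]
labels = ["spam", "spam", "not spam", "not spam"]

vectorizer = CountVectorizer()           # turn each message into word counts
X = vectorizer.fit_transform(messages)   # the "pile of data," as numbers

model = MultinomialNB()                  # learn which words predict which label
model.fit(X, labels)

# Once trained, it classifies text it has never seen.
print(model.predict(vectorizer.transform(["free meds prize"])))   # ['spam']
print(model.predict(vectorizer.transform(["notes from lunch"])))  # ['not spam']
```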

These algorithms can be quite adept at specific tasks. Take a very simple system I built with two colleagues some years ago that could sort out texts according to their genre. We trained an algorithm on a set of texts that were tagged as news articles, editorials, fiction and so on. And it masticated their words and punctuation until it was pretty good at telling them apart. For instance, it figured out for itself that when a text contained an exclamation point or question mark, it was more likely to be an editorial than a news story. But it didn't understand the text it was processing or have any concept of the difference between an opinion and a news story - no more than those pigeons know who Charlie Brown and Lucy are.
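A stripped-down analogue of that genre sorter might look like the following sketch; it is not Nunberg's actual system, and the example snippets, labels, and use of scikit-learn are all assumptions for illustration. Keeping punctuation marks as features lets the model discover on its own that exclamation points and question marks lean editorial:

```python
# A miniature genre sorter: punctuation is kept as a feature, and the
# model works out for itself that "!" and "?" signal an editorial.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented snippets standing in for the tagged training texts.
texts = [
    "The council approved the budget on Tuesday.",
    "Officials reported the figures at a press conference.",
    "Is this really the best we can do?",
    "Enough is enough! The policy must change!",
]
genres = ["news", "news", "editorial", "editorial"]

# Token pattern that keeps words *and* the marks "!" and "?" as features.
vec = CountVectorizer(token_pattern=r"[\w']+|[!?]")
X = vec.fit_transform(texts)

clf = LogisticRegression().fit(X, genres)

# Peek at what it learned. coef_[0] scores the alphabetically later
# class ("news"): positive weights lean news, negative lean editorial.
for tok in ["!", "?", "the"]:
    idx = vec.vocabulary_[tok]
    print(tok, round(clf.coef_[0][idx], 2))
```

What the inspection shows is Nunberg's point in prose: the classifier ends up with a number attached to "!", not any notion of what an opinion is.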

The University of Toronto computer scientist Brian Cantwell Smith makes this point very crisply in a forthcoming book called "The Promise Of Artificial Intelligence." However impressive they may be, he says, all existing AI systems do not know what they're talking about. By that he means that the systems have no concept of spam or porn or extremism or even of a game. Those are just elements of the narratives we tell about them.

The algorithms are really triumphs of intelligent artifice, ingenious systems that can mindlessly simulate human judgment. Sometimes they do that all too well when they reproduce the errors in judgment they were trained on. If you train a credit rating algorithm on historical lending data that's infected with racial or gender bias, the algorithm's going to inherit that bias, and it won't be easy to tell. But they can also fail in alien ways that betray an unhuman weirdness. You think of the porn filters that block flesh-colored pictures of pigs and puddings or those notorious image-recognition algorithms that were identifying black faces as gorillas.

So it's natural to be wary of our new algorithmic overlords. They've gotten so good at faking intelligent behavior that it's easy to forget that there's really nobody home.

BIANCULLI: Geoff Nunberg is a linguist at the University of California Berkeley School of Information. Coming up, I review the new Showtime miniseries "The Loudest Voice," about TV executive Roger Ailes and the birth and rise of the Fox News Channel. This is FRESH AIR.

(SOUNDBITE OF FRED KATZ'S "OLD PAINT")

Transcript provided by NPR, Copyright NPR.
