
The Age Of Automation Is Now: Here's How To 'Futureproof' Yourself

Are robots coming for your job? New York Times tech columnist Kevin Roose says companies and governments are increasingly using automation and artificial intelligence to cut costs, transform workplaces and eliminate jobs — and more changes are coming.


Transcript

DAVE DAVIES, HOST:

This is FRESH AIR. I'm Dave Davies, in today for Terry Gross.

When you go to the grocery store these days, do you opt for that self-checkout lane? It's convenient, saves you some time, probably, but you're also doing work store employees used to do and probably cutting their hours. Our guest, New York Times tech columnist Kevin Roose, says those self-service lanes are among the countless ways that companies and governments are employing automation and artificial intelligence to cut costs, transform workplaces, eliminate jobs and influence our buying habits and lifestyle choices. In a new book, Roose examines the approaches and motivations of those pushing artificial intelligence, and he offers some ideas for protecting yourself from being automated out of a job and some thoughts on how we as a society should responsibly manage technological innovation.

Kevin Roose also writes frequently about social media disinformation and cybersecurity. He's the author of two previous books, "The Unlikely Disciple," about his semester at Liberty University, the evangelical school founded by Jerry Falwell, and "Young Money," which looked at the lives of eight Wall Street bankers after the 2008 financial crisis. He's also the host of "Rabbit Hole," an eight-part podcast series about how the Internet affects our lives. He joins us from his home in Oakland to talk about his new book, "Futureproof: 9 Rules For Humans In The Age Of Automation."

Well, Kevin Roose, welcome to FRESH AIR.

KEVIN ROOSE: Such a pleasure to be here.

DAVIES: You write about automation and using artificial intelligence and the extent to which it eliminates jobs. And you say that there's a lot people misunderstand about how this works. It's not that, you know, somebody is literally replaced one day by a robot. What's the more typical way that automation cuts jobs?

ROOSE: Well, there are a few ways. I mean, that kind of one-for-one substitution still does happen sometimes. There still are workers - you know, like, cleaning people at big-box retailers, for example - who are sort of being replaced by robots. But much of the work that can be automated that way has already been automated in factories and manufacturing facilities. So there are some other ways that automation can replace jobs, too. And one of them is by allowing companies - new companies, that is - to do the same amount of work with many fewer people.

So one example of this is in China, one of the biggest lenders is a company called MYbank, and their signature loan product is referred to as 310 because it takes three minutes to apply online for a loan, it takes one second for an algorithm to approve it, and zero humans are involved in the process. And so that firm has been able to issue billions and billions of dollars in loans with very few employees relative to their competitors. So you might have a bank that has 10,000 loan officers, but if a company like MYbank can outperform it with a couple hundred employees, the result is a net loss of jobs as the automated firm takes over. So that's a more common way.

But there are other ways, too, including giving us new behaviors that replace old behaviors. So one example of that would be what's been happening with photos. Kodak used to be a major American company with many, many thousands of employees. But we don't hear much about them anymore because now we take photos on our phones, and we distribute them through Instagram and Facebook and Twitter. And so the distribution has changed. The behavior has changed. And the companies that are sort of doing that new behavior - Instagram and Twitter - those have fewer employees than Kodak used to have because they're much more automated.
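To make the MYbank "310" example concrete, here is a minimal sketch of what a zero-human loan decision can look like. The features, weights, and approval cutoff below are invented for illustration; this is not MYbank's actual model, only the general shape of an automated underwriting step.

```python
# Illustrative sketch of a fully automated, "310"-style loan decision.
# All features, weights and the cutoff are invented for illustration;
# this is not MYbank's actual model, just the general shape of one.

from dataclasses import dataclass

@dataclass
class LoanApplication:
    monthly_revenue: float   # applicant's average monthly revenue
    months_in_business: int  # operating history
    prior_defaults: int      # count of past loan defaults
    requested_amount: float  # size of the requested loan

def risk_score(app: LoanApplication) -> float:
    """Combine a few signals into a single score between 0 and 1."""
    score = 0.5
    score += min(app.monthly_revenue / 100_000, 0.3)     # revenue helps, capped
    score += min(app.months_in_business / 120, 0.2)      # tenure helps, capped
    score -= 0.25 * app.prior_defaults                   # past defaults hurt
    score -= min(app.requested_amount / 1_000_000, 0.2)  # big asks are riskier
    return max(0.0, min(1.0, score))

def decide(app: LoanApplication) -> bool:
    """The 'one second, zero humans' step: a pure function of the data."""
    return risk_score(app) >= 0.6

# A small business with steady revenue and no defaults gets approved.
print(decide(LoanApplication(80_000, 36, 0, 200_000)))  # True
```

The point of the sketch is that the decision is a pure function of the application data, so approving a millionth applicant costs essentially nothing; that is why headcount stops scaling with loan volume.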

DAVIES: And there are cases where one company will do something which allows it to operate smarter and leaner, and it causes a loss of jobs somewhere else - like software that allows airlines to better maintain aircraft, for example. That's one example you give.

ROOSE: Exactly, yeah. So there are lots of ways in which AI and automation are being implemented to make processes more efficient, which, you know, we think of as generally a good thing. And it is. But it can also result in a loss of jobs. So yeah, if there is a company that's making an algorithm that, you know, tells airlines when to replace parts in their planes, maybe they end up buying fewer parts and maintaining them more. There are many, many examples of this happening throughout the economy. And it doesn't all replace jobs, but there are jobs being lost as a result.

DAVIES: Right, so one company just finds they're getting fewer orders. And way down the line, somebody somewhere doesn't need to order stuff because they are taking better care of what they have. You know, there is this growing industry of people who develop artificial intelligence programs. Sometimes it's software; sometimes it's hardware. And they sell to companies. Do they pitch it as, hey, this is going to save you money by cutting jobs?

ROOSE: It depends who's listening. And this is one of the big things that made me interested in this topic, was, you know, I live out in Silicon Valley, and I talk to a lot of people in the tech industry. And most of the time, they will tell you, you know, AI is going to be great for people. It's going to free them from mundane and repetitive tasks. It's going to make their lives better. These algorithms are going to personalize everything for them and make it easy for them to get around in the world.

But then there's this other story that I started hearing snippets of a few years ago when I decided to write this book, and that story was much more cynical and much more pessimistic. And it basically was these executives who were talking about automation and AI as purely a way to get rid of human workers, to cut their costs and to automate their workforces. And so the clearest example I have of this in the book is I was at a party a few years ago in San Francisco, and I started talking to a guy who was telling me about his startup, as most conversations at parties tend to go in the Bay Area. And he told me that his company had developed a piece of software that he was calling the boomer remover. And I sort of - I was confused by that. I said, what do you mean, the boomer remover? And he told me, well, this is a piece of software that allows companies with factories to use artificial intelligence to streamline the decision about what to produce on which machines on which days. This is called production planning, and humans have done that job for hundreds of years.

But this AI program that he had developed was allowing factories and companies to replace the supervisors of those factories - who he referred to as boomers because they were generally older and better paid - with an algorithm. And he was very proud of this. And so there are snippets of this more honest automation conversation that you occasionally hear. And that part was really disturbing to me because he was not concerned about the people who were going to lose their jobs to this technology. In fact, he was almost glad to be able to replace them.

DAVIES: You know, defenders of this say, look, this has been happening throughout history - the Industrial Revolution, successive waves of new technology when, you know, factories began to run on electricity. That changed things in big ways - and that, yes, jobs were eliminated, but new ones emerged. Should we be reassured by that argument?

ROOSE: Well, I believed in that argument for a long time. I mean, I am a tech writer. I'm not a Luddite. I love technology. I, you know, grew up with computers and on the Internet. And I was very optimistic about this technology because of that argument that you made that, you know, automation and artificial intelligence will destroy some jobs. But they will create other jobs. And those jobs will replace the lost ones. But as I started looking more into the present of AI and also the past of automation, I learned that it's not always that smooth. During the Industrial Revolution, for example, there were people who didn't find work for a long time. There were - you know, wages for workers didn't catch up to corporate profits for something like 50 years. So a lot of the people who went through those technological transformations didn't have a good time. They weren't necessarily happier or living better lives or wealthier as a result of this new technology. But there's also a difference today, which is that artificial intelligence is not just replacing sort of repetitive manual labor. It's also replacing repetitive cognitive labor.

It's able to do higher-value tasks, not just moving data around on a spreadsheet or moving car parts around in a factory. It's able to do the work of white-collar workers in fields that generally require college educations and specialized training. And that's one difference. And then the other difference is there's been some new research out about the effect that automation has been having on the economy. And it's shown that while for much of the 20th century automation was creating new jobs faster than it was destroying old ones, for the last few decades the opposite has been true: old jobs have been disappearing faster than new jobs have been created.

DAVIES: After rolling over these arguments a bit in the book, you say, well, if you were going to rate your view of this on a one-to-10 scale - one being "no worries, this is all going to work out," and 10 being "artificial intelligence will destroy everything we hold dear" - where are you on the scale?

ROOSE: Well, the answer is - I have two answers for that. One is about the technology itself. And on the technology itself, I am much more optimistic. I really think that AI and automation could produce amazing things for us. It could help us cure rare diseases. It could help us fix the climate crisis. It could do any number of amazing things that we really, really need.

I am much more worried, on the other hand, about the humans who are in charge of the AI and automation and what their motivations are. I mean, it's the people like the startup founder who told me about the boomer remover. But it's also the executives at large companies who are using automation to replace workers without transforming their companies, without developing new products. They're not trying to innovate and transform their businesses. They're purely trying to do the same amount of work with fewer people.

DAVIES: We need to take a break here. Let me reintroduce you. We're speaking with Kevin Roose. He is a technology columnist for The New York Times. His new book is "Futureproof: 9 Rules For Humans In The Age Of Automation." We'll continue our conversation in just a moment. This is FRESH AIR.

(SOUNDBITE OF THE ROOTS SONG, "SACRIFICE")

DAVIES: This is FRESH AIR. And we're speaking with New York Times technology columnist Kevin Roose. He has a new book about the impact of automation and artificial intelligence. It's called "Futureproof: 9 Rules For Humans In The Age Of Automation."

You know, it's interesting that a lot of this automation and artificial intelligence doesn't even involve a physical intervention in a workplace, necessarily. A lot of it is simply algorithms, which guide the work process through software. And you say one thing to beware of is bureaucratic bots - that is to say, algorithms which governments and institutions use to determine, you know, who qualifies for unemployment compensation or, in a private company, how benefits are managed. Do you want to explain what this is and what its impact is?

ROOSE: Yeah. The category I call bureaucratic bots is sort of made up of these algorithms that make decisions that affect people's lives in really dramatic and important ways. So I don't think people fully appreciate the extent to which things like benefits, who qualifies for nutrition assistance, who qualifies for public housing are determined by algorithms now. And sometimes that works fine. And some other times, it doesn't work so great. There was a case a few years ago in Michigan where an algorithm that the state was using to determine benefits eligibility misfired. And it kicked a lot of people off their benefits wrongly. And that affected people's lives in real, tangible ways.

There are other kinds of bots and automation being used by governments in the criminal justice system, for example, to predict whether a given defendant is likely to reoffend if you put them out on parole. And these algorithms are generally not open and inspectable by the public. They're sort of black boxes. And we don't really know how they work. And there's not a lot of accountability for them. And so as a result, we end up with these kind of mysterious machines making these decisions that affect millions, billions of people's lives. And we don't really understand what they're doing.
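Roose's Michigan example is easier to picture with a small sketch. What follows is a hypothetical eligibility bot, with invented rules and data, showing how a single wrong comparison can deny large numbers of claims automatically, with no human in the loop to notice.

```python
# Hypothetical benefits-eligibility bot. The rule and the data are
# invented; the point is how one wrong comparison, run automatically
# on every claim, denies people at scale with no human review.

def eligible(reported_income: float, employer_income: float) -> bool:
    # Intended rule: flag a claim only if the claimant's reported income
    # and the employer's reported figure disagree by MORE than 10%.
    mismatch = abs(reported_income - employer_income) / max(employer_income, 1)
    return mismatch <= 0.0  # BUG: should be `mismatch <= 0.10`

claims = [(1200.00, 1200.00), (1200.00, 1200.50), (900.00, 2000.00)]
for reported, employer in claims:
    verdict = "approved" if eligible(reported, employer) else "denied"
    print(f"reported={reported} employer={employer} -> {verdict}")

# Only the exact match survives. The second claimant, off by fifty
# cents of payroll rounding, is denied just like the genuinely
# mismatched third - and the same logic runs on every claim filed.
```

Because the rule runs identically on every claim, a bug like this doesn't produce one bad decision; it produces the same bad decision at scale, which is what makes unauditable black boxes so risky.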

DAVIES: And I guess when a process like that - who qualifies for what - is highly automated, even if there is an outrageous screw-up that clearly affects a lot of people and becomes known, it's hard to unwind and fix quickly, isn't it?

ROOSE: Exactly. And it requires humans to intervene and to undo a lot of the work that the machine has screwed up. And I think that's a real issue. And I think there's been some great writing on this. There's a book called "Automating Inequality" by a scholar named Virginia Eubanks. And she goes into a lot of examples of how this automation and AI technology is harming people, and is disproportionately harming people who don't have a lot of money, who are, you know, dependent on state benefits, people of color, marginalized communities. They suffer when these systems don't work as they're supposed to. And it often takes a long time to clean up the mess.

DAVIES: You looked in particular at YouTube and the way it recommended videos to regular YouTube watchers. YouTube is, of course, owned by Google. This is something that you wrote about in the Times, and it's also in your podcast, "Rabbit Hole." You want to explain what you learned about the algorithm that was recommending videos to YouTube users and its impact?

ROOSE: Well, one thing I didn't fully appreciate is how sophisticated the AI that powers YouTube is. YouTube is owned by Google. And Google has the best AI research team in America. And they produce the most award-winning papers. They have the best Ph.D.s. They - you know, they're at the vanguard of artificial intelligence. And a lot of that research and expertise for the last decade has been going into honing this YouTube algorithm with these techniques that are brand-new and that are making it much more effective. And something like 70% of all the time that people spend on YouTube is directly related to recommendations that come from this algorithm. And so one thing that I learned when I started looking into this is that this algorithm has changed a lot over the years. And it's become much more savvy about what will keep people on YouTube.

Maximizing watch time is the No. 1 goal of this algorithm. And so some of the ways that it's learned that it can keep people on YouTube for a long time are by introducing them to new ideas, maybe to conspiracy theories, maybe to more extreme versions of something that they already believe, things that will sort of lead them down these rabbit holes. And so this has had an effect on politics. This has had an effect on our culture. And it's resulted in some cases where people have been radicalized because the algorithm thought that radicalizing them would be a good way to keep them watching YouTube.
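The mechanism Roose describes can be reduced to a very small objective function. Below is a toy recommender, with invented video titles and predicted watch times, that ranks candidates purely by expected minutes watched. Real systems learn those predictions from user history rather than using a lookup table, but if the objective contains only watch time, nothing in it rewards accuracy or penalizes extremity.

```python
# Toy recommender that ranks candidate videos purely by predicted
# watch time. Titles and predictions are invented; real systems learn
# these predictions from user history, but if the objective is only
# watch time, nothing in it rewards truth or penalizes extremity.

candidates = {
    "local news recap":           4.0,   # predicted minutes watched
    "balanced policy explainer":  6.5,
    "escalating conspiracy clip": 18.0,  # engaging, regardless of truth
}

def recommend(predicted_minutes: dict[str, float], k: int = 1) -> list[str]:
    """Pick the k videos that maximize expected watch time - nothing else."""
    ranked = sorted(predicted_minutes, key=predicted_minutes.get, reverse=True)
    return ranked[:k]

print(recommend(candidates))  # ['escalating conspiracy clip']
```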

DAVIES: Wow. So the algorithm sees that I have a certain political orientation. And rather than, you know, popping up some videos which might give me another way of looking at it, it takes me a step further into my own beliefs and can result in extremist views of all kinds, right?

ROOSE: Yeah. I mean, these algorithms have no idea what they're recommending. That's one thing that I learned - it's not like there's an algorithm sitting inside YouTube's, you know, headquarters that's saying, you know, I want to radicalize this person, so I'm going to show them a conspiracy theory about QAnon or something like that. But it does learn what we enjoy. And it learns what we're attracted to. And it learns what will keep our attention.

And often, lies and conspiracy theories and extremist views are just more engaging than the truth. It's much more engaging to think that there is a conspiracy, you know, where people are, you know, being microchipped by Bill Gates every time they get a COVID vaccine than the truth, which is that these vaccines are effective. And they work. And there are no Bill Gates microchips inside of them. And so when you give that job to an algorithm and tell it to learn what people will respond to and don't give it any sort of parameters for that, it learns some intriguing and, sometimes, scary things.

DAVIES: What do Google and YouTube officials say when you've done these stories and you've contacted them for comment?

ROOSE: Well, they say that, you know, their algorithms are effective and that they're not, you know, radicalizing large numbers of people. They dispute this idea that there's this kind of extremism effect that their algorithms have. But they've also sort of tacitly acknowledged that this is happening because they've changed their algorithm a lot. They've, you know, kicked off a lot of the white supremacists and neo-Nazis who were, you know, major figures on YouTube. They've started monitoring what kind of content their algorithm is recommending to people and sort of reducing what they call borderline content - content that almost breaks their rules but doesn't quite. So they have changed a lot in response to, you know, criticism and awareness of what is going on there.

DAVIES: We need to take another break here. Let me reintroduce you. We are speaking with Kevin Roose. He's a technology columnist for The New York Times. His new book is "Futureproof: 9 Rules For Humans In The Age Of Automation." He'll be back to talk more after a short break. I'm Dave Davies. And this is FRESH AIR.

(SOUNDBITE OF FRANK ZAPPA'S "UNCLE MEAT: MAIN TITLE THEME")

DAVIES: This is FRESH AIR. I'm Dave Davies, in today for Terry Gross. We're speaking with New York Times technology columnist Kevin Roose. His new book is about the expanding use of artificial intelligence and automation to cut costs, transform workplaces and influence our buying habits and lifestyle choices. The book is called "Futureproof: 9 Rules For Humans In The Age Of Automation."

So if somebody is worried about this - somebody who has a job in a warehouse or somebody who drives a truck or any of the millions of jobs out in the economy - how do they evaluate that job's vulnerability to automation? And what do they do?

ROOSE: Well, the conventional way is by looking at it on sort of a job-by-job basis. So there - you know, there are studies showing that, you know, tax preparers have this chance of being automated or truck drivers have this chance. But I think that's the wrong framework because what matters and what we've seen over history is that certain occupations don't just disappear one day. Instead, they sort of change. People who, you know, are doing work that is more rote and repetitive become automated first. And then, you know, the automation sort of works its way up. And sometimes it hits a wall, and it can't do any more of the jobs in that field.

And so the version of that today that we're seeing is that, you know, some people within professions like journalism are, you know, very susceptible to automation. The people who write, you know, recaps of sports games or, you know, reports about the stock market or the kinds of corporate earnings reports that I used to write - those jobs are much more susceptible than jobs like, frankly, the ones we're doing right now, which are more about human connection and expression of complex ideas. So I think there's a way of looking at this that is not so much about what you do; it's about how you do it and how human you are in performing that work.

DAVIES: Yeah. It's interesting, you know, because I think the advice for a lot of people in an increasingly technically sophisticated age is, you know, learn math, learn computer science, you know, forget about all this humanities stuff. It sounds like you're saying that that's - the humanities are really important if you're going to live in this world.

ROOSE: Yeah. That was one of the fascinating things I learned while I was researching this book. The more AI experts and computer scientists I talked to, the more sure I became that we have been preparing people for the future in exactly the wrong way. We've been telling them, you know, develop these kind of technical skills in fields like computer science and engineering. We've been telling people to become as productive as possible to optimize their lives, to squeeze out all the inefficiency and spend their time as effectively as possible, in essence, to become more like machines. And really, what we should be teaching people is to be more like humans, to do the things that machines can't do.

And so in the book, I go over a few types of activities that I learned, in researching this book, are very hard for machines to accomplish as effectively as humans. There are three categories of work that I think are unlikely to be automated in the near future. One is surprising work. So this is work that involves complex rules, changing environments, unexpected variables. AI and automation really like regularity. They like concrete rules, sort of bounded environments and repetitive action. So this is why, like, AI can beat a human in chess. But if you asked an algorithm to teach a kindergarten class, it would fail miserably because that's a very irregular environment with lots of surprises going on. So those surprising jobs are the first jobs I think are relatively safe.

The second category is what I call social jobs, jobs that involve making people feel things rather than making things. So these would be the jobs in social services and health care, nursing, therapists, ministers, but also people who perform sort of emotional labor as part of their jobs - people like flight attendants and baristas, you know, people we don't typically think of as being sort of social workers. But their jobs do involve an element of making people feel things.

And the third category of work that I think is safe is what I call scarce work. And this is work that involves sort of high-stakes situations, rare combinations of skills or just people who are sort of experts in their fields. And this would include jobs that we have decided are unacceptable to automate. So you know, we could replace all of the human 911 operators with robots. That technology exists. But if you call 911 today, you will get a human because we want humans to be doing that job. When we're in trouble, we want a human to pick up the phone and help us to deal with our problems.

DAVIES: You know, some of the ideas that you present to deal with this rise of automation involve individual choices. But some of them are really at the level of society, large institutions, the government. What should elected officials and policy analysts be focused on as we confront these issues?

ROOSE: Well, I think we need to prepare for the possibility that a lot of people are going to fall through the cracks of this technological transformation. It's happened during every technological transformation we've ever had, and it's going to happen this time. And in fact, it already is happening. And so - you know, there have been various solutions proposed. You know, universal basic income is the kind of Andrew Yang solution to this. And I think that's probably a good idea. We're seeing now during the pandemic with these stimulus checks that actually giving people money is a really good way to get people out of poverty and to sustain them through periods of hardship. So something like that, like universal basic income, could help. Something like "Medicare for All" could also help by not - by sort of detaching health care from our work. You know, a big part of the reason people don't quit their jobs, even if they know they're about to be automated, is because they don't want to go without health care.

But there are also solutions that we could put in place that already exist in other countries. So in Sweden, for example, there are these job councils that are basically sort of public-private partnerships that essentially catch workers who are displaced by automation and layoffs. And they help retrain them. They sustain them while they're looking for work, and they find them other work. And that works very effectively in that country. In Japan, there's a similar practice. And so I think we need to take this really seriously. And I think we - you know, we can do a lot more than we're currently doing.

DAVIES: We need to take another break here. Let me reintroduce you. We're speaking with Kevin Roose. He's a technology columnist for The New York Times. His new book is "Futureproof: 9 Rules For Humans In The Age Of Automation." We'll continue our conversation after this short break. This is FRESH AIR.

(SOUNDBITE OF JULIAN LAGE'S "IOWA TAKEN")

DAVIES: This is FRESH AIR, and we're speaking with New York Times technology columnist Kevin Roose. His new book is about the expanding use of artificial intelligence and automation in our lives. The book is "Futureproof: 9 Rules For Humans In The Age Of Automation."

You don't just report on automation for the Times; you've also spent a fair amount of time reporting on online extremism. And for a recent episode of The New York Times radio program and podcast "The Daily," you described watching the reaction of followers of the QAnon conspiracy theory to the events of Inauguration Day, January 20, when Joe Biden was sworn in. I wanted to play a clip of this because you captured some of these reactions. You want to just first tell us what you were doing on Inauguration Day and what you wanted to see?

ROOSE: Yeah. Well, I have two computer monitors in my office side by side. And on one of them, I was watching the inauguration. You know, I had a stream going, and I was watching Joe Biden get sworn into office. And on the other screen, I was looking at this kind of QAnon reality.

There were these predictions that people who believe in this conspiracy theory had made that Joe Biden would not actually be inaugurated, that Donald Trump would implement martial law and stop the proceedings and announce the mass arrests of elite pedophiles and satanic criminals and that there would be this kind of day of reckoning during the inauguration that would result in Donald Trump taking a second term in office. And so I was watching those people, the people who believed in that theory, respond to the events of the actual inauguration that was happening on my other screen.

DAVIES: All right, so we're going to listen to a bit of this. This is from the podcast "The Daily." And what we'll hear is a QAnon follower anticipating Trump seizing power before Biden can be sworn into office. And then we hear you describing this, and then some other followers. So let's just listen.

(SOUNDBITE OF PODCAST, "THE DAILY")

UNIDENTIFIED PERSON #1: It would be hilarious if Trump did the emergency broadcast in the middle of the inauguration. That would be hilarious.

ROOSE: As inauguration started, politicians, you know, walk in, take their seats. These people that they believe are members of this global cabal of criminals - Hillary Clinton, George W. Bush, Barack Obama - they're all in one place.

UNIDENTIFIED PERSON #2: I got the popcorn ready.

UNIDENTIFIED PERSON #3: I'm optimistic.

UNIDENTIFIED PERSON #4: Almost like it's the moment of truth.

ROOSE: One person on a QAnon message board writes, the next 48 hours will be like the entire Revolutionary War and the fall of Berlin compressed into two days. I have called off work so I can witness history in the making. What a time to be alive.

DAVIES: And that's our guest Kevin Roose on an episode of the podcast "The Daily" in which he's watching QAnon followers in real time as they anticipate Trump taking power on Inauguration Day, January 20. That obviously didn't happen. And I know, Kevin Roose, that you've maintained contact with a number of QAnon followers. You know, others hoped that March 4 - I think that was the date that historically served as Inauguration Day - would be another opportunity for Trump to make his move. Nothing happened. Life goes on. We have a new president. I'm wondering how those that you are in contact with are reacting to this collision with reality.

ROOSE: Well, it varies. So some of them have gotten disillusioned with QAnon. They've said, you know, maybe we've been lied to. Maybe this whole thing, you know, was made up, and maybe I'm going to go find some other way to spend my time or some other conspiracy theory to attach myself to. But then there are people who just move the goalposts. They say, OK, well, it's not that Q, the sort of mysterious, anonymous figure at the center of the QAnon movement - it's not that Q is wrong. It's just that we misinterpreted Q.

So the real date of this great awakening, they call it, will be sometime in the future. It'll be, you know, in the 2024 election maybe or maybe even before then. And so there's this sort of reluctance to accept reality that I don't think is unique to QAnon. I mean, we've seen, you know, religious groups that predict that the world is going to end on a certain day. You know, that day comes and goes. And they don't lose their faith. They just sort of shift their expectations.

DAVIES: A lot of people have talked about the need to address this increasing embrace of deluded thinking in a lot of these conspiracy theories. And, you know, social media institutions have responded in some ways. But it seemed like this was potentially an inflection point, where a clearly predicted and anticipated event just didn't happen - where reality undermines the thinking. Is it an opportunity to intervene in some way? If it is, who should do it? I'm wondering if you've thought about that - how you begin to, I don't know, weaken the hold of some of this thinking on its followers.

ROOSE: I think it is a moment for potential intervention. I get emails every day from people who say, you know, my mom or my brother or my neighbor or my colleague has gotten really into Internet conspiracy theories. How do I get them out of it? And it's a really hard question. That is not something we know the answer to. But I think it does work better in moments where there is sort of uncertainty and people are sort of grappling with what they believe and whether or not it's true. So I think, yeah, this is a moment where some people may not change their thinking at all and they may still be resistant to being sort of reintroduced to reality, but for other people, there might be a kind of break in the clouds and a chance to bring them back.

DAVIES: You're a technology writer. I'm wondering, just looking ahead, are there new technologies, new trends that you think are important for you to follow up on? I mean, what questions do you think you'll be examining in the next year?

ROOSE: Well, AI is fascinating. I - you know, we've talked a lot about the potential negatives of it, but there are a lot of potential positives, too. I mean, one area I'm looking at right now is the use of AI in medicine and health care, not just to sort of make things more efficient, but to discover new drugs, you know, to allow doctors to do kind of better analysis and diagnosis of patients. I think that's really promising, so I'm looking at that.

And I'm also excited about just the stuff that is in our, you know, homes and lives just getting - continuing to get better. You know, I don't remember if you - or I don't know if you remember, but, like, when, you know, something like Siri first came out, like, it wasn't very good (laughter). Like, you would ask, you know, Siri, what time is it? And she would respond, thyme is an herb used in cooking, you know?

DAVIES: Right.

ROOSE: It's something like that. And those models, those AIs have gotten much more accurate in the past few years. And I think that's something to be excited about and also to monitor very closely because it's not always good.

DAVIES: Yeah, one of your nine rules is demote your devices, right (laughter)?

ROOSE: Yeah. We need to be in control of our technology. There's a way in which we use our tools, and there's a way in which our tools use us. And so I think restoring authority over the technology in our lives so that we are in the driver's seat, we are controlling our own human experience, I think that's really important.

DAVIES: Well, Kevin Roose, thank you so much for speaking with us.

ROOSE: Thanks so much for having me.

DAVIES: Kevin Roose is a technology columnist for the New York Times and host of "Rabbit Hole," an eight-part podcast about how the Internet is affecting us. His new book is "Futureproof: 9 Rules For Humans In The Age Of Automation."

(SOUNDBITE OF ALLISON MILLER'S "VALLEY OF THE GIANTS")

DAVIES: Coming up, Justin Chang reviews "Quo Vadis, Aida?", the Oscar-nominated film about the massacre of Muslims in the town of Srebrenica near the end of the Bosnian war. This is FRESH AIR.

(SOUNDBITE OF JASON MORAN'S "BIG STUFF")

Transcript provided by NPR, Copyright NPR.
