Welcome, Robot Overlords. Please Don't Fire Us?
Smart machines probably won't kill us all—but they'll definitely take our jobs, and sooner than you think.
Mon May. 13, 2013 3:00 AM PDT
This is a story about the future. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.
The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It's up to us.
Maybe you think I'm pulling your leg here. Or being archly ironic. After all, this does have a bit of a rose-colored tint to it, doesn't it? Like something from The Jetsons or the cover of Wired [2]. That would hardly be a surprising reaction. Computer scientists have been predicting the imminent rise of machine intelligence since at least 1956, when the Dartmouth Summer Research Project on Artificial Intelligence [3] gave the field its name, and there are only so many times you can cry wolf. Today, a full seven decades after the birth of the computer, all we have are iPhones, Microsoft Word, and in-dash navigation. You could be excused for thinking that computers that truly match the human brain are a ridiculous pipe dream.
But they're not. It's true that we've made far slower progress toward real artificial intelligence than we once thought, but that's for a very simple and very human reason: Early computer scientists grossly underestimated the power of the human brain and the difficulty of emulating one. It turns out that this is a very, very hard problem, sort of like filling up Lake Michigan one drop at a time. In fact, not just sort of like. It's exactly like filling up Lake Michigan one drop at a time. If you want to understand the future of computing, it's essential to understand this.
Suppose it's 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while.
By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool.
At this point it's been 30 years, and even though 16,000 gallons is a fair amount of water, it's nothing compared to the size of Lake Michigan. To the naked eye you've made no progress at all.
So let's skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It's now been 70 years and you still don't have enough water to float a goldfish. Surely this task is futile?
But wait. Just as you're about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you're done. After 70 years you had nothing. Fifteen years later, the job was finished.
IF YOU HAVE ANY KIND OF BACKGROUND in computers, you've already figured out that I didn't pick these numbers out of a hat. I started in 1940 because that's about when the first programmable computer was invented [4]. I chose a doubling time of 18 months because of a cornerstone of computer history called Moore's Law [5], which famously estimates that computing power doubles approximately every 18 months. And I chose Lake Michigan because its size, in fluid ounces, is roughly the same as the computing power of the human brain measured in calculations per second.
In other words, just as it took us until 2025 to fill up Lake Michigan, the simple exponential curve of Moore's Law suggests it's going to take us until 2025 to build a computer with the processing power of the human brain. And it's going to happen the same way: For the first 70 years, it will seem as if nothing is happening, even though we're doubling our progress every 18 months. Then, in the final 15 years, seemingly out of nowhere, we'll finish the job.
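If you want to check that arithmetic yourself, here is a minimal sketch of the doubling schedule described above, in Python. The 1940 start date and 18-month doubling come straight from the analogy; the figure of roughly 1.6e17 fluid ounces for Lake Michigan is an assumption of the sketch, not a number from the article, though it is close to the lake's real volume. Its output tracks the milestones in the story: a gallon or so by 1950, about 16,000 gallons by 1970, roughly a tenth of the lake by 2020, and a full lake in the mid-2020s.

```python
# A minimal sketch of the Lake Michigan arithmetic: one fluid ounce poured in
# 1940, with the pour doubling every 18 months. The lake volume below is an
# assumption of this sketch (roughly the real figure), not a number taken
# from the article.
LAKE_OZ = 1.6e17   # approximate volume of Lake Michigan, in fluid ounces
GALLON = 128       # fluid ounces per gallon

def total_ounces(year):
    """Cumulative water added by a given year: a geometric series."""
    doublings = int((year - 1940) // 1.5)
    return 2 ** (doublings + 1) - 1

for year in (1950, 1970, 2000, 2010, 2020):
    total = total_ounces(year)
    print(f"{year}: {total / GALLON:>16,.0f} gallons "
          f"({100 * total / LAKE_OZ:.5f}% of the lake)")

year = 1940
while total_ounces(year) < LAKE_OZ:
    year += 1.5
print(f"Lake full around {year:.0f}")   # lands in the mid-2020s
```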
And that's exactly where we are. We've moved from computers with a trillionth of the power of a human brain to computers with a billionth of the power. Then a millionth. And now a thousandth. Along the way, computers progressed from ballistics to accounting to word processing to speech recognition, and none of that really seemed like progress toward artificial intelligence. That's because even a thousandth of the power of a human brain is—let's be honest—a bit of a joke. Sure, it's a billion times more than the first computer had, but it's still not much more than the computing power of a hamster.
This is why, even with the IT industry barreling forward relentlessly, it has never seemed like we were making any real progress on the AI front. But there's another reason as well: Every time computers break some new barrier, we decide—or maybe just finally get it through our thick skulls—that we set the bar too low. At one point, for example, we thought that playing chess at a high level would be a mark of human-level intelligence. Then, in 1997, IBM's Deep Blue supercomputer beat world champion Garry Kasparov [6], and suddenly we decided that playing grandmaster-level chess didn't imply high intelligence after all.
So maybe translating human languages would be a fair test? Google Translate does a passable job of that these days. Recognizing human voices and responding appropriately? Siri mostly does that, and better systems are on the near horizon. Understanding the world well enough to win a round of Jeopardy! against human competition? A few years ago IBM's Watson supercomputer beat the two best human Jeopardy! champions [7] of all time. Driving a car? Google has already logged more than 300,000 miles in its driverless cars [8], and in another decade they may be commercially available.
The truth is that all this represents more progress toward true AI than most of us realize. We've just been limited by the fact that computers still aren't quite muscular enough to finish the job. That's changing rapidly, though. Computing power is measured in calculations per second—a.k.a. floating-point operations per second, or "flops"—and the best estimates of the human brain suggest that our own processing power is about equivalent to 10 petaflops. ("Peta" comes after giga and tera.) That's a lot of flops, but last year an IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory was clocked at 16.3 petaflops [9].
Of course, raw speed isn't everything. Livermore's Blue Gene/Q fills a room, requires eight megawatts of power to run [10], and costs about $250 million. What's more, it achieves its speed not with a single superfast processor, but with 1.6 million ordinary processor cores running simultaneously. While that kind of massive parallel processing is ideally suited for nuclear-weapons testing, we don't know yet if it will be effective for producing AI.
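For a sense of scale, here is a quick back-of-the-envelope comparison using the Blue Gene/Q figures quoted above and the article's 10-petaflop estimate for the brain. The 20-watt figure for the brain's power draw is an assumption of this sketch, not something from the article.

```python
# Rough arithmetic on the Blue Gene/Q figures quoted above: 16.3 petaflops,
# 1.6 million cores, 8 megawatts. The ~20 W metabolic budget of a human brain
# is an assumption of this sketch.
MACHINE_FLOPS = 16.3e15
CORES = 1.6e6
MACHINE_WATTS = 8e6
BRAIN_FLOPS = 10e15    # the article's ~10 petaflop estimate
BRAIN_WATTS = 20       # assumed

print(f"Per core:    {MACHINE_FLOPS / CORES / 1e9:.0f} gigaflops")
print(f"Blue Gene/Q: {MACHINE_FLOPS / MACHINE_WATTS / 1e9:.0f} gigaflops per watt")
print(f"Human brain: {BRAIN_FLOPS / BRAIN_WATTS / 1e12:.0f} teraflops per watt")
```

On those assumptions, the machine matches the brain's raw throughput only by spreading the work across well over a million ordinary cores and burning a few hundred thousand times as much power.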
But plenty of people are trying to figure it out. Earlier this year, the European Commission chose two big research endeavors to receive a half billion euros each, and one of them was the Human Brain Project led by Henry Markram [11], a neuroscientist at the Swiss Federal Institute of Technology in Lausanne. He uses another IBM supercomputer in a project aimed at modeling the entire human brain. Markram figures he can do this by 2020 [12].
That might be optimistic. At the same time, it also might turn out that we don't need to model a human brain in the first place. After all, when the Wright brothers built the first airplane, they didn't model it after a bird with flapping wings. Just as there's more than one way to fly, there's probably more than one way to think, too.
Google's driverless car, for example, doesn't navigate the road the way humans do. It uses four radars, a 64-beam laser range finder, a camera, GPS, and extremely detailed high-res maps. What's more, Google engineers drive along test routes to record data before they let the self-driving cars loose.
Is this disappointing? In a way, yes: Google has to do all this to make up for the fact that the car can't do what any human can do while also singing along to the radio, chugging a venti, and making a mental note to pick up the laundry. But that's a cramped view. Even when processing power and software get better, there's no reason to think that a driverless car should replicate the way humans drive. They will have access to far more information than we do, and unlike us they'll have the power to make use of it in real time. And they'll never get distracted when the phone rings.
In other words, you should still be impressed. When we think of human cognition, we usually think about things like composing music or writing a novel. But a big part of the human brain is dedicated to more prosaic functions, like taking in a chaotic visual field and recognizing the thousands of separate objects it contains. We do that so automatically we hardly even think of it as intelligence. But it is, and the fact that Google's car can do it at all is a real breakthrough.
The exact pace of future progress remains uncertain. For example, some physicists think that Moore's Law may break down [13] in the near future and constrain the growth of computing power. We also probably have to break lots of barriers in our knowledge of neuroscience before we can write the software that does all the things a human brain can do. We have to figure out how to make petaflop computers smaller and cheaper. And it's possible that the 10-petaflop estimate of human computing power is too low in the first place.
Nonetheless, in Lake Michigan terms, we finally have a few inches of water in the lake bed, and we can see it rising. All those milestones along the way—playing chess, translating web pages, winning at Jeopardy!, driving a car—aren't just stunts. They're precisely the kinds of things you'd expect as we struggle along with platforms that aren't quite powerful enough—yet. True artificial intelligence will very likely be here within a couple of decades. Making it small, cheap, and ubiquitous might take a decade more.
In other words, by about 2040 our robot paradise awaits.
AND NOW FOR THE BAIT and switch. I promised you this would be a happy story, and in the long run it is.
But first we have to get there. And at this point our tale takes a darker turn. What do we do over the next few decades as robots become steadily more capable and steadily begin taking away all our jobs? This is the kind of thing that futurologists write about frequently, but when I started looking for answers from mainstream economists, it turned out there wasn't much to choose from. The economics community just hasn't spent much time over the past couple of decades focusing on the effect that machine intelligence is likely to have on the labor market.
Now is a particularly appropriate time to think about this question, because it was two centuries ago this year that 64 men were brought to trial in York, England. Their crime? They were skilled weavers who fought back against the rising tide of power looms they feared would put them out of work. The Luddites spent two years burning mills and destroying factory machinery, and the British government was not amused. Of the 64 men charged in 1813, 25 were transported to Australia and 17 were led to the gallows.
Since then, Luddite has become a derisive term for anyone afraid of new technology. After all, the weavers turned out to be wrong. Power looms put them out of work, but in the long run automation made the entire workforce more productive. Everyone still had jobs—just different ones. Some ran the new power looms, others found work no one could have imagined just a few decades before, in steel mills, automobile factories, and railroad lines. In the end, this produced wealth for everyone, because, after all, someone still had to make, run, and maintain the machines.
But that was then. During the Industrial Revolution, machines were limited to performing physical tasks. The Digital Revolution is different because computers can perform cognitive tasks too, and that means machines will eventually be able to run themselves. When that happens, they won't just put individuals out of work temporarily. Entire classes of workers will be out of work permanently.
In other words, the Luddites weren't wrong. They were just 200 years too early.
This isn't something that will happen overnight. It will happen slowly, as machines grow increasingly capable. We've already seen it in factories, where robots do work that used to be done by semiskilled assembly line workers. In a decade, driverless cars will start to put taxi hacks and truck drivers out of a job. And while it's easy to believe that some jobs can never be done by machines—do the elderly really want to be tended by robots?—that may not be true. Nearly 50 years ago, when MIT computer scientist Joseph Weizenbaum created a therapy simulation program named Eliza, he was astonished to discover just how addictive it was. Even though Eliza was almost laughably crude, it was endlessly patient and seemed interested in your problems. People liked talking to Eliza.
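To get a feel for just how crude Eliza was, here is a toy sketch of the general technique: keyword spotting plus canned reflections. It is a hypothetical illustration, not a reconstruction of Weizenbaum's actual program, which used a much richer script of rules.

```python
import re

# A toy, Eliza-style responder: scan for a keyword pattern, then echo part of
# the user's statement back inside a canned question. The rules below are
# illustrative inventions, not Weizenbaum's original script.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)",   "How long have you been {0}?"),
    (r"\bmy (\w+)",    "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am upset because my children never call"))
# -> How long have you been upset because my children never call?
```

The trick is entirely superficial, and yet, as Weizenbaum found, people confided in it anyway.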
And that was 50 years ago, using only a keyboard and an old Teletype terminal. Add a billion times more processing power and you start to get something much closer to real social interaction. Robotic pets are growing so popular that Sherry Turkle, an MIT professor who studies the way we interact with technology, is uneasy about it: "The idea of some kind of artificial companionship [14]," she says, "is already becoming the new normal [14]."
It's not hard to see why. Unlike humans, an intelligent machine does whatever you want it to do, for as long as you want it to. You want to gossip? It'll gossip. You want to complain for hours on end about how your children never call? No problem. And as the technology of robotics advances—the Pentagon has developed a fully functional robotic arm that can be controlled by a human mind—they'll be able to perform ordinary human physical tasks too. They'll clean the floor, do your nails, diagnose your ailments, and cook your food.
Increasingly, then, robots will take over more and more jobs. And guess who will own all these robots? People with money, of course. As this happens, capital will become ever more powerful and labor will become ever more worthless. Those without money—most of us—will live on whatever crumbs the owners of capital allow us.
This is a grim prediction. But it's not nearly as far-fetched as it sounds. Economist Paul Krugman recently remarked that our long-standing belief in skills and education as the keys to financial success may well be outdated. In a blog post titled "Rise of the Robots [15]," he reviewed some recent economic data and predicted that we're entering an era where the prime cause of income inequality will be something else entirely: capital vs. labor.
Until a decade ago, the share of total national income going to workers was pretty stable at around 70 percent, while the share going to capital—mainly corporate profits and returns on financial investments—made up the other 30 percent. More recently, though, those shares have started to change. Slowly but steadily, labor's share of total national income has gone down [16], while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.
In the economics literature, the increase in the share of income going to capital owners is known as capital-biased technological change, or CBTC. Let's take a layman's look at what that means.
The question we want to answer is simple: If CBTC is already happening—not a lot, but just a little bit—what trends would we expect to see? What are the signs of a computer-driven economy? First and most obviously, if automation were displacing labor, we'd expect to see a steady decline in the share of the population that's employed [19].
Second, we'd expect to see fewer job openings [20] than in the past. Third, as more people compete for fewer jobs, we'd expect to see middle-class incomes flatten [21] in a race to the bottom. Fourth, with consumption stagnant, we'd expect to see corporations stockpile more cash and, fearing weaker sales, invest less [22] in new products and new factories. Fifth, as a result of all this, we'd expect to see labor's share of national income decline and capital's share rise.
These trends are the five horsemen of the robotic apocalypse, and guess what? We're already seeing them, and not just because of the crash of 2008. They started showing up in the statistics more than a decade ago. For a while, though, they were masked by the dot-com and housing bubbles, so when the financial crisis hit, years' worth of decline was compressed into 24 months. The trend lines dropped off the cliff.
How alarmed should we be by this? In one sense, a bit of circumspection is in order. The modern economy is complex, and most of these trends have multiple causes. The decline in the share of workers who are employed, for example, is partly caused by the aging of the population. What's more, the financial crisis has magnified many of these trends. Labor's share of income will probably recover a bit once the economy finally turns up.
But in another sense, we should be very alarmed. It's one thing to suggest that robots are going to cause mass unemployment starting in 2030 or so. We'd have some time to come to grips with that. But the evidence suggests that—slowly, haltingly—it's happening already, and we're simply not prepared for it.
How exactly will this play out? Economist David Autor has suggested that the first jobs to go will be middle-skill jobs [23]. Despite impressive advances, robots still don't have the dexterity to perform many common kinds of manual labor that are simple for humans—digging ditches, changing bedpans. Nor are they any good at jobs that require a lot of cognitive skill—teaching classes, writing magazine articles. But in the middle you have jobs that are both fairly routine and require no manual dexterity. So that may be where the hollowing out starts: with desk jobs in places like accounting or customer support.
That hasn't yet happened in earnest because AI is still in its infancy. But it's not hard to see which direction the wind is blowing. The US Postal Service, for example, used to employ humans to sort letters [24], but for some time now, that's been done largely by machines that can recognize human handwriting. Netflix does a better job picking movies you might like than a bored video-store clerk. Facial recognition software is improving rapidly, and that's a job so human there's an entire module in the human brain, the fusiform gyrus, solely dedicated to this task.
In fact, there's even a digital sports writer [25]. It's true that a human being wrote this story—ask my mother if you're not sure—but in a decade or two I might be out of a job too. Doctors should probably be worried as well. Remember Watson, the Jeopardy!-playing computer? It's now being fed millions of pages of medical information so that it can help physicians do a better job [26] of diagnosing diseases. In another decade, there's a good chance that Watson will be able to do this without any human help at all.
This is, admittedly, pretty speculative. Still, even if it's hard to find concrete examples of computers doing human work today, it's going to get easier before long.
Take driverless cars. My newspaper is delivered every day by a human being. But because humans are fallible, sometimes I don't get a paper, or I get the wrong one. This would be a terrific task for a driverless car in its early stages of development. There are no passengers to worry about. The route is fixed. Delivery is mostly done in the early morning, when traffic is light. And the car's abundance of mapping and GPS data would ensure that it always knows which house is which.
The next step might be passenger vehicles on fixed routes, like airport shuttles. Then long-haul trucks. Then buses and taxis. There are 2.5 million workers who drive trucks, buses, and taxis for a living, and there's a good chance that, one by one, all of them will be displaced by driverless vehicles within the next decade or two. What will they do when that happens? Machines will be putting everyone else with modest skill levels out of work too. There will be no place to go but the unemployment line.
WHAT CAN WE DO about this? First and foremost, we should be carefully watching those five economic trends linked to capital-biased technological change to see if they rebound when the economy picks up. If, instead, they continue their long, downward slide, it means we've already entered a new era.
Next, we'll need to let go of some familiar convictions. Left-leaning observers may continue to think that stagnating incomes can be improved with better education and equality of opportunity. Conservatives will continue to insist that people without jobs are lazy bums who shouldn't be coddled. They'll both be wrong.
Corporate executives should worry too. For a while, everything will seem great for them: Falling labor costs will produce heftier profits and bigger bonuses. But then it will all come crashing down. After all, robots might be able to produce goods and services, but they can't consume them. And eventually computers will become pretty good CEOs as well.
Solutions to this will remain elusive as long as we resist facing the real change in the way our economy works. When we finally do, we'll probably have only a few options open to us. The simplest, because it's relatively familiar, is to tax capital at high rates and use the money to support displaced workers. In other words, as The Economist's Ryan Avent puts it, "redistribution, and a lot of it [27]."
There's not much question that this could work, but would we be happy in a society that offers real work to a dwindling few and bread and circuses for the rest? Most likely, owners of capital would strongly resist higher taxes, as they always have, while workers would be unhappy with their enforced idleness. Still, the ancient Romans managed to get used to it—with slave labor playing the role of robots—and we might have to, as well.
Alternatively, economist Noah Smith suggests that we might have to fundamentally change the way we think about how we share economic growth [28]. Right now, he points out, everyone is born with an endowment of labor by virtue of having a body and a brain that can be traded for income. But what to do when that endowment is worth a fraction of what it is today? Smith's suggestion: "Why not also an endowment of capital? What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity?"
In simple terms, if owners of capital are capturing an increasing fraction of national income, then that capital needs to be shared more widely if we want to maintain a middle-class society. Somehow—and I'm afraid a bit of vagueness is inevitable here—an increasing share of corporate equity will need to be divvied up among the entire population as workers are slowly but surely stripped of their human capital. Perhaps everyone will be guaranteed ownership of a few robots, or some share of robot production of goods and services.
But whatever the answer—and it might turn out to be something we can't even imagine right now—it's time to start thinking about our automated future in earnest. The history of mass economic displacement isn't encouraging—fascists in the '20s, Nazis in the '30s—and recent high levels of unemployment in Greece and Italy have already produced rioting in the streets and larger followings for right-wing populist parties. And that's after only a few years of misery.
So far, though, the topic has gotten surprisingly little attention among economists. At MIT, Autor has written about the elimination of middle-class jobs thanks to encroaching technology, and his colleagues, Erik Brynjolfsson and Andrew McAfee of MIT's Center for Digital Business, got a lot of attention a couple of years ago for their e-book Race Against the Machine [29], probably the best short introduction to the subject of automation and jobs. (Though a little too optimistic about the future of humans, I think.) The fact that Paul Krugman is starting to think about this deeply is also good news.
But it's not enough. When the robot revolution finally starts to happen, it's going to happen fast, and it's going to turn our world upside down. It's easy to joke about our future robot overlords—R2-D2 or the Terminator?—but the challenge that machine intelligence presents really isn't science fiction anymore. Like Lake Michigan with an inch of water in it, it's happening around us right now even if it's hard to see. A robotic paradise of leisure and contemplation eventually awaits us, but we have a long and dimly lit tunnel to navigate before we get there.
Links:
[1] http://www.motherjones.com/media/2013/05/robots-modern-unimate-watson-roomba-timeline
[2] http://www.wired.com/gadgetlab/2012/12/ff-robots-will-take-our-jobs/all/
[3] http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
[4] http://www.computerhistory.org/timeline/?year=1941
[5] http://computer.howstuffworks.com/moores-law.htm
[6] http://www9.georgetown.edu/faculty/bassr/511/projects/letham/final/chess.htm
[7] http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?pagewanted=all&_r=1&
[8] http://tech.fortune.cnn.com/2012/11/12/self-driving-cars/
[9] http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/
[10] https://asc.llnl.gov/computing_resources/sequoia/
[11] http://www.nature.com/news/brain-simulation-and-graphene-projects-win-billion-euro-competition-1.12291
[12] http://news.bbc.co.uk/2/hi/8164060.stm
[13] http://techland.time.com/2012/05/01/the-collapse-of-moores-law-physicist-says-its-already-happening/
[14] http://www.livescience.com/27204-human-robot-relationships-turkle.html
[15] http://krugman.blogs.nytimes.com/2012/12/08/rise-of-the-robots/
[16] http://blogs.reuters.com/felix-salmon/2012/09/26/chart-of-the-day-the-long-decline-of-labor/
[17] http://earlywarn.blogspot.com/2012/04/global-robot-population.html
[18] http://www.motherjones.com/kevin-drum/2012/04/chart-day-our-robot-overlords-will-take-over-soon
[19] http://www.motherjones.com/kevin-drum/2011/11/raw-data-whos-working
[20] http://www.brookings.edu/blogs/up-front/posts/2011/09/09-jobs-winship
[21] http://www.motherjones.com/kevin-drum/2011/09/inflection-point-2000
[22] http://www.motherjones.com/kevin-drum/2013/01/chart-decade-corporations-are-pessimistic-about-future-growth
[23] http://krugman.blogs.nytimes.com/2011/03/06/autor-autor/
[24] http://about.usps.com/publications/pub100/pub100_042.htm
[25] http://www.nytimes.com/2011/09/11/business/computer-generated-articles-are-gaining-traction.html?pagewanted=all&_r=0
[26] http://www.theatlantic.com/magazine/archive/2013/03/the-robot-will-see-you-now/309216/
[27] http://www.economist.com/blogs/freeexchange/2013/03/labour-markets-0
[28] http://www.theatlantic.com/business/archive/2013/01/the-end-of-labor-how-to-protect-workers-from-the-rise-of-the-robots/267135/
[29] http://www.amazon.com/Race-Against-Machine-Accelerating-Productivity/dp/0984725113