DEPENDING ON WHOM you ask, robots and artificial intelligence are either coming to take your job, or you’re perfectly safe, at least for the near future. Truth is, automation always has and always will put people out of work. It’s just that this time around, even highly skilled jobs may be imperiled. And that has some folks dreading a time in which robots and AI upend the human workforce.
Included among those folks is San Francisco supervisor Jane Kim, who Wednesday launched a campaign called the Jobs of the Future Fund to study a statewide “payroll” tax on job-stealing machines. Proceeds from the tax would bankroll things like job retraining, free community college, or perhaps a universal basic income―countermeasures Kim thinks might make a robotic future more bearable for humans.
Artificial intelligence has had its share of ups and downs recently. In what was widely seen as a key milestone for artificial intelligence (AI) researchers, one system beat a former world champion at a mind-bendingly intricate board game. But then, just a week later, a “chatbot” that was designed to learn from its interactions with humans on Twitter had a highly public racist meltdown on the social networking site.
How did this happen, and what does it mean for the dynamic field of AI?
Machines contain the breadth of human knowledge, yet they have the common sense of a newborn. The problem is that computers don’t act enough like toddlers. Yann LeCun, director of artificial intelligence research at Facebook, demonstrates this by standing a pen on the table and then holding his phone in front of it. He performs a sleight of hand, and when he picks the phone up—ta-da! The pen is gone. It’s a trick that’ll elicit a gasp from any one-year-old child, but today’s cutting-edge artificial intelligence software—and most months-old babies—can’t appreciate that the disappearing act isn’t normal. “Before they’re a few months old, you play this trick on them, and they don’t care,” says LeCun, a 54-year-old father of three. “After a few months, they figure out this is not normal.”
There’s been a lot of fear about the future of artificial intelligence.
Stephen Hawking and Elon Musk worry that AI-powered computers might one day become uncontrollable super-intelligent demons. So does Bill Gates.
But Baidu chief scientist Andrew Ng—one of the world’s best-known AI researchers and a guy who’s building out what is likely one of the world’s largest applied AI projects—says we really ought to worry more about robot truck drivers than the Terminator.
In fact, he’s irritated by the discussion about scientists somehow building an apocalyptic super-intelligence. “I think it’s a distraction from the conversation about…serious issues,” Ng said at an AI conference in San Francisco last week.
In the last decade, technology has made it cheaper and easier to start new businesses, finance them, realize operational efficiencies and scale geographically. It has also empowered customers and employees through social media, which has created opportunities for competitors. While these changes have already been significant, coming advances in smart robotics, artificial intelligence, the Internet of Things and Big Data (the “AI Revolution”) will transform how businesses are staffed, operated and managed.
So, when a shopper passes by the kiosk, the digital signage, equipped with a freaky sort of Anonymous Video Analytics technology, zooms in on his or her face and instantly determines gender and age group to guess what products might exert some allure (hopefully it won’t scan your second chin and suggest half a South Beach Living Fiber Fit Bar … nothing else).
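To make the idea concrete, here is a minimal sketch of just the targeting step, assuming the camera pipeline has already produced an estimated gender and age (the actual anonymous-video-analytics face detection is outside this sketch). The bucket boundaries and product categories below are entirely hypothetical, invented for illustration:

```python
def age_group(age: int) -> str:
    """Bucket an estimated age into a coarse demographic group (hypothetical cutoffs)."""
    if age < 18:
        return "teen"
    if age < 35:
        return "young-adult"
    if age < 55:
        return "adult"
    return "senior"

# Hypothetical mapping from (gender, age group) to a promoted product category.
PROMOTIONS = {
    ("female", "young-adult"): "cosmetics",
    ("male", "young-adult"): "electronics",
    ("female", "adult"): "fitness snacks",
    ("male", "adult"): "fitness snacks",
}

def pick_promotion(gender: str, age: int) -> str:
    """Choose what the signage should display, falling back to a generic promo."""
    return PROMOTIONS.get((gender, age_group(age)), "seasonal specials")

print(pick_promotion("female", 29))  # looks up the ("female", "young-adult") entry
```

The point of the sketch is that the “smart” part is cheap: once the vision system emits two coarse labels, the rest is a lookup table, which is why this kind of signage can run on kiosk hardware.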
One possible view of the future:
The use of the term “Singularity” comes from physics, where it describes a point of collapsed space-time, typically at the center of a black hole; the underlying claim is that all of our knowledge of how the universe works is irrelevant within a Singularity. This is the root of the metaphorical use of the term: after a Singularity event, everything we know will change in ways we can’t now understand. In the early 1990s, mathematician and science fiction writer Vernor Vinge was the first to clearly articulate this usage of the term, and he begins his essay as follows:
Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.