Putting the “AI” in ThAInksgiving



Your holiday dinner table is set. Your guests are ready to gab. And, then, in between bites, someone mentions Alexa and AI. “What’s this stuff I’m hearing about killer AI? Cars that decide who to run over? This is crazy!”

Welcome to Thanksgiving table talk circa 2017.

It’s true that AI and machine learning are changing the world, and in a few years they will be embedded in nearly all of the technology in our lives.

So maybe it makes sense to help folks at home better understand machine learning. After all, without deep knowledge of current tech, autonomous vehicles seem dangerous, Skynet is coming, and the (spoiler warning!) AI-controlled human heat farms of The Matrix are a real possibility.

This stems from a conflation of the very real and exciting concept of machine learning and the very not real concept of “general artificial intelligence,” which is basically as far off today as it was when science fiction writers first explored the idea a hundred years ago.

That said, you may find yourself in a discussion on this topic during the holidays this year, either of your own volition or by accident. And you’ll want to be prepared to argue for AI, against it, or simply inject facts as you moderate the inevitably heated conversation.

But before you dive headlong into argument mode, it’s important that you both know what AI is (which, of course, you do!) and that you know how to explain it.

Starters

Might I recommend a brush-up with this post by our own Devin Coldewey, which does a good job of explaining what AI is, the difference between weak and strong AI, and the fundamental problems of trying to define it.

This post also provides an oft-used analogy for AI: The Chinese Room.

Picture a locked room. Inside the room sit many people at desks. At one end of the room, a slip of paper is put through a slot, covered in strange marks and symbols. The people in the room do what they’ve been trained to: divide that paper into pieces, and check boxes on slips of paper describing what they see — diagonal line at the top right, check box 2-B, cross shape at the bottom, check 17-Y, and so on. When they’re done, they pass their papers to the other side of the room. These people look at the checked boxes and, having been trained differently, make marks on a third sheet of paper: if box 2-B checked, make a horizontal line, if box 17-Y checked, a circle on the right. They all give their pieces to a final person who sticks them together and puts the final product through another slot.

The paper at one end was written in Chinese, and the paper at the other end is a perfect translation in English. Yet no one in the room speaks either language.

The analogy, created decades ago, has its shortcomings when you really get into it, but it’s actually a pretty accurate way of describing machine learning systems, which accomplish complex tasks by composing many, many tiny processes, each unaware of its significance in the greater system.
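If your table includes a programmer or two, the room can even be sketched in a few lines of code. This is a purely made-up toy (the symbols, rulebooks and output words are invented for illustration, not any real translation system), but it shows the key point: each worker follows one mechanical rule with no understanding of the whole, yet the chain still produces a sensible result.

```python
# Toy Chinese Room: each "worker" applies one tiny, mechanical rule.
# No single step understands the message; only the pipeline does anything useful.
# All symbols and rulebooks below are invented for illustration.

# Worker 1: divide the incoming paper into individual symbols.
def splitter(message):
    return list(message)

# Worker 2: check a box for each symbol, per the first rulebook.
CHECKBOXES = {"□": "2-B", "△": "17-Y", "○": "5-C"}
def checker(symbols):
    return [CHECKBOXES[s] for s in symbols]

# Worker 3: turn each checked box into an output word, per the second rulebook.
MARKS = {"2-B": "cat", "17-Y": "sat", "5-C": "here"}
def marker(boxes):
    return [MARKS[b] for b in boxes]

# Final worker: glue the pieces together and push them through the out-slot.
def assembler(words):
    return " ".join(words)

def room(message):
    return assembler(marker(checker(splitter(message))))

print(room("□△○"))  # → "cat sat here"
```

Nobody in this room "speaks" either language; each function just matches patterns against a lookup table, which is a fair cartoon of what the layers of a machine learning system do.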

Within that frame of reference, AI seems rather benign. And when AI is given ultra-specific tasks, it is benign. And it’s already all around us right now.

Machine learning systems help identify the words you speak to Siri or Alexa, and help make the voice the assistant responds with sound more natural. An AI agent learns to recognize faces and objects, allowing your pictures to be categorized and friends tagged without any extra work on your part. Cities and companies use machine learning to dive deep into huge piles of data like energy usage in order to find patterns and streamline the systems.

But there are instances in which this could spin out of control. Imagine an AI that was tasked with efficiently manufacturing postcards. After a lot of trial and error, the AI would learn how to format a postcard, and what types of pictures work well on postcards. It might then learn the process for manufacturing these postcards, trying to eliminate inefficiencies and errors in the process. And then, set to perform this task to the best of its ability, it might try to understand how to increase production. It might even decide it needs to cut down more trees in order to create more paper. And since people tend to get in the way of tree-cutting, it might decide to eliminate people.

This is of course a classic slippery slope argument, with enough flaws to seem implausible. But because AI is often a black box — we put data in and data comes out, but we don’t know how the machine got from A to B — it’s hard to say what the long-term outcome of AI might be. Perhaps Skynet was originally started by Hallmark.

Entrée

Here’s what we do know:

Right now, there is no “real” AI out there. But that doesn’t mean smart machines can’t help us in many circumstances.

On a practical level, consider self-driving cars. It’s not just about being able to read or watch TV during your commute; think about how much it would benefit the blind and disabled, reduce traffic and improve the efficiency of entire cities, and save millions of lives that would have been lost in accidents. The benefits are incalculable.

At the same time, think of those who work as drivers in one capacity or another: truckers, cabbies, bus drivers and others may soon be replaced by AIs, putting millions worldwide out of work permanently.


