Every time some genius decides to apply AI where it doesn’t belong, the world collectively rolls its eyes and puts another ballot in the AI-Is-A-Fad box.
Don’t read that dictionary; it’s not good for you. If your dictionary defines AI as magic or robots (or magical robots), of course you’ll be disappointed when it doesn’t deliver the cure to all that ails you. Let’s look at three common gripes using simple examples everyone can grasp.
A respectable software engineer once asked me with a straight face, “Can AI know that Canada is a country?”
Hold your horses there, cowboy. Let’s take a moment to think about how you know that Canada is a country. Someone told you that fact when you were little, you memorized it, and now you’re looking it up in your memory.
Meanwhile, in Canada… We can write the code that does that without AI—record the data in a table, then if someone asks about Canada, the program looks the word up and outputs the answer. What do you need AI for here? Nothing, that’s what.
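The lookup described above can be sketched in a few lines of Python. This is a hypothetical toy; the table contents and the function name are mine, not the article’s:

```python
# A plain lookup table: no AI needed to "know" that Canada is a country.
FACTS = {
    "Canada": "country",
    "France": "country",
    "giraffe": "animal",
}

def what_is(word):
    """Look the word up in the table and output the answer."""
    return FACTS.get(word, "no idea")

print(what_is("Canada"))  # country
```

Storing and retrieving a fact is ordinary programming; nothing here learns anything.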
If you expect that AI is magic, you’ll try to use it for everything. When your bosses find out how much effort you’ve wasted building a complicated solution to a problem a simple lookup could solve, it’s hard to blame them for thinking that AI is hype and nonsense.
AI is like medicine — it can be a life-changer to those who need it, but everyone else should know better than to snack on it out of boredom.
Don’t use AI to learn things you already know the rules for, especially things that are defined by human-made rules in the first place. Examples: How do we convert dollars to cents? Which symbol marks the men’s toilet and which the women’s? How do I indent C++ code? What’s the sales tax in Hawaii? Which boxing weight class may I compete in? Should I wear a mask into a bank?
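Rule-defined tasks like the first example can simply be coded directly. A minimal sketch (the function is a hypothetical illustration, not from the article):

```python
def dollars_to_cents(dollars):
    # 1 dollar = 100 cents, by definition. It's a human-made rule:
    # just write it down. No data, no training, no AI.
    return round(dollars * 100)

print(dollars_to_cents(19.99))  # 1999
```

When the rule is already written down somewhere, implementing the rule beats asking a machine to rediscover it.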
Pick your problem and your success metric before you collect any data or hire any PhD gurus. If you feel the need to apply AI somewhere — anywhere! — just because all your friends are doing it, you’re setting yourself up for failure.
Instead, start with a problem you care about, and if you can make it work without AI, so much the better.
My best friend is Canadian and she suggested I add this image to the article. I think maybe she’s cold.
In our quest to determine whether AI can know that Canada is a country, we established that a computer can store and retrieve such information without any fancy-pants AI. Our engineer friend wants to kick it up a notch: “Can a machine learn that all by itself?”
Whoa, what do you mean by “learn” and “all by itself”? Those words mean different things to different people. Let’s answer this version: “Can we expect a machine to reliably output the conclusion that Canada is a country if it never had access to the word Canada before?”
Hey people-who-can’t-read-Chinese, is 香蕉 a country? How about 英国? No, don’t go looking up the answer, that’s cheating. You have to learn this all by yourself, remember?
When you have no additional information, how could you possibly know the answer? Similarly, common sense should force you to suspect that AI can’t learn things if there’s no information to learn from. You’d be correct. AI is all about extracting patterns from information and using those patterns to automatically make a recipe for turning your next input (Canada) into an output (country). So let’s ask ourselves: what relevant patterns could our computer possibly use if it has never seen the word before?
If there was nothing to learn from, learning is impossible.
Even if we have some data, our algorithm might pull out patterns that give us a stupid recipe. Let’s imagine these are our training data: South Africa-country, hippopotamus-animal, frog-animal, Russian Federation-country, United States-country, cat-animal, United Kingdom-country, raccoon-animal, South Korea-country, New Zealand-country, butterfly-animal, giraffe-animal.
Before you’ve even finished reading the first pair, your AI algorithm has already digested them. It gives a satisfied burp and invites you to input your noun. Any guesses what it does when you show it Canada?
There are two loud patterns in these data. One is that all the countries have capital letters. If that’s the basis for the AI’s recipe, then “Canada” would be labeled correctly, but “canada” would not. What if the recipe were based on a different pattern?

Is this a Canada?

Did you notice all the countries have two-word names, while the animals are single words? Well, your algorithm did. It says that Canada is clearly an animal. Oh deer.
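To make the two loud patterns concrete, here is a toy sketch using the training data above. Both “recipes” are hand-written stand-ins for what an algorithm might extract; no real learning algorithm appears here:

```python
# Toy training data from the article's example.
training = {
    "South Africa": "country", "hippopotamus": "animal",
    "frog": "animal", "Russian Federation": "country",
    "United States": "country", "cat": "animal",
    "United Kingdom": "country", "raccoon": "animal",
    "South Korea": "country", "New Zealand": "country",
    "butterfly": "animal", "giraffe": "animal",
}

def recipe_capitals(word):
    # Pattern 1: every country in the training data is capitalized.
    return "country" if word[0].isupper() else "animal"

def recipe_word_count(word):
    # Pattern 2: every country in the training data has a two-word name.
    return "country" if len(word.split()) == 2 else "animal"

# Both recipes score 100% on the training data...
assert all(recipe_capitals(w) == label for w, label in training.items())
assert all(recipe_word_count(w) == label for w, label in training.items())

# ...but they disagree the moment we step outside it:
print(recipe_capitals("Canada"))    # country
print(recipe_capitals("canada"))    # animal
print(recipe_word_count("Canada"))  # animal
```

Both recipes fit the training set perfectly, which is exactly why the training data alone can’t tell you which one is safe to use on new inputs.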
Garbage in, garbage out.

It’s important not to lose your grip on common sense here; the basics of learning and teaching that apply to human students also apply to AI. If you give your students garbage textbooks, expect them to learn some garbage.
Simple solutions don’t work for tasks that need complicated solutions. So AI comes to the rescue with — surprise! — complicated solutions. That also means you should expect a tangled, complicated recipe whenever you point AI at one of those headache tasks. When you read the recipe it came up with for you… it’s unreadable.
Many people have a gut reaction to mystery and ambiguity: “Get rid of it! Simple or I don’t want it! I can’t trust it.”
Wishing complex things could be simple doesn’t make them so.

It looks like you’re stuck with two bad options: live your life solving nothing but the simplest problems, or progress beyond the low-hanging fruit but give up trust. Luckily, there’s another way.
Imagine choosing between two spaceships. Spaceship 1 comes with exact equations explaining how it works, but has never been flown. How Spaceship 2 flies is a mystery, but it has undergone extensive testing, with years of successful flights like the one you’re going on. Which spaceship would you choose?
You don’t need to understand how it works to check that it does work.
Testing takes effort, but it’s a lot easier than making sense of something so huge it makes you dizzy. It’s also a principle that we practice often — for example with medicine. Do you know how that headache pill works? Chances are you don’t, and for some painkillers even scientists aren’t sure of the details. The reason we trust it is that we carefully check that it does work. (Here’s my deeper discussion of testing as a basis for trust.)
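In that spirit, here is a minimal sketch of treating a system as a black box and trusting it through checks. The `mystery_model` function is a made-up stand-in for any opaque system whose internals we choose not to read:

```python
def mystery_model(word):
    # Pretend this body is unreadable; we only observe inputs and outputs.
    return "country" if word.strip().title() in {"Canada", "France"} else "animal"

# Trust comes from checking behavior, not from reading the internals.
test_cases = [
    ("Canada", "country"),
    ("canada", "country"),   # robust to casing
    ("giraffe", "animal"),
]
for word, expected in test_cases:
    assert mystery_model(word) == expected, f"failed on {word!r}"

print("All checks passed.")
```

The checks say nothing about why the system works; like the flight record of Spaceship 2, they only tell you that, on the cases you care about, it does.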
If you’re content with only solving easy problems, it’s okay to sit this AI thing out.