All technology is an echo of the wishes of whoever built it. This isn’t just about AI; any technology you’re in danger of seeing as independent is simply the kind whose effect lingers longer after your button press.
No matter how complex its echo, technology always comes from and is designed by people, which means it’s no more objective than we are. This makes the very definition of algorithmic bias problematic. Algorithmic bias refers to situations where a computer system reflects the implicit values of the humans who created it. But doesn’t all technology reflect its creators’ implicit values? And if you think humans can be completely unbiased, take a stroll through …
Math can obscure the human element and give an illusion of objectivity. Sure, data and math can increase the amount of information you use in decision-making and save you from heat-of-the-moment silliness, but how you use them is still up to you. To put it another way:
Tragicomically, a layer of math and data spread on top of a host of utterly subjective choices obscures the ever-present human element and gives an illusion of objectivity. (Objectivity, in the philosophical sense, means being true independently of individual subjectivity caused by perception, emotion, or imagination.)
It’s much flashier to say, “The AI learned to do this task all by itself…” than to tell the truth.
Look, I know sci-fi sells. It’s much flashier to say, “The AI learned to do this task all by itself…” than to tell the truth: “People used a tool with a cool name to help them write code. They fed in examples they considered appropriate, the tool found some patterns in them and turned those patterns into instructions, and then the people checked whether they liked what those instructions did for them.”
The truth drips with human subjectivity: look at all those little choices along the way which are left up to the people running the project. Wrapping them in a glamorous coat of math doesn’t make the core any less squishy. When math is applied towards a purpose, that purpose is still shaped by the sensibilities of our times.
To make matters worse, the whole point of machine learning is to let you explain your wishes to a computer using examples instead of instructions. Which examples? Hey, that depends on what you’re trying to teach your system to do. Datasets are like textbooks for your student to learn from. Guess what? Textbooks have authors.
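To make the “examples instead of instructions” idea concrete, here’s a deliberately tiny sketch. Everything in it is invented for illustration: the “learner” just picks a threshold between the hand-labeled examples. The point is where the human shows up, not the algorithm.

```python
# A minimal sketch of "examples instead of instructions" (all data invented).
# A person picks labeled examples; the "learning" step turns them into a rule.

def learn_threshold(examples):
    """Pick the midpoint between the highest 'no' value and lowest 'yes' value.

    `examples` is a list of (value, label) pairs chosen by a person --
    that choice is exactly where human judgment sneaks in.
    """
    yes_values = [v for v, label in examples if label == "yes"]
    no_values = [v for v, label in examples if label == "no"]
    return (max(no_values) + min(yes_values)) / 2

# The "dataset" -- its author decided what counts as yes and what counts as no.
examples = [(1, "no"), (2, "no"), (8, "yes"), (9, "yes")]
threshold = learn_threshold(examples)  # 5.0 for the examples above

def predict(value):
    """The 'instructions' the system wrote for us from the examples."""
    return "yes" if value >= threshold else "no"
```

Swap in different examples and you get a different rule; the math never objected either way.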
Picture a photo of grocery-store shelves stocked with yellow bananas, and ask yourself what you see. Did you answer “bananas”? Why didn’t you mention the plastic bag roll? Or the color of the bananas? This example comes from Google’s AI Fairness materials and demonstrates that although all three answers are technically correct, for some reason we have a bias to prefer one of them. Not all people would share that bias; what we perceive and how we respond is influenced by our norms. If you live on a planet where all bananas are blue, you might answer “yellow bananas” here. If you’ve never seen a banana before, you might say “shelves with yellow stuff on them.” Also correct.
When I said that bias comes from people, some folks wrote to tell me that I’m wrong because bias comes from data. Well, we can both be winners… because data comes from people too.
Textbooks reflect the biases of their authors. Like textbooks, datasets have authors. They’re collected according to instructions made by people. Other design choices are also entirely up to humans, which is why it’s so important that you take care with every one of those choices and check your system’s behavior.
It’s safest to think of AI as a tool for writing code. Please don’t fall victim to the sci-fi hype. You already know that computer code is written by people, so it reflects their implicit values. Think of machine learning as an excellent tool for writing code, so the same basics hold. Remember to vet all technology carefully, especially when it scales.
Machines are just tools. They’re extensions of their creators, who, by the gory definition of bias (“when our past experiences distort our perception of and reaction to information”), are biased.
All of us have had our perceptions shaped by our past experiences, and all of us are products of our personal histories. In that sense, every single human is biased. Philosophical arguments invalidating the existence of truly unbiased and objective technology don’t give anyone an excuse to be a jerk. If anything, the fact that you can’t pass the ethical buck to a machine puts more responsibility on your shoulders, not less.
Sure, our perceptions are shaped by our times. Societal ideas of virtue, justice, kindness, fairness, and honor aren’t the same today as they were for people living a few thousand years ago, and they may keep changing. That doesn’t make the ideas unimportant; it only means we can’t outsource them to a heap of wires. They’re the responsibility of all of us, together. We should strive to do our best ethically and treat all people with the highest standards of respect and care.
The fact that you can’t pass the ethical buck to a machine puts more responsibility on your shoulders, not less.
You are responsible for how you use your tools and where you point them. Another choice is which data to use for your AI system. You should expect better performance on examples that are similar to what your system learned from. If you choose not to use data from people like me, your system is more likely to make a mistake when I show up as your user. It’s your duty to think about the pain you could cause when that happens. Yes, it takes effort and imagination.
Unfamiliar with analytics for comparing distributions? Start your journey by reading up on distribution plots, like the one I made of the output of a model on two different simulated datasets. At a bare minimum, I hope you’d have the common sense to check whether the distribution of your users matches the distribution in your data. For example, if 100% of your training examples come from residents of a single country, but your target users are global… expect a mess. Your system will treat that country’s residents less sloppily than everyone else. Is that fair? If you’re keen to take your fairness analytics to the next level, there are specialized tools out there to help you.
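That bare-minimum check can be done in a few lines. Here’s a hedged sketch (the country codes, counts, and the 0.5 ratio cutoff are all invented for illustration; real checks deserve proper statistics) that compares the share of each country in the training data against its share among users:

```python
# A bare-minimum distribution check: does the makeup of the training data
# roughly match the makeup of the user base? All data here is invented.
from collections import Counter

def proportions(items):
    """Map each group label to its share of the list."""
    counts = Counter(items)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Training data: almost everyone is from one country.
training = ["US"] * 98 + ["BR", "IN"]
# Actual users: far more diverse.
users = ["US"] * 40 + ["BR"] * 30 + ["IN"] * 30

train_p = proportions(training)
user_p = proportions(users)

# Flag any group badly underrepresented in training relative to the users.
# The 0.5 ratio cutoff is an arbitrary choice for this sketch.
underrepresented = [
    group for group in user_p
    if train_p.get(group, 0.0) < 0.5 * user_p[group]
]
```

Here `underrepresented` comes out as `BR` and `IN`: the mismatch the paragraph above warns about, made visible before anyone gets hurt by it.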
If you want to strive for fairness, work to annihilate your own ignorance of the consequences of your decisions. I’ve written a lot of words here, when I could have just told you that most of the research on bias and fairness in machine learning is about making sure that your system doesn’t have a disproportionate effect on some group of users relative to others. The primary focus is on distribution checks and similar automated analytics. The reason I wrote so much is that I want you to do even better. Automated distribution checks only go so far. No one knows a system better than its creators, so if you’re building one, take the time to think. Think about how your actions will affect people, and do your best to give those you’ll affect a voice to guide you through your blind spots.
This article’s title plays on a quote from the Wizard of Oz: “Pay no attention to that man behind the curtain.” With AI, the puppeteers come in all genders, but there is always a human behind the machine. The trouble begins when you forget that.