The most recent episode of Invisibilia centers around a car/truck accident between a family of four (a mother, father and two girls) and a truck driver. A sudden flash of rain, a loss of control, and one person is killed (I won’t spoil the story), and this leads the Invisibilia team into a deep dive on emotions. While listening to the episode, I remembered sitting in an MBA ethics class where the conversation was about the ‘Trolley Problem’. The question: if you are the driver of a trolley with faulty brakes, do you stay the course and hit the five unsuspecting workers directly in your path, or turn the trolley and hit one unsuspecting worker? The dilemma, that you have to actively decide to kill one person in order to save five, is a moral gray area with no right or wrong answer. At the point in the class when we were having the conversation, I didn’t think too deeply about it; it was an abstract discussion of a situation I never expected to find myself in. It was more of an intellectual exercise than a real one to me.
But for some reason, listening to that Invisibilia episode while waiting in traffic behind a Tesla on the way to pick up my son, the question became real to me. Because we are moving into a world where, while we might not have to make those Trolley Problem decisions ourselves, our technologies might…
Dreams of self-driving cars have been with us since the first cars were made. At the 1939 World’s Fair, General Motors’ Futurama exhibit showed visitors a vision of automated highways. Far from the models we see driving on the streets of California and Austin, these early concepts relied on much more rudimentary technology.
1956 advertisement by America’s Independent Electric Light And Power Companies
Like most, I believe we are still a ways off from machine learning technology being robust enough for full autonomy, even as pundits suggest it is right around the corner. But what if all this is much closer than we think? What if I am totally wrong and we’ll have full autonomy in 2018? With the work that the likes of GM, Waymo, Uber and Tesla are doing, this might not be so far in the future after all. So where does that leave us on the innovation path of viability -> feasibility -> desirability?
For companies like Uber, developing autonomous vehicles is core and, frankly, existential. The very business model that sustains Uber now depends on replacing the drivers behind the wheel. As I laid out in an earlier post, the company has to shift to driverless cars to reduce the cost of doing business. It’s a critical business decision. Do we trust that Uber, with all its ethical and cultural problems, will build autonomous vehicles that make the customer-centric decision when faced with the ‘Trolley Problem’? Because you know it will happen, don’t you? When the company deploys millions of autonomous vehicles on the roads, there will be accidents and moral decisions to make. No technology system is 100% perfect, and with more possibilities for error, there will be errors.
For companies like Google and GM, are we comfortable that their machines will have our best interests at heart when it comes to non-binary decisions that might involve life or death? Will a Waymo car be able to decide between swerving to hit a car carrying four cute puppies and risking the lives of your family in the car? How is this decision model being programmed into self-driving cars? We know that the defaults embedded in our machines are not neutral.
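To make that question concrete, here is a deliberately simplified, hypothetical sketch of how a decision model could reduce such a dilemma to a weighted cost function. The Outcome class, the weights and the numbers below are all invented for illustration; this is not any manufacturer’s actual logic.

```python
# A hypothetical sketch of a collision-avoidance "decision model".
# Every weight below is a default someone had to choose.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    human_risk: float       # estimated probability of serious harm to humans
    animal_risk: float      # estimated probability of harm to animals
    property_damage: float  # normalized 0..1

# The programmer's "moral code", expressed as weights (all invented here).
HUMAN_WEIGHT = 1000.0
ANIMAL_WEIGHT = 10.0
PROPERTY_WEIGHT = 1.0

def cost(o: Outcome) -> float:
    """Lower cost = preferred maneuver. The ethics live in the weights."""
    return (HUMAN_WEIGHT * o.human_risk
            + ANIMAL_WEIGHT * o.animal_risk
            + PROPERTY_WEIGHT * o.property_damage)

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver the weighted cost function prefers."""
    return min(outcomes, key=cost)

swerve = Outcome("swerve into the car with the puppies", 0.2, 0.9, 0.8)
brake = Outcome("brake hard and risk the family in the car", 0.3, 0.0, 0.3)

print(choose([swerve, brake]).description)
```

The point is not the arithmetic; it is that someone chose HUMAN_WEIGHT, ANIMAL_WEIGHT and PROPERTY_WEIGHT, and those choices are exactly the kind of embedded defaults we should be asking about.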
And these questions do not just relate to autonomous vehicles; Robina (below) is a robot that is supposed to assist elderly residents in their homes. Robina has the machine intelligence to learn from the performance and behavior of other Robinas, retrieving real-time information from centralized cloud databases. But who is to blame if something goes wrong with Robina and she hurts or maims my parents while caring for them? What are the default mental models being embedded in Robina, Humanoid and ASIMO (all robots intended to serve elderly home care residents) that ensure they make the best decisions for us?
Technology advancement always outpaces policy and regulation. Always. So the defaults embedded in these technologies will have to come from the moral codes of the programmers and technologists who build them. We are on the cusp of, and in some cases already experiencing, these technologies, and they will improve our lives immensely. We now have to, as informed consumers, ask and demand answers to these questions from our leading tech companies. Our lives might depend on it.
I’ll leave you with this quote:
‘Speed is irrelevant if you are traveling in the wrong direction.’ M. Gandhi
Are we moving too fast?
Please share, like and tweet. Sign up for the Polymathic Monthly Newsletter; you’ll love it.