Metaphysics of an Egg Sorting Machine

by Temitope Ajileye, August 2nd, 2017

Chickens lay eggs in a continuous range of sizes, but we like to sell them in boxes of small, medium and large eggs.

What a tedious job it was, before the machines, for the poultry-men and poultry-women to sort hundreds of eggs every day! And yet they must have thought there was no other way.

“How lucky is the farmer, who uses mules and ploughs — they would say — and how lucky is the miller, whose job is done by the mill.”

“How else would we fill egg boxes with eggs of the correct size — they would conclude, nodding at each other — if we didn’t check them ourselves?”

For some this nod would come with the sad acceptance of being condemned to a Sisyphean task, for others with a sense of dignity in doing a job that resisted the machines.

As often happens, inspiration came from boredom. I like to think that, one morning, after the 200th egg of the day, one chicken farmer had the following realization.

“Yes, we humans are really good at sorting eggs, but we are not necessary at all! Nature itself can distinguish small, medium and large eggs. All we need to do is build a machine that allows nature to make the decision, in the same way that it decides that an apple should leave the branch it is attached to and fall to the ground.”

While pondering this proposition, he would hold a pair of eggs of different sizes, one in each hand, and rhythmically throw them up in the air. Given his experience, he could throw and catch them without looking.

Eureka! He would realize that the size of eggs was directly related to another property that machines could deal with more easily, their weight, and that there was a way to sort eggs without looking at them. He would then assemble an inclined plane, three seesaw swings, each with a different counterweight at one end, and a conveyor belt into the first ever Egg Sorting Machine.

The conveyor belt brought each egg to the swings in turn. Whenever an egg's weight was greater than the counterweight of the seesaw it was resting on, it would be deposited onto the inclined plane and roll into a group of similarly sized eggs.

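For readers who prefer the mechanism spelled out as a recipe, here is a minimal sketch in Python of the decision the machine delegates to gravity. The gram thresholds and the names are illustrative assumptions of mine, not measurements from any real machine.

```python
# A minimal sketch of the egg-sorting mechanism described above.
# The counterweights (in grams) are illustrative assumptions; they decrease
# along the belt so that only the heaviest eggs tip the first seesaw.
SEESAWS = [
    ("large", 63.0),   # first seesaw: heaviest counterweight
    ("medium", 53.0),  # second seesaw
    ("small", 0.0),    # last seesaw: tips under any remaining egg
]

def sort_egg(weight_grams: float) -> str:
    """Return the box an egg of the given weight rolls into."""
    for box, counterweight in SEESAWS:
        # The egg tips the seesaw as soon as it outweighs the counterweight,
        # then rolls down the inclined plane into the matching group.
        if weight_grams > counterweight:
            return box
    return "small"  # only reached if an egg somehow weighs nothing

if __name__ == "__main__":
    for egg_weight in (48.2, 57.5, 66.1):
        print(f"{egg_weight:5.1f} g -> {sort_egg(egg_weight)}")
```

Each call mirrors one pass along the conveyor belt: nature, in the form of gravity, makes the decision at every seesaw.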
Behold, artificial intelligence!

Not only was the machine doing a task only humans were deemed capable of, it was completing it with speed and accuracy far greater than a farmer could manage.

Some farmers would receive this machine with unrestrained enthusiasm, others with skepticism, disappointment and fear:

“What a devilish machine!”

“Unless it takes the eggs from the nest to the box, it is of no use to me.”

“You are going to put farmers out of work.”

“Hand sorted eggs are surely better, people will see the difference.”

“Machine sorted eggs will alienate farmers from their work.”

“What if the machine goes on a rampage? What if the controls break and the machine sorts eggs faster than you can unload it, the inclined plane collapses and you are killed by an eggslide?”

At this point you might be amused by the egg-sorting-machine doom-sayers. You might even admit the possibility of death by eggslide but still be perplexed by claims of this machine being intelligent.

In a similar way I view the unfolding of results in artificial intelligence and the debates that periodically arise around it.

An increasing number of people are worried about the moment machines will become superior to human beings, a predicted moment in history known as the technological singularity.

An industry leader was recently heard saying that AI is “a fundamental existential risk for human civilization” and that there is a concrete risk of “robots going down the street killing people”.

An important distinction needs to be made at this point. The menace of the singularity is not so much that machines could kill people in city streets or anywhere else — that has been happening since the first industrial revolution — but rather that they would have the will to do it on top of the capacity to do it effortlessly, much like bandits ‘going down the street’ in a Western movie, spreading violence and bullets against helpless peasants. The greatest fear is that the machines, even before subduing us physically, will have beaten us intellectually and will look at us the way we look at cockroaches today.

I do not dispute the possibility of individuals or groups of people falling victim to machines in the near future, in ways more or less spectacular. However, I believe such an event, however frightening, would not be so different in nature from a death by a falling rock, or an eggslide, and that claims of machines wanting to kill us are as intellectually valid as claims of the egg-sorting-machine wanting to kill the chicken farmer.

The debate around the promise and threat of AI to humankind is reignited every time a machine accomplishes a task we believed inherently human. This has been the case with the recent victories of AlphaGo, the DeepMind-built machine that has laid waste to the highest-ranking professional Go players in China, Korea and Japan.

Go had resisted the machines for two decades after the capitulation of chess and was, because of this, regarded as an archetypally human game. Go was just too big for conventional machines to tackle; what human intuition gathers instantly from a board position would take machines minutes to get to.

After the defeat of Lee Sedol in March 2016, the situation changed dramatically.

“Now that machines have surpassed us in Go — I have heard — it won’t be long till they surpass us everywhere.”

Similar reactions probably followed Deep Blue vs Kasparov in 1996, when a computer beat the reigning world champion in a regular game for the first time. A chess master who watched the game, on the 10th of February, described Kasparov in unsparing terms:

“Look at him, shaking his head under the cold, cold attack of the computer. I wish he could pull a rabbit out of his hat, but I’m afraid the rabbit’s dead.”

A computer beating humans, in a game which had been one of the battlefields of the Cold War, was news that promised a reckoning in human society, but today we play against chess programs on our smartphones that are greatly superior to the IBM machine and we do not feel intimidated in any way. In fact, we do not believe our smartphones to be smart at all, despite them being orders of magnitude more powerful and pervasive than the laboratory computers of the ’90s.

Some will argue that Alpha Go is a much more complicated machine than Deep Blue and that the two are incomparable.

It is undeniable that the Go-playing machine is much better than the chess-playing one, but they are not incomparable. We know exactly how much more complicated AlphaGo is than Deep Blue; in fact, we know how much more complicated than an egg-sorting-machine it is. What we don’t know, and what remains incomparable, is the distance between AlphaGo and the human intellect.

I believe every machine we have ever built to be just a variation of an egg-sorting-machine. Regardless of how many such machines we put together or how many layers we stack on top of each other, what we get is still an egg-sorting-machine, albeit a very complicated one.

As technology progresses, machines able to solve a problem or class of problems better than humans will keep appearing. It is not beyond reason that in our generation a machine will write an award-winning novel, without needing Borges’ infinite library to store the random aggregates of characters produced as a by-product.

However, the greatest divide that remains between us and the machines is not how correctly or how efficiently we can solve a given problem, but our ability to find problems in the first place. It is my opinion that this distance will not get any shorter, not because we will be able to conserve superiority in any given task, but because we won’t stop feeling boredom, irritation, stress, anxiety and anger.

I believe these conditions, commonly regarded as negative, to be the greatest catalysts for change, by forcing a feeling of unease and restlessness on us, thus making us hate status quos we once loved.

The automatic writer might be successful, but will it ever stop writing to reflect on itself and decide to pursue a new style? Why should it?

I cannot conclude without making the following admission. Badly hidden behind my argument is another outcome which, however improbable, I cannot exclude theoretically.

I firmly believe that increasing the technical complexity of our machines will not produce anything that separates itself significantly from the machines. However, what if we humans are reachable along the path of this growth? What if a sufficiently complicated machine will be indistinguishable from a human, not because machines will have jumped over the distance that separates us, but because the separation was never there? We ourselves would be self-replicating machines, built and left behind after a move, and our idiosyncrasies the results of molecular fluctuations.

If that is indeed the case, it is inevitable that a future iteration of the egg-sorting-machine will wake up one day and question itself over the necessity of sorting eggs or the necessity of sorting at all.
