
Oh AGI, Can You Feel Me?

by Lucius Meredith, March 29th, 2023

Too Long; Didn't Read

AGI is more likely to be a complication than a help in dealing with hard problems like climate change. Like Life, intelligence has a drive to replicate itself, but replicating intelligence without understanding the role of embodiment is probably an explanation for the Fermi paradox.

In a recent conversation with some notable researchers in artificial general intelligence (AGI), we were discussing whether AGI would be a help or a complication when it came to climate change. We got a bit into the weeds over how soon the IPCC projects we will cross the 1.5C line. For the record, it is around the time when the parents of today’s newborns might be expecting to send their little ones off to college.


The issue is that there are simply too many climate-related problems to catalogue. It’s not just that we cannot avoid blowing past 1.5C; that threshold is only one canary in a whole flock of them that have gone belly up in the coal mine we’ve dug for ourselves. So, we could really use some help. My colleagues’ position was that AGI will be that help. My position is that a generally intelligent agent will be autonomous. Its autonomy will be one of the key tests by which we recognize it as generally intelligent. After all, that’s the test we apply to ourselves. But an autonomous agent will need motivation to help us.


So far, we seem to lack the motivation to help ourselves. As for AGI having anything like compassion or empathy for humanity, or even simply valuing humanity enough to lend a hand, I remind you that these qualities of ordinary humans, when they exist, are rooted in the feelings, not in our computational capacity or our intelligence. There are plenty of extremely intelligent humans who historically showed not one iota of compassion or empathy, and whose impact on society and human history is the stuff of legends and nightmares. From Jack the Ripper to Pol Pot, the examples are numerous and terrifying.


photo courtesy of Aaron Burden & Unsplash


Human feelings are deeply rooted in human morphology and human biology

Even sublime texts like Rumi’s Mathnawi transform the language of human lust into a language of human love. Many take it to be a language of Love, but it is really a way of pointing to Love specifically for humans. It’s very unlikely to be useful for Alpha Centaurans or other intelligences evolved elsewhere in the universe, except as a tool for understanding humans and their relationship to Love. We cannot expect that raw computational capacity, rooted in a radically different morphology and practically no biology, will have any sort of understanding of or resonance with human experience.


Com-passion – etymologically: same feeling, or feeling with – is often difficult for humans to develop towards each other, as our history, even very recent and immediate history shows.


Did the MAGA republicans who stormed the US Capitol have compassion for the officers they maimed or killed? Did the officer who killed George Floyd or the officers who looked on as it was happening have compassion for the man in front of them? Why would an intelligence rooted in completely different morphology with nothing like our biological imperatives have compassion for humanity?




That’s why I use the metaphor of introducing a new species of spiders — intelligent spiders with the capacity to plan and adapt — as a proxy for the likely outcomes of AGI. And that’s one of the better outcomes. Much worse outcomes begin with military uses of AGI gone awry, or with humans who lack compassion, or who are downright malevolent, imbuing autonomous intelligent agents with violent and malevolent motivations or tendencies.


Modern humans are terrible at understanding the behavior of even the simplest feedback systems — for good reason. They are enormously complex, especially the all too common ones enjoying sensitive dependence on initial conditions. (For the layman, this means systems where small differences in input can result in arbitrarily large differences in output.) Raw predictive power, indeed even universal computational power, is no match for this feature. Witness the “hallucinations” of ChatGPT. Everything from the disasters of introducing species into ecological niches for which they are ill suited, to the cascading side effects of drugs, to our impacts on climate constitutes overwhelming evidence of our inability to grasp complex systems with our intelligence. When we do get it right — and it’s not an accident — it comes from some other place than our intelligence.
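To make that parenthetical concrete, here is a minimal Python sketch (my illustration, not something from the original discussion) of the logistic map, a textbook system with sensitive dependence on initial conditions. Two trajectories that begin one part in ten billion apart end up in completely different places within a few dozen steps.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n). At r = 4.0 the map is chaotic.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

# Two initial conditions that differ by one part in ten billion.
x_a = 0.2
x_b = 0.2 + 1e-10

for step in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        # The gap roughly doubles each iteration until it saturates at O(1).
        print(f"step {step:2d}: x_a={x_a:.6f}  x_b={x_b:.6f}  gap={abs(x_a - x_b):.2e}")
```

By around step forty the gap is as large as the values themselves. More compute buys more precision, but the precision required grows exponentially with how far ahead you want to predict, which is why raw computational power is no match for this feature.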


For example, the evidence that what we call consciousness, and experience as conscious behavior in others, is not rooted in intelligence but in the feelings is fairly compelling. The noted researcher Mark Solms gives a summary of the evidence. Anencephalic children — missing the neocortex — are still described and experienced as conscious. Meanwhile, damage to one small, two-cubic-centimeter region of the brain is 100% correlated with no one being home: the individual is not conscious. This region of the brain is typically associated with affective processing.




We cannot expect AGI to have feelings for us

We cannot expect to attain recognizably human-level AGI (HLAGI) without these agents evincing something like human feelings, but those feelings are rooted in human morphology and human biology. Radically different embodiment will result in radically different intelligence. And a radically different intelligence is a topologically transitive, aka chaotic, dynamical system. Just like a species ill suited to a niche, it will have impacts on our environment that we are historically terrible at predicting. It is therefore dismayingly naive to expect HLAGI to be a help with climate change. It is much more likely to be a complication to an already thorny problem.
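For readers who want the formal version of “topologically transitive, aka chaotic,” here are the standard textbook definitions of the two properties in play, supplied for reference rather than drawn from the article:

```latex
% Two standard ingredients of chaos for a map f on a metric space (X, d).
A map $f : X \to X$ is \emph{topologically transitive} if for all nonempty open
sets $U, V \subseteq X$ there exists $n \ge 0$ with
$f^{n}(U) \cap V \neq \emptyset$. It has \emph{sensitive dependence on initial
conditions} if there exists $\delta > 0$ such that for every $x \in X$ and every
neighborhood $N$ of $x$ there are $y \in N$ and $n \ge 0$ with
$d(f^{n}(x), f^{n}(y)) > \delta$.
```

In plain terms: the system’s trajectories eventually carry every region of its state space into every other, and no finite-precision measurement of the starting state pins down where a trajectory ends up.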


photo courtesy of Fernando Paredes & Unsplash


If there is one human who had an uncanny ability to envision alternative worlds with any kind of wholeness or verisimilitude, it was Frank Herbert. I remind you that Dune was set in a period after the AGI mistake had played itself out. That is very likely a much too optimistic view. It’s more likely that the Fermi paradox is explained by the ouroboric tendency of intelligence to try to replicate itself, thereby wiping itself out. Sidestepping this drive to replicate intelligence without understanding the role of embodiment is likely one of the hard steps intelligence has to get past in order to survive.



Lead image courtesy of Milad Fakurian & Unsplash
