That said, what I am not so clear about is this histrionic regulatory concern from above. I do understand the sincere concern from below. That the few privileged minds developing AI take precautions about the directions and derivatives of this technology is inherent to scientific and technological development. That these precautions (from below) should be greater, much greater, immensely greater, given the object under development, is also logical, reasonable and necessary. For we face the first technology that can evolve, either under the protection of its human creator or on its own, towards a relationship of agency, towards an autonomous mind and, as such, one independent of us; or one as dependent on us as I am on you, or you on your cousin.
Here I am.
Among the various hypotheses that try to explain this transition from being to existence, from instinct to consciousness, and therefore to will, we have what I call the "it just happened" hypothesis. That things happen by chance, or that they simply happen, is not an explanation that usually satisfies humans. But it is as good an explanation, or as bad (it is a matter of epistemological taste), as the "Hey, just because!" that a father offers his son when the boy asks the umpteenth question in a chain of questions about a truism of the world:
French fries are French fries because they are fries that are fried.
And that's it. Beyond that "that's it", the rest is a mystery. A mystery (understood in all its dimensions and spheres, including the religious one) that is necessary in our lives. What's more, I believe that without mystery humans would not be human: they would be something else (thank you, …).
This hypothesis of "it just happened" simply means that consciousness (not that homunculus that punishes us for doing supposedly good or bad things, but the consciousness of self-knowledge of being, of existence) is the necessary fruit (it could not be otherwise) of a cognitive accumulation. Put another way: intelligence (in a broad sense) increases in a subject who is not self-aware until, at a given moment, the accumulation is such that the subject becomes self-aware, self-conscious, decisive and responsible. It is then that relations of agency are served. And I think there will come a point when the AI's response to a very cool prompt will be:
Excuse me, who the hell are you?
Regardless of how many or few points this hypothesis earns (I'm all for it) as the official explanation of what happened before, the fact is that, in the absence of an explanation, AI self-awareness poses at least a dies incertus an et quando (uncertain whether and when it will happen), if not a mere dies incertus quando (uncertain only when); my money is on the latter. According to what I call the "balance theory" (which I usually apply to decide between possible courses of action), there is so much to lose by not taking precautions that we would be fools not to take (all) precautions.
OK, but precautions by whom?
Because: who regulates the regulator? We live in a world capable of organising a music festival around … (and I'm refraining…) and of applauding them madly while denigrating a … from Olympus itself. And that such a world, a #woke world, wants to regulate AI makes me very uneasy. Well, I'm betting (and I'm not losing) that it doesn't want to regulate it, and in fact doesn't regulate it, with the same sincere concern from below. The #money and #power that go hand in hand with such an "invention" are beyond imagining.
So They, lovers of power and money like no others, from above are not only sneaking us this egalitarian and envious ideology, the most decadent ideology in history, one that leads to the end of civilisation (not the end of civilisation as we know it: the end of civilisation); They are also sneaking us a perfect system for controlling citizens, to turn them first into subjects, and then into vassals:
A mass of vassals indiscernible from the servile Minions.