
AI Antihypist

I am trying to find the right word for my attitude towards contemporary artificial intelligence – I don't like the candidate term "AI skeptic", since "skeptic" has been usurped by under-informed science-deniers ("climate skeptic"). I am not denying the existence, or the widespread success, of AI in many fields. What I am opposed to is the hype surrounding the supposedly imminent takeover by AI-powered super-intelligences.

Statistics, pattern matching, and the automation of a great number of simple tasks are what present-day AI excels at. Software has beaten people at chess and Go who easily beat players who easily beat me at these strategy games. These are very impressive achievements, but I don't believe that a direct path leads from these points in technological advancement to us becoming mere menial workers in a society run by AI. I think "AI Antihypist" describes my position well.

I am thoroughly convinced that we are nowhere near the mythical singularity, the transition proposed by Kurzweil and his followers, where AI will become so smart that it can teach itself to become even smarter, leading to an avalanche-like, rapid self-improvement far above human capacities, in no time. I think this suggestion is an unintentionally funny mix of two American trains of thought: the naive belief in technology, and the deep-seated hope for salvation familiar from Christianity.

And I also strongly believe that there are very solid reasons of principle why we – or those of us who claim to try – haven't come close to simulating a human brain. My friend and colleague Dan Brooks and I think that the lack of attention to the leveled nature of neurobiology, and to the influence which the ecological situation an animal evolved in has on its neural information processing, are two of the major roadblocks towards such whole-brain simulations, and that they will not be overcome anytime soon (Stiefel, K. M., & Brooks, D. S. (2019). Why is There No Successful Whole Brain Simulation (Yet)? Biological Theory, 14(2), 122-130).

I am not convinced, however, that no tighter symbiosis between us and our computers will come about in the next decades to centuries. Humans are already learning to cope with the fine points of the behavior of computers, just as our paleolithic ancestors learned to read the subtle behaviors of their prey animals. A recent observation which made me reflect on this symbiosis again is the now-common suggestion of replies by email programs. I get a choice of answers which I could give to the message I just received from another human (or from his/her email program). Facebook offers such suggestions now as well, and I am sure many other communication programs do. One problem I have with these suggestions is that the only available options for the answers, should I be too lazy to actually think and type, are held in bland, soulless corporate speak.

There is only plastic emotion in these answers, completely un-Dionysian. Do the folks who develop this software think everyone else speaks like that, or wants to speak like that? Have they brainwashed themselves into speaking like that all the time? I would at least like to have options in the style in which real people speak. Is this too much to ask for?

Besides this unfortunate, boring style of communication pushed by the answer-suggesting software, I think the problem runs deeper. Self-domestication has seemingly finally reached our most human skill, language. We have built machines to take away almost all of the hunting, lifting, walking and climbing we do, and have made many of us lazy and physically weak in the process. Now we increasingly have software which aims to make us lazy and weak when it comes to language production. Are we witnessing a case of the eternal return here? Did we go from being apes which point at stuff, to William Shakespeare, back to only pointing (and clicking) at stuff?

Aboriginal Computer

AI will not take over by becoming supremely smart and unimaginably knowledgeable, and subsequently treating us like its naughty children, as Kurzweil and his fanboys seem to think. I doubt that there will be a ravaging Terminator, an AI-powered robotic Übermensch out to hunt and kill us all. Instead, the AI takeover might come in the form of turning us not just into physical slouches, but into mental ones as well, in the form of a hundred mental crutches used for every act, every day, even for language use.

There are evolutionary precedents for species being so good at outsourcing work that their own capabilities degenerated. A few hundred ant species take slaves – they raid other ants' nests, take workers or larvae, and drag them to their own nest, where the abducted ants feed the slavemakers' queen. Often the slavers are phylogenetically closely related to their slaves; this relationship is known as Emery's Rule. The pheromonal communication systems of slaves and masters must be similar for this type of outsourcing to work.

Taking things a step further are the degenerate slavemakers, which are completely dependent on their slaves. Temnothorax kraussei from Greece is one such species. It only has a few workers, which occasionally stage lazy slave raids, but mostly the queens of this species just kill the queen of another colony and let the workers of the deposed queen take care of her. These degenerate slavemakers have a very limited behavioral repertoire and are apparently completely helpless without their slaves (see Buschinger, A. 1989. Workerless Epimyrma kraussei Emery 1915, the first parasitic ant of Crete. Psyche (Cambridge) 96:69-74, and the very well curated AntWiki).

We might quite likely be headed that way, but instead of depending on robbing worker ants from the nests of other species, we build the helpers which make us degenerate ourselves. This projected version of the AI takeover has less aktschn (you can imagine how many Terminator-related jokes I got to listen to while living in Kahli-fornia in the early 2000s and sounding somewhat like Schwarzenegger) than the Terminator variant, and it is less pseudo-religiously enthralling than the singularity variant, but it is quite a bit more realistic: we are halfway there.


This week's music recommendation: TAB25. Spend 25 minutes chilling to this and thinking about something profound, about the most meaningful question you can come up with that has nothing to do with your own daily life or with what you saw on the news. Do you even have such a question in mind? Monster Magnet will help you find out: