The AI Hysteria Seen from the Philippine Countryside
Anyone reading the tech news, or just the news in general, could reasonably believe that the AI apocalypse is upon us: that hyper-intelligent computer algorithms will take over, controlling basically everything necessary for human life, a process most likely leading to the demise of humanity in the rather short term. The European Union is even already feverishly working on regulating AI (everything looks like a nail if you have a big hammer).
I am not convinced. Partly, I think, the constant warnings of the impending AI apocalypse are simply clickbait; even supposedly quality media outlets have become, for my taste, very click-baity in recent years. Partly the AI panic is driven by a lack of technical understanding among the journalists covering the topic: there is a complex interplay between intelligence, brains, and the computer algorithms meant to mimic or replicate what brains are doing. Together with my colleague Dan Brooks I previously published a theory paper about the so-far absence of a whole-brain simulation, despite persistent claims to the contrary. AI is different from brain simulations, but the two topics are related in that both live at this intersection of fields, and judging what has and what hasn't been achieved there is hard for the average journalist.
There are, of course, highly competent AI researchers who warn of the dangers of powerful AI. Possibly they understand some things I don't (am I wrong to question how much energy a super-AI would even use?). It's also possible that these serious-sounding warnings are at least partially motivated by the urge – now ubiquitous in high-level academia – for public exposure and attention. No longer being part of the academic grant hamster wheel circus sideshow, I don't need to participate in the hyping. While the urge to promote one's work in academia is very real, I don't want to suggest widespread dishonesty, and I think there is a better explanation for why I am not shaking in fear when contemplating machine learning/advanced statistics/"artificial intelligence" (the term itself is hyperbole, in my opinion).
Mainly, I believe my lack of faith in the upcoming AI Armageddon is informed by the way I live my life these days. I moved to the Philippines several years ago, largely driven by my interest in marine fishes and scuba diving. For the most part, I still love it here. I now have a lovely young family, and my spouse & son and I live in the countryside, 20 kilometers along the coast south of the province's capital, almost on the beach. We don't live in the Stone Age, obviously, but we live a much less gadget-infused life than the folks in San Francisco or Berlin who are currently freaking out about the evils of super-AI. We use computers and smartphones, but we spend a lot of time on the beach, in gardens, and on farms. I talk a lot to people who are not programmers or tech startup founders, but farmers and artisanal fishermen. While my neuroscience and neuromorphic engineering background is still alive and well in my head, my everyday life is embedded in the Philippine countryside. That changed my perspective.
A lot of the things that feature prominently in the warnings about the super-AI takeover just don't exist in our lives or in our wider surroundings. There are no self-driving cars here that could run amok; the traffic is chaotic enough with human participants only (and the occasional dog sleeping on the road). There is no nuclear power plant in the country (despite previous efforts to build one on a tectonic fault line) that the AI could melt down when it has a bad morning, and there are no intercontinental missile systems that an evil AI could fire on its own. Admittedly, another nation's misguided missiles, steered by a hypothetical satanic AI, could hit our lovely island, but why would they? While it's perfectly livable and charming here, there are no "high value" "strategic" targets anywhere nearby. No AI with superhuman intelligence would waste an expensive warhead on sugarcane fields.
We eat food which is, to a large degree, produced on the island where we live, or elsewhere in the country. There is no AI involved in getting the vegetables we cook from the farmers uphill from us to the market where we buy them. Yes, the supermarket we shop in uses computers, but I doubt it employs the most advanced AI to get cans of beans from Manila to our province.
The majority of the world's human beings live like us, or (often much) simpler, and I don't see how more powerful AI would eliminate them from the surface of the planet. The assumption that life is so interwoven with gadgets and algorithms that our lives are in danger should they go awry just doesn't hold for the majority of humanity. I just can't see how a chatbot that works somewhat better than the current generation will wipe us out, and that might have to do with the fact that I spend more time looking at fishes, my neighbor's chickens & our eggplants, and less time at screens.
The argument that AI is about to become an existential risk reminds me of the classic "underpants gnomes" episode of the American cartoon series "South Park". In this episode a tribe of gnomes steals underpants from the townspeople. In their lair they have a board with the master plan. It reads: Phase 1: Collect underpants. Phase 2: ? Phase 3: Profit. Phase 2 is just a question mark, a gap obvious to everyone but the gnomes. The arguments for how chatbots will lead to the extinction of the human race unless we panic now seem very similar in structure to the gnomes' reasoning.