
Is Your AI Expert Worth Listening To?

There is currently a lot of talk, social media posting, and public speaking about AI. It’s obviously a highly interesting technology with ample present-day uses and even more potential future uses. Multiple civilian and military applications of software have already been radically changed, and there is no question that AI is one of the key technologies of the early 21st century. However, there is no certainty about how much further AI will improve, and how much further its impact will go. This is rightfully a hotly debated issue. I just wish there were a more … sciency … approach to the important question of the future development of this branch of computer science.

It’s difficult for the non-expert to navigate this discussion, since even the experts seem to have lost their bearings at times. Hence, I’d like to give you a checklist of four fallacies and omissions relatively specific to the debate. Will 2025 AI turn into artificial general intelligence, or artificial superintelligence, or anything of that type, surpassing humans not in one specialized skill (like chess) but in all there is to reasoning about the world around us? I aim to explain some common problems in this discussion.

Ask yourself: does the expert you are considering putting some trust in fall for these two fallacies, and ignore the following two major issues inherent in present-day AI? Also, never ever forget: money in the bank account and fancy job titles at famous companies make no one an actual expert, in the sense of understanding a topic. So, let’s start with fallacy number one:

Child with an Insect Fallacy

The human brain doesn’t work like a digital computer, the type of computer manufactured for humans to run word processors on, browse the internet, and more. The digital computer is semiconductor-based, electronic, and runs serially. The brain is cell-based, runs on a variety of biological processes, and operates in a highly parallel manner. This is fascinating. It’s also old news. Even John von Neumann, after whom the architecture of the modern digital computer is named, pondered the meaning of these differences. There is a whole decades-old field of computer science, called “neuromorphic engineering”, which tries to use the principles of neuroscience to build better computers. I was in a lab focusing on neuromorphic engineering for three years during my time in Australia, and I had fascinating discussions with the neuromorphic engineers.

Now if your “AI expert” tries to sell you the brain-computer difference as his novel, groundbreaking, unique insight, this is a red flag. A red flag the size of the Chinese national flag on Tiananmen Square. “We have only recently understood how the brain computes. Implementing these radically different computing principles in AI will accelerate it into true AGI, within a matter of years, if not months ….” Babble like this must not be mistaken for an insightful discussion about the future of AI.

It doesn’t even matter if the “AI expert” is dishonest and knowingly tries to sell old news, or if he/she actually just learned a bit about neuroscience and wants to pass on this insight as if it were novel. The latter case gives the name to this fallacy: a child will always be absolutely enthused when he or she finds an insect. The child will believe that he or she is absolutely the first person ever to catch such an amazing critter. Credit for the name of this fallacy goes to my good friend Jay Coggan.

Underwear Gnome Fallacy

In an episode of the very funny (at least when I used to watch it, many years ago) cartoon series “South Park”, a species of tiny humans steals a lot of underwear. The kids, the main characters of the cartoon, then visit their lair and see a sign: “1. steal underwear 3. profit”. The joke, of course, is that a crucial point, “point 2”, is completely missing, and that the connection between underwear stealing and profit is not obvious at all, if there is one.

In a lot of the debate about the future of AI, the argument goes: 1. continued development of the statistical techniques called AI, 3. superintelligence and super-computer world take-over. Ask yourself: how much of a “point 2” does your “expert” offer? How will this supposed transition come about? Is there a proposed process by which a qualitative jump in ability, willpower, and capability to rule the planet will happen? Don’t let an AI underwear gnome fool you.

OK, so we have covered two fallacies. There are also two major issues with contemporary implementations of AI which must not be ignored in any informed discussion of the topic. So ask yourself: does your expert/“expert” ignore …

Large Language Model Hallucinations?

What the large language models (LLMs) that are the main pillars of modern AI do is extrapolate, in a very high-dimensional space, between the vast amounts of text picked up from the internets. Any response they give is a clever mapping from the question/input to the right spot in that high-dimensional space. But sometimes, the spot in the high-dimensional space to which the LLM extrapolates is devoid of a real, true solution. As a consequence, LLMs also hallucinate: they make stuff up. They respond with some likely, but factually false answer, which lives in the spot in the high-dimensional space of knowledge to which the LLM extrapolated. This effect is very well documented. LLMs have made up case law, scientific papers, and lots of other things.
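To make the idea a bit more concrete, here is a deliberately crude toy analogy, and nothing more than an analogy: real LLMs are not nearest-neighbour lookups, and the “facts” and random vectors below are made-up placeholders. The point it illustrates is simply that a model which always answers with the closest thing it has stored will return a confident answer even when the query lands nowhere near anything it has ever seen.

```python
# Toy analogy (NOT a real LLM): a "model" that answers by finding the nearest
# stored fact in a vector space. It always returns *something*, even when the
# query is far away from everything it knows -- the flavour of failure behind
# hallucinations.

import numpy as np

rng = np.random.default_rng(0)

# Pretend each known fact lives at some point in a high-dimensional space.
facts = {
    "Paris is the capital of France": rng.normal(size=64),
    "Water boils at 100 C at sea level": rng.normal(size=64),
    "Gobies and pistol shrimp can live in symbiosis": rng.normal(size=64),
}
names = list(facts)
vectors = np.stack([facts[n] for n in names])

def answer(query_vector: np.ndarray) -> str:
    """Always returns the nearest stored fact -- never says 'I don't know'."""
    distances = np.linalg.norm(vectors - query_vector, axis=1)
    return names[int(np.argmin(distances))]

# A query that corresponds to no stored fact at all still gets a confident answer.
nonsense_query = rng.normal(loc=5.0, size=64)
print(answer(nonsense_query))
```

The real mechanism is vastly more sophisticated, but the failure mode is the same in spirit: the machinery always lands somewhere in its space and produces a fluent answer, whether or not a true answer lives there.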

In a recent example in my personal little world, the X AI engine, Grok, made up a symbiosis between a fish and a shrimp, when I asked it about a particular goby:

This is another really funny example, LLMs making up phrases:

Hallucinations seem to be an inherent property of LLMs, and there is no general or practical way of automatically detecting them (and correcting for their presence). Any use of LLMs will come with completely made-up results. It’s possible to write about the future of AI without discussing LLM hallucinations, but any general, wide-ranging prediction should have a really good solution for this issue at hand. Does the AI expert you are considering trusting have a good solution for the LLM hallucination issue?

Credit for bringing the severity of this issue to my attention goes to my friend Arthur Flexer.

Escalating AI energy use?

Put your hand on the bottom of your laptop. If it has been running for a while, it’s probably warm. Computation uses energy, which turns into heat. A lot of computation uses a lot of energy.

My friend Jay Coggan, the same guy whom I mentioned above, and I published a paper in 2023 which outlines the core of this problem. Our argument is that human-brain-level intelligence would take more energy than the US (pre ongoing economic woes) produces, by several orders of magnitude. Even if some other dudes don’t share our rather pessimistic outlook, the fact that more advanced AI will use huge amounts of energy is clear and well established. How will many, many instances of such energy-sucking AI engines take control of the planet if the current power output of human civilization can’t remotely fuel them? Again, anyone with a far-ranging optimistic prediction for the glorious future of AI needs to have some kind of explanation here.
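To get a feel for the scales involved, here is a back-of-envelope sketch. It is not the calculation from our paper; the per-GPU power draw, the cluster size, and the US generation figure are rough, assumed round numbers, purely for illustration.

```python
# Back-of-envelope sketch of the energy-scale argument.
# All numbers are rough, assumed round figures for illustration only.

BRAIN_POWER_W = 20            # a human brain runs on roughly 20 W
GPU_POWER_W = 700             # a modern datacenter GPU draws on the order of 700 W
GPUS_PER_CLUSTER = 20_000     # assumed size of one large AI training cluster
US_AVG_GENERATION_W = 5e11    # US electricity generation averages out to very roughly 500 GW

cluster_power = GPU_POWER_W * GPUS_PER_CLUSTER   # about 14 MW for one such cluster

print(f"one assumed cluster: {cluster_power / 1e6:.0f} MW "
      f"(~{cluster_power / BRAIN_POWER_W:,.0f} human brains' worth of power)")

# How many such clusters could the entire US grid run, if it powered nothing else?
print(f"the whole US grid could power at most ~{US_AVG_GENERATION_W / cluster_power:,.0f} such clusters")
```

Even with these generous round numbers, a single cluster already burns hundreds of thousands of brains’ worth of power, and the whole grid caps out at a few tens of thousands of clusters, while powering nothing else.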

So, here you have it. Two fallacies, and two serious issues quite likely capping the future development of AI in a fundamental manner. Anyone falling for the fallacies and ignoring these issues is probably an AI “expert”, not an expert. Turn on your bullshit meter. Be cautious with hyperbolic claims in any public discussion you follow, but especially so when it comes to AI.

Below: My dive computer, a Heinrichs Weikamp OSTC. The computer calculates how long I can stay underwater, and how deep, without suffering adverse effects after resurfacing. Pretty sweet! And all of that is calculated with pretty straightforward, conventional algorithms. A blenny decided to hang out under the dive computer when I took this shot.

Dive Computer Blenny
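For the curious, here is a toy sketch of the kind of conventional, exponential gas-loading math that decompression algorithms (Haldane/Bühlmann-style tissue compartments) are built on. The half-time and the a/b limit values below are illustrative placeholders, not the coefficients the OSTC actually uses, and a real dive computer tracks many compartments over the whole dive profile.

```python
# Toy sketch of a Haldane-style tissue-loading calculation: the kind of
# plain, conventional arithmetic decompression models are built on.
# Half-time and a/b values are illustrative placeholders, NOT the OSTC's.

import math

SURFACE_PRESSURE_BAR = 1.0
N2_FRACTION = 0.79  # breathing plain air

def ambient_pressure(depth_m: float) -> float:
    """Ambient pressure in bar at depth (roughly +1 bar per 10 m of seawater)."""
    return SURFACE_PRESSURE_BAR + depth_m / 10.0

def tissue_loading(p_start: float, depth_m: float, minutes: float, half_time: float) -> float:
    """Haldane equation: tissue N2 pressure approaches the inspired pressure exponentially."""
    p_inspired = N2_FRACTION * ambient_pressure(depth_m)
    k = math.log(2) / half_time
    return p_inspired + (p_start - p_inspired) * math.exp(-k * minutes)

def ceiling_depth(p_tissue: float, a: float, b: float) -> float:
    """Bühlmann-style tolerated ambient pressure, converted into a ceiling depth in metres."""
    p_tolerated = (p_tissue - a) * b
    return max(0.0, (p_tolerated - SURFACE_PRESSURE_BAR) * 10.0)

# Example: a "fast" 5-minute compartment after 25 minutes at 30 m (placeholder a/b values).
p0 = N2_FRACTION * SURFACE_PRESSURE_BAR          # tissue equilibrated at the surface
p_after = tissue_loading(p0, depth_m=30, minutes=25, half_time=5.0)
print(f"tissue N2 pressure: {p_after:.2f} bar")
print(f"ceiling: {ceiling_depth(p_after, a=1.17, b=0.56):.1f} m")
```

A handful of exponentials and comparisons, evaluated on a tiny embedded processor: no neural networks required to keep a diver safe.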

 
