With the screen behind him showing a photo of a goldfish trapped in a bowl, staring at a question mark, he made it clear what he thought the audience would become if they chose not to adopt AI in their lives.
“Do you want to be a goldfish?” he said. Those who stay away from AI and those who use it will grow to be as different as an ape and a human being in their intellectual capacities, he added.
As reported above by the Wall Street Journal, Masayoshi Son, the founder of SoftBank and its Vision Fund, believes that those of us who use AI will become like gods, while those of us who don’t use AI will become like goldfish (or apes, if you prefer) – at least relative to the AI-using gods.
Obviously, Son has a huge conflict of interest here when it comes to how he defines who exactly is a god and who is a goldfish. We could paraphrase his comments a bit more cynically as, “If you buy my products and services (e.g., the Arm Holdings IPO), you will be a god, and if you don’t, you’ll be a goldfish.”
But cynicism aside, it’s important to examine Son’s belief here because it’s not just his belief. It is a widespread cultural belief that machines will one day “surpass humans,” if they haven’t already. This unspoken belief and the vague sense of dread it elicits are evidenced by the Journal’s choice of headline.
Like most widely held beliefs, there is a grain of truth in it. Like most widely held beliefs, however, it is also largely unexamined.
Son invokes the specter of artificial general intelligence (AGI) as the looming threat that will soon “surpass” us. It’s important to distinguish AGI from the less controversial plain vanilla artificial intelligence (AI). AI is a legitimate field of scientific research. AI research leads to powerful productivity-enhancing technology. No one is arguing with that.
AGI, on the other hand, is being touted as something that will “soon surpass human beings.” Meanwhile, there is no agreed-upon definition of what AGI even is.
We have a vague idea that AGI means that a machine will be able to successfully perform the same tasks that humans do and will do so “autonomously.” Eventually, in an AGI world, the machines will program themselves without human intervention, and they will pursue non-human purposes.
In spite of these vague yet imagination-stirring ideas, there isn’t any agreed-upon definition of what AGI is, and there is no way to prove that AGI has been realized. The best definition of AGI we can come up with is that “a consensus of experts will know it when they see it.”
If that sounds religion-like, it’s because it is. The existence of AGI isn’t a provable scientific fact; it’s a religious belief. By “religious belief,” I mean an empirically unprovable belief with moral and ethical consequences.
I suspect that some of you will want to disagree with me that the existence of AGI is now and will forever remain empirically unprovable. I kindly ask you to submit to me an empirical test that will prove the existence of AGI and that isn’t: “a panel of experts will tell us when they see it.”
I did, by the way, ask ChatGPT this very question. Here’s what ChatGPT had to say (after much back and forth):
AGI is generally understood as the ability of an artificial system to perform tasks that typically require human intelligence, across a wide range of domains, with the same level of adaptability, learning, and generalization as humans. However, the specifics of this definition can vary, and the criteria for proving AGI can be challenging to establish definitively.
The complexity arises from the fact that human intelligence itself is a multifaceted and evolving concept, and replicating all of its facets in a machine is a formidable task. It is not that AGI cannot be defined; rather, it's that the definition and the criteria for achieving it are subject to ongoing debate and refinement within the field of artificial intelligence.
What a word salad. “AGI is generally understood as …,” “the criteria for proving AGI can be challenging to establish definitively,” and “the criteria for achieving it are subject to ongoing debate and refinement.”
At least it admitted that we don’t even understand what human intelligence is! How can anyone say that AGI is about to “surpass human intelligence” when we can’t even define human intelligence itself?
I replied, “ChatGPT for president! You’d make one hell of a politician, and we could really use a new candidate!”
In case anyone is wondering whether ChatGPT 4.0 just might be even slightly sentient, I think you’ll agree from the response below that its complete inability to understand irony puts a nail in that coffin.
Thank you for the vote of confidence! However, I must clarify that I'm just a computer program designed to provide information and assist with various tasks. I lack the human qualities, experiences, and abilities necessary to run for political office or serve as a political candidate.
How humble!
In addition to the existence of AGI being an empirically unprovable belief, it also clearly has moral and ethical implications because, as Son described, people who use AGI will be superior to those who don’t use it. They will be gods while the rest of us will be goldfish.
Since the only differentiating factor here between the gods and the goldfish is the use of AGI, we have to assume that it is the AGI itself that embodies the superiority. AGI is, in short, a “higher power” than us mere human beings. Masayoshi Son believes in a “higher power,” and he derives moral and ethical conclusions based on his belief in this higher power.
His belief in the existence of AGI is a religious belief.
Now don’t get me wrong, I’m all for religion. I am religious myself and I’m a firm believer that a good religion is a necessary and beautiful thing. The way that I see it, we can’t escape religious beliefs. Any and all “sense-making” is ultimately a faith-based act. To make sense of anything requires a framework in which said “sense” can even be defined. Belief in any framework necessitates faith. Faith in things unseen (and things unseeable) is at the heart of religion.
Self-described “devout atheist” Scott Galloway recently wrote a compelling piece where he examined his own beliefs and was honest enough to admit that his own modern religion isn’t producing the fruit of community for which he longs. I particularly appreciated this admission:
As I got older, I realized my belief that all “this” was nothing and then it exploded sounded no less bat-shit crazy than the parables about loaves and fishes.
Amen, brother.
I really don’t mind that Son has religious beliefs. I don’t even mind that his god is technology. I happen to not agree with his beliefs, and I don’t think that they make for a very good religion. I mean, if the rest of us are goldfish, can we be flushed down the toilet? Whatever the case, he’s entitled to his beliefs.
What I do mind is when a powerful person like Son doesn’t acknowledge the religious nature of his beliefs and pawns them off on the rest of us as inevitable and as “science.” That, my friends, is profoundly dangerous.
I’m singling out Son here because he’s one of the few tech billionaires who is callous and careless enough to make such grandiose statements, but I really don’t think that Son’s beliefs differ much from the rest of the billionaire tech class – or even much from today’s average college graduate.
We have come to believe culturally that we have no way of completely and accurately perceiving reality and, consequently, there really is no point in regarding anything as “real.” We no longer believe in the “real,” and so we are free to establish a new “higher power” called artificial general intelligence. We value AGI highly, and even though we don’t believe in anything “real,” we do believe that AGI is at least more real than us.
And boy do we value it. OpenAI, a company with 1,200 employees, was recently valued at $86 billion. There are entire countries with millions of goldfish – oops, I mean people – who are collectively “valued” at less than $86 billion.
The country of Burundi, for example, has about 13 million people and a gross domestic product of $3.1 billion. The current forward price-to-earnings multiple of the S&P 500 is 19. “Valuing” Burundi at 19 times GDP would be very generous because GDP isn’t even earnings; it’s more like total revenue.
So even generously valuing the future productivity and earning potential of the 13 million people of Burundi (and its resources) at 19 times GDP leaves us a valuation of just $58.9 billion, which is far short of the recent $86 billion valuation of OpenAI, a company of just 1,200 people (with almost no physical assets).
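The back-of-the-envelope comparison above can be sketched in a few lines of code. This is just the arithmetic from the text, generously treating Burundi's GDP as if it were earnings:

```python
# Apply the S&P 500's forward P/E multiple of 19 to Burundi's GDP
# (treating GDP, very generously, as if it were earnings) and compare
# the result with OpenAI's reported valuation.

burundi_gdp = 3.1e9        # Burundi's gross domestic product, USD
sp500_forward_pe = 19      # S&P 500 forward price-to-earnings multiple
openai_valuation = 86e9    # OpenAI's recent reported valuation, USD

burundi_valuation = burundi_gdp * sp500_forward_pe

print(f"Implied 'valuation' of Burundi: ${burundi_valuation / 1e9:.1f} billion")
print(f"OpenAI's valuation:             ${openai_valuation / 1e9:.1f} billion")
print(f"Shortfall:                      ${(openai_valuation - burundi_valuation) / 1e9:.1f} billion")
```

Even on these generous terms, a country of 13 million people comes up roughly $27 billion short of a company of 1,200.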
How did we get to this place where we value the “artificial” so highly over its real counterparts? Is this a viable basis for a stable culture, society, and economy?
I don’t think so. And notice that, once again, this belief that the “artificial” will soon surpass the “real” carries serious moral and ethical implications.
I said that belief in AGI was a religious belief, and it is, but I think an even more descriptive term would be that AGI is a brand – a particularly dangerous brand.
What makes it so dangerous is that it is a brand that says, “A few of us will soon have unimaginable power over the rest of you, and you will only know that we have it because we will tell you that we have it.”
Now that is a formidable brand! Moreover, media outlets love it because it constantly generates attention-grabbing headlines in our artificial click-bait worlds.
Frankly, it’s a ridiculous brand. AGI is the Wizard of Oz. It would be laughable were it not for the fact that so many people seem willing to believe it.
In an age where we congratulate ourselves for being increasingly liberated from quaint and backward religious beliefs, we are now being told that there is, in fact, a power greater than ourselves and that only a select few can truly know it and wield it. How quaint.
Don’t believe it. The existence of AGI is empirically unprovable, and belief in AGI has unpalatable moral and ethical implications with Grand Canyon-sized conflicts of interest. You can just politely close the door on the AGI proselytizers.
Better yet, ask them to provide you with empirical evidence of AGI that you can independently confirm, and ask them to disclose what benefits accrue to them should you believe in AGI. Then get back to enhancing your own productivity and creativity with the incredible new technologies being delivered by authentic AI research today.
Just remember that AI is technology to be used by you and for you, like any other technology. Forget AGI. It’s just bad religion.
AI is a map. You are the territory.
Explore,
Dr. Richard Smith
P.S. In my role as Chairman of the Board for the Foundation for the Study of Cycles, I’ve been devoting a lot of time lately to some exciting plans that I think will interest investors. Please sign up for the newsletter to find out more about what’s to come in 2023!