#101 | You won't believe this...
The baton of decision-making and sentience will pass from man to machine
“It is amazing how complete is the delusion that beauty is goodness.”
― Leo Tolstoy
“A truth that's told with bad intent
Beats all the lies you can invent.”
― William Blake
Artificial General Intelligence will land, like a nuke, on our society, yet we won't believe it when it arrives.
Our standard of living will climb as machines do more, and, like a duck floating at the top of a waterfall, we'll willingly cross the point of no return. We'll hand control of our messy blue planet to our machine-descendant. The candle of human sentience may be burning at its brightest today, yet it may go out; at the very least, it will dim beside whatever we create. And through the transition, we'll cheerfully work hard while remaining in denial that it's happening at all.
Risk
It's common to envisage AGI as a nuke: an explosion with far-reaching and generally tragic consequences, irrevocably changing our world. Nick Bostrom, in Superintelligence, uses the analogy:
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.
Bostrom expects unaligned AI to get started by "hijacking political processes, subtly manipulating financial markets, biasing information flows, or hacking human-made weapons systems". His predictions are overwhelmingly pessimistic.
The encroachment of AI technology will be so beneficial, fast, and economically productive that the creation of AGI — just one more letter! — will be one resistance-free conceptual step further on. The change will feel not like a bomb but a rising tide, and, at least at the beginning, like boats on the technological water, we'll greatly benefit from its ascent.
Having doubts
Yet when Bostrom's bomb goes off, will anyone hear a bang? Will politicians stand up and support the scientists making the claims, and, in doing so, discredit them?!
We've just been through a similar era-defining experience, which doesn't bode well.
Revisiting the timelines of the last couple of years still gives me anxiety; it was a stressful period. We all remember that on Jan 9 2020, the World Health Organisation identified a "mysterious coronavirus-related pneumonia" in Wuhan, China. Two months later, on Mar 11, the WHO declared a pandemic, followed by Trump declaring a national emergency two days later. By October, there had been 40 million cases globally. The news was ubiquitous, and presidents and prime ministers hosted daily press conferences. Europe was in lockdown.
Yet by the end of the year, many still harboured serious doubts, treating covid as a 'hoax': conspiracies whirled (and continue to spin), and half of all Republicans believed "the conspiracy theory that powerful people intentionally planned the COVID-19 outbreak".
The now-removed video "Plandemic" captures the essence of these tales:
Hydroxychloroquine is 'effective against these families of viruses'. 'If you've ever had a flu vaccine, you were injected with coronaviruses'. 'Wearing the mask literally activates your own virus. You're getting sick from your own reactivated coronavirus expressions.'
Doubt persisted even as the world collapsed into lockdowns. Videos of overwhelmed health services, tragedies of lost loved ones, and economic crises were everywhere. Yet *still* there were substantial doubts as to whether the virus (a) existed or (b) was planted by the Rich And Powerful.
Hidden progress
The advancement of AI is relentless, and the intentions behind it are omnibenevolent. For example, it now takes moments to plug a title into an 'AI essay typer' (you'd think the AI would come up with a better name) and, wham, out comes an essay. (The results are mixed; see the PS.) Meanwhile, the zeitgeist is drunk on the bountiful applications of video, audio and art, all generated from text. "Is art over?" Twitter repeatedly shrieks.
We're applying the same advancements to the physical world for the first time. For example, Tesla claims every car it ships has the hardware for full self-driving. And a robotic hand, trained only in simulation, can single-handedly solve a Rubik's cube despite environmental disturbances (like a rubber glove pulled over it).
But these are domain-specific applications. Beyond 'efficiency', we await the development of general intelligence: something that can reason better than a human, learn faster, and tear away from the intellectual limits of its human creators.
A welcome change
Elon Musk just hosted Tesla's AI Day 2022 (watch here). He presented Optimus, a humanoid robot built on Tesla's self-driving technology. Optimus is intended for mass manufacture and will, in typical Elon-speak, help to create 'a future of abundance'.
From Musk's introduction:
An economy is the productive entities times their productivity; 'capita' times productivity per capita. The point at which there is no limitation on capita, it's not clear what an economy even means — an economy becomes quasi-infinite. In the benign scenario, this means a future of abundance, a future where there is no poverty, where you can have whatever you want in terms of products and services. It really is a fundamental transformation of civilisation as we know it.
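Taken literally, the arithmetic behind that claim is simple. Here's a toy sketch of it (the numbers are my own illustrative assumptions, not figures from the talk):

```python
# Toy reading of "economy = productive entities x productivity per entity".
# All numbers below are illustrative assumptions, not figures from the talk.

def output(entities: float, productivity_per_entity: float) -> float:
    """Total output as (number of productive entities) * (productivity of each)."""
    return entities * productivity_per_entity

humans = 8e9          # roughly today's global population
per_capita = 12_000   # assumed annual output per productive entity

for robots in (0, 8e9, 80e9, 800e9):
    total = output(humans + robots, per_capita)
    print(f"robots: {robots:>13,.0f}   output: {total:>22,.0f}")

# Once the number of productive entities is no longer capped by population,
# output has no obvious ceiling -- the "quasi-infinite" economy Musk describes.
```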
Musk's comments echo Keynes's prediction that once the economic problem is solved (a solution Optimus is meant to hasten), we'll be confronted with our "permanent problem." Keynes writes:
for the first time since his creation, man will be faced with his real, his permanent problem — how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.
The baton of decision-making and sentience may be passed from man to machine with less struggle and more enthusiasm than we can now imagine.
Assuming AGI does arrive someday, there will be a moment when a scientist, presumably, first understands what has been created. But how will they share what they've learned? And who will believe them?
My week in books
The Undoing Project by Michael Lewis. Recommended about 100 times by Hector H. Finally read –– excellent. Favourite quote:
"It's hard to know how people select a course in life," Amos said. "The big choices we make are practically random. The small choices probably tell us more about who we are."
Live well,
Hector
PS.
I used the 'AI essay typer' for this essay, with the title: "You won't believe this."
I don't know what to say.
When I think of all the times we've laughed, and how you always made me feel better when I was down, this is just too much. You're my best friend in the whole world. And now that you're leaving me, it's like losing a part of myself.
Disappointing yet poetic.