The Singularity is Near: When Humans Transcend Biology (Published 26 Sep 2006)
For over three decades, Ray Kurzweil has been one of the most respected and provocative advocates of the role of technology in our future. In his classic The Age of Spiritual Machines, he argued that computers would soon rival the full range of human intelligence at its best. Now he examines the next step in this inexorable evolutionary process: the union of human and machine, in which the knowledge and skills embedded in our brains will be combined with the vastly greater capacity, speed, and knowledge-sharing ability of our creations.
"The Singularity is Near: When Humans Transcend Biology" Reviews
This starts with the thesis: Technological change is exponential!
This has been true for many measures such as micro-processor size, cost of mass-produced goods, etc.
It is not, however, a general rule of thumb to apply blindly to all things "technological"!
This seems to be Kurzweil's big mistake.
He extrapolates features of technology to an unrealistic infinity.
For example, Moore's law is running up against the quantum limit, so micro-processor scaling is exponential only up to a fast-approaching limit.
To take another example, the cost of an iPod may drop exponentially as you scale up production, but you can only sell so many iPods.
Once everyone has an iPod, you reach a saturation limit, and the price becomes stable.
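The reviewer's point, that a quantity can grow exponentially only until it hits a ceiling, is the difference between an exponential curve and a logistic (S-shaped) one. A minimal sketch in Python (function names and parameter values are illustrative, not from the book):

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """Pure exponential growth: keeps doubling forever."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    """Logistic growth: looks exponential early, then saturates at capacity K."""
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

# Compare the two trajectories over time.
for t in (0, 5, 10, 20, 40):
    print(f"t={t:>2}  exponential={exponential(t):>12.1f}  logistic={logistic(t):>6.1f}")
```

Early on the two curves are nearly indistinguishable, which is exactly why naive extrapolation of an early exponential trend overshoots once the limit (market saturation, quantum effects) kicks in.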
The major claim in the book is that brains will merge with computers.
Kurzweil argues that since transistors are faster than neurons, they will make better brains.
The fallacy is the assumption that you would ever WANT to build a brain out of transistors in the first place!
Neurons can network widely at a low price, but wide networks of transistors are slow and costly in power and heat.
No engineer would try to build a brain out of transistors.
The lesson here is that biological evolution, while its scope has been limited, will ALWAYS win in a contest with human engineers.
The better your artificial brain performs, the more it will look like a human brain.
This is not coincidence.
One important point made in the book is that our only problem need be to produce an intelligence greater than our own. Once this is accomplished, all other tasks can be left to that greater intelligence. While this is absurd applied to most practical problems like baking a cake, it makes some sense in the realm of AI. Here I disagree with Kurzweil, who asserts any such improved intelligence would be "non-biological". Heck, many parents achieve this goal by giving their child a good education!
Another thing he gets right is the demystification of Searle's Chinese Room thought experiment. Searle's objection to an artificial brain is wrong. Mind is platform-independent, but the port is yet to be written.
Ray makes the same tired arguments over and over, redundantly redundant.
You can almost hear him TRYING to keep alive his delusional dream of living forever!
Ray argues that a future AI will be produced with the ability to iteratively improve its own intelligence.
Among his many skills, he's an accomplished software engineer, so he should know better.
This will never, ever happen in this millennium.
Not even if computing resources increase by a dozen orders of magnitude.
Not even if we "reverse-engineer the human brain" (whatever that means).
I know this review is getting long, but he does make some speculations about the year 2010, most of which never came to pass. He predicts we will have virtual assistants that can look up movie actors, etc., responding to our vocal cues in virtual vision contact lenses. Instead, we have Wikipedia on our iPhones. Pretty far off the mark, if you ask me.
By far the most annoying thing Ray brings up no fewer than 10 times is that the speed of light may be "circumvented". I swear, there's nothing sacred to this man! He embarrassingly bungles an explanation of quantum entanglement, calling it "quantum disentanglement" and mistaking spin axis for wave function phase. Yikes.
This book is a big house of mirrors meant to disguise the lunacy of the thesis.
Don't get me wrong, the mirrors are interesting to look at in their own right:
Nanotechnology, genetic engineering, genetic algorithms, neural networks.
It's fun stuff!
But the singularity is not near, it is the delusion of an old man who would like very much to live forever.
Can this book ever get to the point? Is there a point? In the future, when machines begin to express human discernment and burn books, I'm sure this endless and gigantic tome of wordy lists and nerd-spooge will be set alight, or edited towards readability. Either is fine with me. I would love to read the executive summary of this, but this book is too long.
The Singularity, if you’ve never heard it, is a term given to a theoretical point in the future when our technology will have become so advanced (compared to today) that it becomes impossible to see beyond it or understand its ramifications.
For example, try to imagine a person with an IQ of 200. Not that difficult. Empathy is still valid at that point. The thinking of a 200 IQ person is qualitatively similar to that of a 100 IQ person but scaled up: faster, sharper, wider, deeper. A 200 IQ person would still use his human flesh to navigate the world and would still use our languages and institutions. He might struggle with loneliness. He might experience love.
Now try to imagine an entity with an intelligence equivalent to an IQ of 10000. Impossible. Would it – could it even – be housed in a human body? Such an intelligence would certainly have no more use for human language than humans have for the chirping of birds. We have as much chance of understanding this 10000 IQ intelligence as an ant has of reading Shakespeare.
And yet we can understand and ponder the steps leading up to the development of this super-intelligence. This is the task that Ray Kurzweil attempts with The Singularity Is Near, and the picture he paints is certainly an intriguing one.
In Kurzweil’s vision of the (not so far) future, we have transcended our mortal coils. We’ve moved beyond genetic mastery wherein we use tools like CRISPR to edit out the flaws in our genetic code. We’ve moved beyond nanobots in our blood, which are capable of capturing sensory input and motor output to immerse us in a virtual reality that is indistinguishable from the real. Instead, we’ve shed our weak flesh to merge with (and become) immortal machine super-intelligences, spreading through the solar system, the galaxy, and then the universe in a quest to reach ever new heights of knowledge, art, beauty, and creation. And through this all, claims Kurzweil, we maintain our humanity.
Pretty grand! But how realistic? Will this vision of the future come true?
Put simply: yes. True enough, Kurzweil’s time-frame is almost certainly too optimistic and the steps along the way will probably turn out differently than he imagines, but the end game? The singularity? This vision of ever increasing intelligence and complexity? Barring catastrophic human destruction, self-inflicted or otherwise, is there ANY reason to doubt it?
It is a completely indisputable, unambiguous statement to say that the history of our planet is one of escalating intelligence and complexity, and that the history of human civilization is one of escalating technological and scientific advancement. Meanwhile, history is full of breakthroughs skeptics claimed were impossible: Europeans crossing the Atlantic Ocean, heavier-than-air flight, space flight, nigh-instantaneous worldwide communication, the elemental analysis of ultra-distant stars and galaxies, etc. In 1934 Albert Einstein wrote, “There is not the slightest indication that [nuclear energy] will ever be obtainable. It would mean the atom would have to be shattered at will.” Mmhmm.
Even the numerous ultimately FAILED endeavors and companies that Kurzweil cites don’t render his general claims dubious. At one point, I estimated that roughly 80% of the companies or technologies he was citing had since failed. That seems like a slam-dunk case against his predictions. In fact, the opposite is true: his “Law of Accelerating Returns” suggests that companies fail and arise at ever-increasing rates. Consider the solar power industry:
I recently undertook some research in various solar power companies with the aim of investing in their stock. I have little doubt that solar power will play a major role in the future of power generation, if just because solar power follows the general trend of decentralization that seems to be happening in so many industries. Even so, after my research, I opted not to invest in any stock. Why not?
Because no solar company seemed a particularly good bet. Here’s the problem: A solar company will create (or acquire) a new process or method for more efficient and/or cheaper solar modules or cells. So they build a new factory (or retool an old one) to manufacture these new solar cells, a rather expensive investment. This puts the company, momentarily, in the lead. Consumers, corporations, and governments buy their stuff. However, technology is advancing so rapidly now that before this company can earn back its expensive investment, some other, better solar product reaches the marketplace. Everyone switches to this new product. So the first company goes bankrupt, and a new company acquires its expertise and technology and eventually puts out a newer, superior product. Thus the wheel turns.
So while the solar industry as a whole is progressing, no single company has yet managed to truly dominate the field, as Google has done with search engines. And I, at least, can’t predict which one, if any, will.
Point being, when looking at broad predictions of the future, we can’t let day-to-day or even year-to-year chaos and setbacks obfuscate the overall path. We must try to see the forest, despite the trees, and Kurzweil seems to do a decent job of this.
Nevertheless, I wasn’t a huge fan of The Singularity Is Near. Now, I certainly learned *a lot.* This book inspired me to do outside research on a slew of topics including wormholes, genetic engineering, the phases of clinical trials, and much more. And yet, the book as a whole lacks humanity. It explores these visions of the future without ever truly exploring how they will affect humanity – at the visceral, emotional, dramatic level of individuals. Kurzweil’s writings were less engaging than they might have been because they rarely afforded me the opportunity to hypothesize upon what I personally might do when faced with future ethical questions. In fact, the overall feel of the book is that it’s less about communicating with me as a fellow human being than it is about Kurzweil organizing his own thoughts and evidence on the matters he wishes to write about.
Such a lack is ironic because Kurzweil seems very concerned with countering the notion that our humanity might be lost when we escape our corporeal bondage. He finishes the prologue with a quote from Muriel Rukeyser: “The universe is made of stories, not of atoms” and the actual last line of the prologue is: “This book, then, is the story of the destiny of the human-machine civilization, a destiny we have come to refer to as the Singularity.”
If this was truly his intention – to tell this story – then I would say Kurzweil singularly failed.
So, some good, some bad. So much for the book review. To finish, I want to try to succeed, in at least one tiny area, where I feel Kurzweil failed. I want to talk about immortality and what it’ll mean to you and to me, personally.
The year is 2060. I am 75 years old. My parents are dead, as will many of those reading this right now. My nephews, who are 5 and 6 years old right now, will have become grown men, almost 50 years old themselves, maybe with kids of their own. At 75, I am extremely healthy. Not as spry as I am now, but like all those who can afford it, I undergo routine age extension treatments. The world government, which speaks English as the lingua franca, is constantly debating whether such age extension should be ‘nationalized’ but as yet it is not, and the biomedical company that provides these treatments is the wealthiest company in the world, with a name as recognizable as that of Google.
Speaking of, Google has far transcended its early century roots as a mere search engine. It is a massive artificial general intelligence whose intellect is well documented to be quite beyond even the smartest unmodified human. (And thanks to ubiquitous, routine pre-natal gene therapy, even the unmodified human is on average 5 to 10 IQ points smarter than today’s.)
When I wake up in the morning, I say to Alexandra, my own personal household AI, more family than slave and also one of my best friends: “Alex, you there?”
“Yep,” she says. “Wondering what’s going on with the immortality debates?”
“You know me,” I reply.
She sighs dramatically. “Lord what fools these mortals be!”
I laugh in agreement. “Mmhmm. Show me.”
Alex turns on the debates – held in a virtual building of course – regarding the impending immortality treatment. The world is vastly more peaceful than it ever used to be. Superior medicine and therapy techniques have reduced the effect of mental illness, while increased prosperity and education have slowly eroded the last bastions of fundamentalism, crime, and irrationality. But of course, it is not all gone, as the vociferous debates demonstrate. Immortality at our fingertips… and some still reject it.
Because, of course, they must. Immortality is one of my favorite topics to bring up to my students, and I’d say that, on average, more students REJECT the idea of immortality than embrace it. Their objections are many: overpopulation; that being immortal would be “boring”; or that death gives us meaning or otherwise motivates us. Rather abstract objections. I encourage them to think of immortality not as some idea far out in the future but as an imminent issue requiring real, practical decisions.
Consider a person in my above setting. He’s 110 and dying and there are no more technologies to stop it. What will HE think about these debates regarding immortality? Are they, in some sense, tantamount to murder? What must it feel like to WANT to live forever, to be so near, and be so afraid that you’ll miss your chance by mere days or months? Or consider that man’s wife, who may have been married to him for eighty years, and who does live long enough to avoid death.
I’m particularly interested to see how religious people will respond to the real possibility of immortality. Will they REALLY choose to die – so that they might enter heaven? Some might, but I doubt most will, no more than Christians who get cancer today concede, “Welp guess this is God’s will. I’ll let myself go.” No of course not. Most fight tooth and nail to live. And how will the Pope and other religious leaders respond to this? How will they re-interpret their various holy books to account for this change in mortal fortunes?
Consider fundamentalists who WILL choose to die rather than choose to be immortal. That’s their choice, okay, sure. But what if they choose it for their children? What if immortality involves maintaining a neural and genetic backup and some fundamentalist parents refuse to let their children maintain such backups – just as many parents now refuse to let their children be vaccinated? Is this child abuse? Is this murder? Do we FORCE it? Will those same people who are so against abortion suddenly go from “pro-life” to “pro-death”? [Hint: Yes they will, though it won’t be called ‘pro-death.’]
Where will YOU stand on these issues? Will you stick by your religious beliefs and take the gamble for an eternity of post-death paradise? If your honest answer’s no, what does that say about your beliefs now? If your answer’s yes, then will you teach the same to your children? If some accident happens and they die, what will you tell yourself? How will you deal with the guilt? And in a broader sense, will you vote for politicians who run on anti-immortality, pro-death platforms, knowing that such might deny us non-believers the chance to extend our lives and the lives of those we love?
What if, say, becoming immortal meant becoming permanently sterile, either for biological reasons or as part of a government-enforced agreement to deal with potential overpopulation? Is that an acceptable trade for you?
Or will you take the opposite tack? What arguments can you make to convince those who are anti-immortality? How will you deal with the pseudo-science they will inevitably find showing that the, say, consciousness transfer technology doesn’t REALLY work? That it’s been shown that the copy persona isn’t ACTUALLY the original persona? Will you test out your supposed new immortality by undertaking daring and otherwise fatal stunts, like leaping from a plane with no parachute, just to experience what it’s like? Or will you be too fearful and consider the idea utterly foolish, if not disrespectful?
Such conundrums – and the drama, humanity, bravery, hate, and love associated with them – constitute the real story of the topics that Kurzweil brings up. Perhaps I am asking for too much to have wanted him to try to capture all that in his non-fiction book. Luckily, science fiction offers a wealth of stories which do explore such drama. Just from my own collection:
Paolo Bacigalupi’s short story Pop Squad focuses on the conundrum of immortality & sterility/population control.
In Richard Morgan’s very noir Altered Carbon, a rich man commits suicide, and his backup hires a detective to figure out why.
The second of Arthur C. Clarke’s three laws is stated thus: The only way of discovering the limits of the possible is to venture a little way past them into the impossible. Kurzweil’s book, and the man himself, for all their faults, dare to do just that. He deserves at least a little applause.
[This review is part 2 of a small AI-focused reading study I undertook. The first book I read and reviewed was James Barrat’s Our Final Invention. The next book is Oxford Philosopher Nick Bostrom's Superintelligence]
Tired of sleeping peacefully? Do you feel a bit too contented and secure as you go about your daily business? Has your overwhelming sense of anxiety and ennui drifted to a mere background drone rather than an overpowering howl?
Then, dear friends, this is the book for you! Guaranteed to make you weep softly in the night as you clutch your knees to your chest! Certified to make you stop showering! Neglect your loved ones and friends because damnit what's the point!!?!?! Darkly contemplate your razor as you shave and wonder if you should indeed just end the charade.
If the current state of technology has you feeling a bit ambivalent, wait a decade or so: if half the shit in this book turns out to be correct, people will become freaking demigods.
Now if you'll excuse me, I have to go build a bomb shelter and make sure my crates of pork and beans, shotgun shells, and distilled water have arrived.
Kurzweil has made a living as a futurist and an inventor. Many of his inventions are the result of his predictions coming true, so there is good reason to listen to what he has to say on the topic.

The main idea is that the evolution of technology is not linear (as most people think) but exponential. This exponential development of key technologies leads to dramatic changes in human history over relatively short periods of time. Good examples include the internet and cell phones. The book focuses specifically on 3 key technologies that will produce the human "singularity", an event where humans transcend their former selves and become something more than human. These key technologies (known as GNR) are genetics, nanotechnology, and robotics (or artificial intelligence). When these 3 things progress and converge in the next few decades, we will see humanity benefit by eradicating disease, prolonging life expectancy indefinitely, and boosting human intelligence to astronomical levels through direct neural connections to computer hardware. Effectively, we'll become so smart we'll be able to outlive and outthink "normal" humans to an unimaginable degree.

He makes a compelling argument that the singularity is not a matter of "if" but of "when", and that we should be proactive in pursuing these technologies, not just for the benefit of humanity, but to keep amoral people from exploiting them for an unfair advantage. It's a fascinating read and worth digging into if you have any appreciation for science in general.