Book review: The Singularity is Nearer: When We Merge with AI, by Ray Kurzweil

posted: July 5, 2025

tl;dr: An AI virus risk worth taking?...

The most infamous and deadly scientific article of the 21st century so far is A Flu Virus Risk Worth Taking, an op-ed written by Anthony Fauci, Gary Nabel, and Francis Collins and published in The Washington Post on December 30, 2011. That op-ed argues in favor of creating more deadly viruses in the lab. When coupled with the subsequent actions taken by Fauci and Collins to fund death-maximizing virus research and distribute chimeric virus assembly technology to China, it almost certainly contributed to the creation of the COVID-19 pandemic, killing millions and disrupting life for nearly everyone on the planet.

But death-maximizing virus research is hardly the only new existential technological threat to human civilization. As I’ve pointed out in multiple posts including Mind the Gap and The Actual Origin of SARS-CoV-2, artificial intelligence (AI) is another. The threat from a rogue AI is also at the core of the plot of Mission: Impossible - The Final Reckoning, as it was in the 1968 film 2001: A Space Odyssey. I long for the days when all we had to worry about was a global nuclear war.

In The Singularity is Nearer: When We Merge with AI (TSIN), author Ray Kurzweil presents the techno-optimist case for artificial intelligence, just as Fauci and Collins presented the techno-optimist case for death-maximizing virus research. Kurzweil could easily be as wrong as Fauci and Collins, and I would urge readers to view TSIN skeptically. I recommend James Rickards's MoneyGPT: AI and the Threat to the Global Economy as a good counterbalance to Kurzweil's AI optimism.

[Cover of TSIN: a black background with the title, subtitle, and author's name in white lettering on the left, and multicolored rays of light radiating upward and downward from a point on the right]

Kurzweil started in AI in 1963, and TSIN covers developments up through 2023. Recent progress in the connections-based approach to AI has indeed been remarkable. Still, there are some crucial advances that Kurzweil is waiting for: a direct brain/computer interface, with Neuralink perhaps furthest along at present; and nanobots that would roam the human bloodstream, repairing and modifying organs to extend human lifespans. Kurzweil does expect humans to achieve some form of immortality, plus cognitive abilities (once human brains merge with AI) that can hardly be fathomed. Other advances Kurzweil foresees: abundant food thanks to vertical farming and lab-grown meat; replicants; AI-aided drug discovery to solve many diseases; universal basic income in the 2030s from taxing the AIs, to deal with the obsolescence problem of mere human-level intelligence; and self-improving AI that can write code to improve itself, at which point the AIs really do take over.

That is, of course, if AI doesn't destroy humanity first, either because of a bad actor who uses AI for nefarious purposes or (more likely) a series of unfortunate events. As a techno-optimist, Kurzweil fails to continually ask and answer the question "What could possibly go wrong?" Regarding COVID-19, Kurzweil mentions the possibility that SARS-CoV-2 may have been genetically engineered and escaped from a lab, but he doesn't dwell on it or ask what might happen if AI starts designing death-maximizing human viruses or computer viruses. He says the cat is out of the bag regarding genetically engineered viruses, and pushes AI-driven vaccines as the answer. Kurzweil is more concerned about nanobots destroying life by creating "gray goo", and his answer is, of course, more AI-based technology: "blue goo" that would detect and fight off the gray goo.

Scientists like Kurzweil can be extremely dangerous because they do not see all the downsides of the technologies they advocate. The track record of inventing new life forms is not a good one, as we saw with COVID-19. Kurzweil needs to spend more time contemplating how AI could be used to destroy the world, intentionally or unintentionally, and less time on how it could be used to grow food more efficiently. Kurzweil does advocate for strong international norms for AI ethics. But we don't even have that today for death-maximizing virus research, almost six years after SARS-CoV-2 appeared.

Given his advanced age, will Kurzweil ever write another book? I expect that a Kurzweil replicant will take over his book-writing duties at some point. Hopefully that replicant will produce many more books, since it will mean that we will not have been destroyed by the AI technology that Kurzweil so strongly advocates.