NIALL FERGUSON: Foreword to Genesis.

Genesis, Kissinger’s final book, was co-authored with two eminent technologists, Craig Mundie and Eric Schmidt, and it bears the imprint of those innovators’ innate optimism. The authors look forward to the “evolution of Homo technicus—a human species that may, in this new age, live in symbiosis with machine technology.” AI, they argue, could soon be harnessed “to generate a new baseline of human wealth and wellbeing … [that] would at least ease if not eliminate the strains of labor, class, and conflict that previously have torn humanity apart.” The adoption of AI might even lead to “profound equalizations … across race, gender, nationality, place of birth, and family background.”

Nevertheless, the eldest author’s contribution is detectable in the series of warnings that are the book’s leitmotif. “The advent of artificial intelligence is,” the authors observe, “a question of human survival. … An improperly controlled AI could accumulate knowledge destructively. … The convulsions that will soon bend the collective reality of the planet…mark a fundamental break from the past.” Here, rephrased for Genesis but immediately recognizable, is Kissinger’s original question from his 2018 Atlantic essay “How the Enlightenment Ends”:

[AI’s] objective capacity to reach new and accurate conclusions about our world by inhuman methods not only disrupts our reliance on the scientific method as it has been pursued continuously for five centuries but also challenges the human claim to an exclusive or unique grasp of reality. What can this mean? Will the age of AI not only fail to propel humanity forward but instead catalyze a return to a premodern acceptance of unexplained authority? In short: are we, might we be, on the precipice of a great reversal in human cognition—a dark enlightenment?

In what struck this reader as the book’s most powerful section, the authors contemplate a deeply troubling AI arms race. “If … each human society wishes to maximize its unilateral position,” the authors write, “then the conditions would be set for a psychological death-match between rival military forces and intelligence agencies, the likes of which humanity has never faced before. Today, in the years, months, weeks, and days leading up to the arrival of the first superintelligence, a security dilemma of existential nature awaits.”

If we are already witnessing “a competition to reach a single, perfect, unquestionably dominant intelligence,” then what are the likely outcomes? The authors envision six scenarios, by my count, none of them enticing:

  1. Humanity will lose control of an existential race between multiple actors trapped in a security dilemma.
  2. Humanity will suffer the exercise of supreme hegemony by a victor unharnessed by the checks and balances traditionally needed to guarantee a minimum of security for others.
  3. There will not be just one supreme AI but rather multiple instantiations of superior intelligence in the world.
  4. The companies that own and develop AI may accrue totalizing social, economic, military, and political power.
  5. AI might find the greatest relevance and most widespread and durable expression not in national structures but in religious ones.
  6. Uncontrolled, open-source diffusion of the new technology could give rise to smaller gangs or tribes with substandard but still substantial AI capacity.

Kissinger was deeply concerned about scenarios such as these, and his effort to avoid them did not end with the writing of this book. It is no secret that the final effort of his life—which sapped his remaining strength in the months after his 100th birthday—was to initiate a process of AI arms limitation talks between the United States and China, precisely in the hope of averting such dystopian outcomes.

Because otherwise, what could go wrong?

As Glenn wrote last year after ChatGPT reportedly passed the Turing Test, “The AI to really be afraid of is the one that deliberately fails the Turing Test.”

UPDATE: Kissinger’s final warning: Prepare now for ‘superhuman’ people to control Earth.

The authors offer a bracing message, warning that AI tools have already begun to outpace human capabilities and that people may need to consider biologically engineering themselves to avoid being rendered inferior to, or wiped out by, advanced machines.

In a section titled “Coevolution: Artificial Humans,” the three authors encourage people to think now about “trying to navigate our role when we will no longer be the only or even the principal actors on our planet.”

“Biological engineering efforts designed for tighter human fusion with machines are already underway,” they add.

Current efforts to integrate humans with machines include brain-computer interfaces, a technology the U.S. military identified last year as being of the utmost importance. Such interfaces create a direct link between the brain’s electrical signals and a device that processes them to accomplish a given task, such as controlling a battleship.

The authors also raise the prospect of a society that chooses to create a hereditary genetic line of people specifically designed to work better with forthcoming AI tools. The authors describe such redesigning as undesirable, with the potential to cause “the human race to split into multiple lines, some infinitely more powerful than others.”

“Altering the genetic code of some humans to become superhuman carries with it other moral and evolutionary risks,” the authors write. “If AI is responsible for the augmentation of human mental capacity, it could create in humanity a simultaneous biological and psychological reliance on ‘foreign’ intelligence.”

Such physical and intellectual dependence may create new challenges in separating man from machine, the authors warn. As a result, designers and engineers should try to make machines more human, rather than make humans more like machines.

Exit question: “Who will the Singularity eat first?”