Our AI Odyssey


We need not wait for a science-fiction future; the age of AI is already upon us


Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher, The Age of AI: And Our Human Future, Little, Brown and Company, 2021.

CAMBRIDGE – An elder statesman, a retired Big Tech CEO, and a computer scientist meet in a bar. What do they talk about? Artificial intelligence, of course, because everyone is talking about it – or to it, whether they call it Alexa, Siri, or something else. We need not wait for a science-fiction future; the age of AI is already upon us. Machine learning, in particular, is having a powerful effect on our lives, and it will strongly affect our future, too.

That is the message of this fascinating new book by former US Secretary of State Henry A. Kissinger, former Google CEO Eric Schmidt, and MIT dean Daniel Huttenlocher. And it comes with a warning: AI will challenge the primacy of human reason that has existed since the dawn of the Enlightenment.

Can machines really think? Are they intelligent? And what do those terms mean? In 1950, the renowned British mathematician Alan Turing suggested that we avoid such deep philosophical conundrums by judging performance: If we cannot distinguish a machine’s performance from a human’s, we should label it “intelligent.” Most early computer programs produced rigid and static solutions that failed this “Turing test,” and the field of AI languished through the late 1980s.

But a breakthrough occurred in the 1990s with a new approach that allowed machines to learn on their own, rather than being guided solely by rules hand-coded from human-distilled insights. Unlike classical algorithms, which consist of steps for producing precise results, machine-learning algorithms consist of steps for improving upon imprecise results. The modern field of machine learning – of programs that learn through experience – was born.
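The distinction is worth making concrete. Here is a minimal sketch in Python – my own illustration, not drawn from the book – contrasting a classical algorithm, whose fixed steps produce a precise result, with a learning algorithm that starts from an imprecise guess and improves it with each example it encounters. The temperature-conversion task and all parameter values are invented for the example:

```python
# A classical algorithm: fixed steps, a precise result every time.
def classical_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# A machine-learning sketch: start from an imprecise guess and improve it
# by repeatedly shrinking the error on observed examples ("experience").
def learned_fahrenheit(examples, epochs=100_000, lr=0.001):
    w, b = 0.0, 0.0                      # imprecise starting guess
    for _ in range(epochs):
        for c, f in examples:
            error = (w * c + b) - f      # how wrong the current guess is
            w -= lr * error * c          # nudge the parameters in the
            b -= lr * error              # direction that reduces the error
    return lambda c: w * c + b

# The learner recovers the conversion rule from examples alone,
# without ever being told the formula.
model = learned_fahrenheit([(0, 32), (5, 41), (10, 50), (20, 68)])
print(round(model(37), 1))               # ~98.6, matching classical_fahrenheit(37)
```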

The technique of layering machine-learning algorithms within neural networks (inspired by the structure of the human brain) was initially limited by a lack of computing power. But that has changed in recent years. In 2017, AlphaZero, an AI program developed by Google’s DeepMind, defeated Stockfish, the most powerful chess program in the world. What was remarkable was not that one computer program prevailed over another, but that AlphaZero taught itself to do so. Its creators supplied it with the rules of chess and instructed it to develop a winning strategy. After just four hours of learning by playing against itself, it emerged as the world’s strongest chess player, beating Stockfish 28 times without losing a single game (the other 72 games were draws).
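The scale is incomparable, but the underlying idea of self-play can be shown in miniature. The following Python sketch – my own toy illustration, vastly simpler than AlphaZero’s deep reinforcement learning – gives a program only the rules of a Nim-like game (take one to three stones; whoever takes the last stone wins) and lets it discover a winning strategy purely by playing against itself:

```python
import random

N = 21  # starting pile size
# One value per (pile size, move): how good the move looks to the player making it.
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2, 3) if a <= s}

def choose(state, eps=0.1):
    moves = [a for a in (1, 2, 3) if a <= state]
    if random.random() < eps:                        # sometimes explore...
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(state, a)])   # ...otherwise exploit

for _ in range(100_000):                             # "experience": self-play games
    state, history = N, []
    while state > 0:
        action = choose(state)
        history.append((state, action))
        state -= action
    outcome = 1.0                                    # the player who moved last won
    for s, a in reversed(history):                   # credit moves, alternating sides
        Q[(s, a)] += 0.1 * (outcome - Q[(s, a)])
        outcome = -outcome

# The learned policy typically rediscovers the classic rule:
# always leave your opponent a multiple of four stones.
best = {s: max((a for a in (1, 2, 3) if a <= s), key=lambda a: Q[(s, a)])
        for s in range(1, N + 1)}
print(best[21], best[10], best[7])                   # typically 1 2 3
```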

AlphaZero’s play is informed by its ability to recognize patterns across vast sets of possibilities that human minds cannot perceive, process, or employ. Similar machine-learning methods have since taken AI beyond beating human chess experts to discovering entirely new chess strategies. As the authors point out, this takes AI beyond the Turing test of performance indistinguishable from human intelligence to include performance that exceeds that of humans.

Algorithmic Politics

Generative neural networks can also create new images or texts. The authors cite OpenAI’s GPT-3 as one of the most noteworthy generative AIs today. In 2020, the company released this language model, which trains itself by consuming freely available text from the internet. Given a few words, it can extrapolate new sentences and paragraphs by detecting patterns in sequential elements. It can compose new and original texts that meet Turing’s test of displaying intelligent behavior indistinguishable from that of a human being.
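A toy example conveys the generative principle, if nothing like GPT-3’s scale or architecture. The Python sketch below – my own illustration, with an invented miniature corpus – learns only which word tends to follow which, then extrapolates a new word sequence from a seed. GPT-3 does something far richer with a large neural network, but it too works by predicting the next element in a sequence:

```python
import random
from collections import defaultdict

# An invented miniature training corpus.
corpus = ("the age of ai is upon us . the age of ai will change how we think . "
          "machine learning will change how we work .").split()

# Learn the sequential pattern: which words follow each word, and how often.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(seed, length=12):
    """Extrapolate a new word sequence from a seed word."""
    words = [seed]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:                       # dead end: no observed successor
            break
        words.append(random.choice(options))  # sample the next element
    return " ".join(words)

print(generate("the"))  # e.g. "the age of ai will change how we work . the age"
```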

I know this from experience. After I prompted it with a few words, it drew on patterns learned from the internet and, in less than a minute, produced a plausible false news story about me. I knew the story was spurious, but then, I do not matter that much. Suppose the story had been about a political leader during a major election? What happens to democracy when the average internet user can unleash generative AI bots to flood our political discourse in the final days before people cast their ballots?

Democracy is already suffering from political polarization, a problem exacerbated by social media algorithms that solicit “clicks” (and advertising) by serving users ever more extreme (“engaging”) views. False news is not a new problem, but its fast, cheap, and widespread amplification by AI algorithms most certainly is. There may be a right to free speech, but there is no right to free amplification.

These fundamental issues, the authors argue, are coming to the fore as global network platforms such as Google, Twitter, and Facebook employ AI to aggregate and filter more information than their users ever could. But this filtering segregates users into social echo chambers that foment discord among groups. What one person takes to be an accurate reflection of reality becomes quite different from the reality that other people or groups see, reinforcing and deepening polarization. AI is increasingly deciding what is important and what is true, and the results are not encouraging for the health of democracy.

Cracking New Codes

Of course, AI also has huge potential benefits for humanity. AI algorithms can read the results of a mammogram with greater reliability than human technicians can. (This raises an interesting problem for doctors who decide to override the machine’s recommendation: will they be sued for malpractice?)

The authors cite the case of halicin, a new antibiotic that was discovered in 2020 when MIT researchers tasked an AI with modeling millions of compounds in days – a computation far exceeding human capacity – to explore previously undiscovered and unexplained methods of killing bacteria. The researchers noted that without AI, halicin would have been prohibitively expensive or impossible to discover through traditional experimentation. As the authors say, the promise of AI is profound: translating languages, detecting diseases, and modeling climate change are just a few examples of what the technology could do.

The authors do not spend much time on the bogeyman of AGI – artificial general intelligence, or software capable of any intellectual task, including relating tasks and concepts across disciplines. Whatever the long-term future of AGI, we already have enough problems coping with our existing generative machine-learning AI. It can draw conclusions, offer predictions, and make decisions, but it does not have self-awareness or the ability to reflect on its role in the world. It does not have intention, motivation, morality, or emotion. In other words, it is not the equivalent of a human being.

But despite the limits of existing AI, we should not underestimate the profound effects it is having on our world. In the authors’ words:

“Not recognizing the many modern conveniences already provided by AI, slowly, almost passively, we have come to rely on the technology without registering either the fact of our dependence or the implications of it. In daily life, AI is our partner, helping us to make decisions about what to eat, what to wear, what to believe, where to go, and how to get there…But these and other possibilities are being purchased – largely without fanfare – by altering the human relationship with reason and reality.”

The AI Race

AI is already influencing world politics. Because AI is a general-purpose enabling technology, its uneven distribution is bound to affect the global balance of power. At this stage, while machine learning is global, the United States and China are the leading AI powers. Of the seven top global companies in the field, three are American and four are Chinese.

Chinese President Xi Jinping has proclaimed the goal of making China the world’s leading AI power by 2030. Kai-Fu Lee of Sinovation Ventures in Beijing notes that with its immense population, the world’s largest internet market, vast data resources, and low concern for privacy, China is well placed to develop its AI. Moreover, Lee argues that having access to an enormous market and many engineers may prove more important than having world-leading universities and scientists.

But the quality of data matters as much as the quantity, as does the quality of chips and algorithms. Here, the US may be ahead. Kissinger, Schmidt, and Huttenlocher argue that, with data and computing requirements limiting the development of more advanced AI, devising training methods that use less data and less computing power is a critical frontier.

Arms and AI

In addition to the economic competition, AI will have a major impact on military competition and warfare. In the authors’ words, “the introduction of nonhuman logic to military systems will transform strategy.” When AI systems with generative machine learning are deployed against each other, it may become difficult for humans to anticipate the results of their interaction. This will place a premium on speed, breadth of effects, and endurance.

AI thus will make conflicts more intense and unpredictable. The attack surface of digitally networked societies will be too vast for human operators to defend manually. Lethal autonomous weapons systems that select and engage targets on their own will reduce the scope for timely human intervention. While we may strive to keep a human “in the loop” or “on the loop,” the incentives for preemption and premature escalation will be strong. Crisis management will become more difficult.

These risks ought to encourage governments to develop consultations and arms-control agreements; but it is not yet clear what arms control for AI would look like. Unlike nuclear and conventional weapons – which are large, visible, clunky, and countable – swarms of AI-enabled drones or torpedoes are harder to verify, and the algorithms that guide them are even more elusive.

It will be difficult to constrain the development of AI capabilities generally, given the technology’s importance and ubiquity in civilian life. Nonetheless, it may still be possible to do something about military targeting capabilities. The US already distinguishes between AI-enabled weapons and autonomous AI weapons: the former are more precise and lethal but remain under human control; the latter can make lethal decisions without a human operator. The US says it will not possess the latter type.

Moreover, the United Nations has been studying the issue of a new international treaty to ban such weapons. But will all countries agree? How will compliance be verified? Given the learning capability of generative AI, will weapons evolve in ways that evade restraints? In any event, efforts to moderate the drive toward automaticity will be important. And, of course, automaticity should not be allowed anywhere near nuclear-weapons systems.

The Leadership Lag

For all the lucidity and wisdom in this well-written book, I wish the authors had taken us further in suggesting solutions to the problem of how humans can control AI, both at home and abroad. They point out that AI is brittle because it lacks self-awareness. It is not sentient and does not know what it does not know. For all its brilliance in surpassing humans in some endeavors, it cannot identify and avoid blunders that would be obvious to any child. The Nobel laureate novelist Kazuo Ishiguro dramatizes this brilliantly in his novel Klara and the Sun.

Kissinger, Schmidt, and Huttenlocher note that AI’s inability to check otherwise clear errors on its own underscores the importance of developing testing that allows humans to identify limits, review proposed courses of action, and build resilience into systems in case of AI failure. Societies should permit AI to be employed in systems only after its creators demonstrate its reliability through testing processes. “Developing professional certification, compliance monitoring, and oversight programs for AI – and the auditing expertise their execution will require – will be a crucial societal project,” the authors write.

To that end, the rigor of the regulatory regime should depend on the riskiness of the activity: AIs that drive cars should be subject to greater oversight than AIs that power entertainment platforms like TikTok.

The authors conclude with a proposal for a national commission comprising respected figures from the highest levels of government, business, and academia. It would have the dual function of ensuring that the country remains intellectually and strategically competitive in AI, while also raising global awareness of the technology’s cultural implications. Wise words, but I wish they had told us more about how to achieve these important objectives. In the meantime, they have produced a wonderfully readable introduction to issues that will be critical to humanity’s future and will force us to reconsider the nature of humanity itself.

Joseph S. Nye, Jr., a former US assistant secretary of defense for international security, former chair of the US National Intelligence Council, and former under secretary of state for security assistance, science and technology, is a professor at Harvard University. He is the author, most recently, of Do Morals Matter? Presidents and Foreign Policy from FDR to Trump (Oxford University Press, 2020).

© Project Syndicate 1995–2021 
