Saturday, April 9, 2022

The Fortieth Anniversary of the Fifth Generation Computer Systems Initiative--and the Road Ahead for Artificial Intelligence

This month (April 2022) marks the fortieth anniversary of the announcement of the Fifth Generation Computer Systems initiative by the Japanese government's Ministry of International Trade and Industry (MITI). These computers (said to represent another generation beyond first-generation vacuum tubes, second-generation transistors, third-generation integrated circuits, and fourth-generation Very Large Scale Integration (VLSI) circuits because of their use of parallel processing and logic programming) were, as Edward Feigenbaum and Pamela McCorduck wrote in Creative Computing, supposed to be "able to converse with humans in natural language and understand speech and pictures," and "learn, associate, make inferences, make decisions, and otherwise behave in ways we have always considered the exclusive province of human reason."
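For readers unfamiliar with the term, logic programming (the paradigm behind Prolog, on which the Fifth Generation project built) expresses computation as facts and inference rules rather than step-by-step instructions. The sketch below is my own illustration, not anything from the initiative or the article quoted above: a handful of made-up family-relation facts, one Horn-clause rule, and a naive forward-chaining loop in Python. (Prolog proper answers queries by backward chaining; forward chaining is simply the easiest way to show rule-based inference in a few lines.)

```python
# Minimal, illustrative sketch of logic-programming-style inference.
# Facts and rules are tuples; uppercase strings act as variables.
facts = {("parent", "taro", "hanako"), ("parent", "hanako", "kenji")}

# Each rule: (head, [body atoms]). Read: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
rules = [
    (("grandparent", "X", "Z"), [("parent", "X", "Y"), ("parent", "Y", "Z")]),
]

def is_var(term):
    return term[0].isupper()

def unify(pattern, fact, bindings):
    """Try to match one body atom against a known fact, extending the bindings."""
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern[1:], fact[1:]):
        if is_var(p):
            if bindings.get(p, f) != f:
                return None
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def forward_chain(facts, rules):
    """Apply every rule to the fact base repeatedly until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # Find every combination of known facts that satisfies the rule body.
            candidates = [dict()]
            for atom in body:
                candidates = [b2 for b in candidates for f in derived
                              if (b2 := unify(atom, f, b)) is not None]
            for b in candidates:
                new_fact = (head[0],) + tuple(b.get(t, t) for t in head[1:])
                if new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(forward_chain(facts, rules))
# Derives ("grandparent", "taro", "kenji") alongside the original facts.
```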

With this declaration coming from the very institution credited with being the "brains" behind Japan's post-war economic miracle, just as "Japan, Inc." was approaching the peak of its global prominence and power, not least on the strength of Japan's industrial excellence in computing, the claim--which amounted to nothing short of the long-awaited revolution in artificial intelligence being practically at hand--was taken very seriously indeed. Indeed, the U.S. and British governments, under administrations (those of Ronald Reagan and Margaret Thatcher, respectively) hardly enamored of MITI-style involvement with private industry, answered Japan's challenge with initiatives of their own.

The race was on!

Alas, it proved a race to nowhere. "'Fifth Generation' Became Japan's Lost Generation," sneered the title of a 1992 article in The New York Times, which went so far as to suggest that American computer scientists had cynically overstated the prospects of the Japanese government attaining its stated goal in order to squeeze a bit more research funding out of their own government. One may argue about the reasons for the failure, and their implications, but the indisputable, bottom-line fact is that computers with the capabilities in question, based on those particular technologies or any others, never happened. Indeed, four decades after the announcement of the initiative, after astronomical increases in computing power, decades of additional study of human and machine intelligence, and the extraordinary opportunities for training such intelligences provided by the broadband Internet, we continue to struggle to give computers real functionality along the lines that were supposed to be imminent forty years ago (the ability to converse in natural language, understand speech and pictures, make human-like decisions, and so on)--so much so that the burst of excitement we saw in the '10s about the possibility that we were "almost there" has already waned amid a great lowering of expectations.

In spite of the briskness of developments in personal computing over the past generation--in the performance, compactness and cheapness of the devices, the speed and ubiquity of Internet service, and the uses to which these capabilities have been put--it can seem that in other ways the field has been stagnant for a long time. Those first four generations of computing arrived within the space of four decades, between the 1940s and 1970s. Since the 1970s we have, while doing remarkable things with the basic technology, remained in the fourth generation for nearly twice as long as it took us to go from the first generation to the fourth. In the face of that discouraging fact one may conclude that we always will be. But I think that goes too far. If in 2022 we remain well short of the target announced in 1982, we do seem to be getting closer, however painfully slow the progress can feel, and I would expect us to keep doing so--though there is plenty of room for argument about how quickly we will get there.

For whatever it may be worth, my suspicion (based on how neural nets, after disappointing in the '90s, delivered surprising progress in the '10s when married to faster computers and the Internet) is that the crux of the problem is hardware: that our succeeding, or failing, to build sufficiently powerful computers will be the single most important factor in whether we build computers capable of human-like intelligence, because ultimately they must have the capacity to simulate it. This would seem to simplify the issue in some respects, given the steadiness of the growth in computing power over time. But it is, of course, uncertain just how powerful a computer has to be to do the job, and continued progress in the area faces significant hurdles, given the slowness of post-silicon computer architectures to emerge: the carbon-based chips that had looked like the logical successor are running "behind schedule," while more exotic possibilities like quantum computing, potentially far more revolutionary and looking more dynamic, remain a long way from being ready for either really cutting-edge or everyday use. Still, the incentive and the resources to keep forging ahead are undeniably there--and it may well be that, after all the prior disappointments, we have less far to go than we think.
