Tuesday, January 25, 2022

How Powerful Would a Genuinely Thinking Computer Have to Be?

In discussing the prospect of a computer matching or exceeding human intelligence, we find ourselves forced to consider just how it is that we measure human intelligence. That is in itself an old and difficult problem, reflecting the reality that there remains considerable disagreement about what precisely human intelligence even is. However, one approach that has been suggested is to treat the human brain as a piece of computer hardware, and attempt to measure its apparent capacity by the yardsticks we commonly apply to computers. On that basis we can then identify the minimum hardware performance a computer would need in order to display human-like performance.

How do we go about this as a practical matter? By and large it has been standard to measure computing power in terms of the number of calculations a computer can perform per second. Of course, there are a variety of kinds of calculation, but in recent years it has been common to think specifically in terms of "floating-point operations," in contrast with simpler "fixed-point" operations. (Adding 1.0 to 2.0 to get 3.0 can be treated as a fixed-point operation, the decimal point sitting in the same place in all three numbers. However, adding 1.2570 to 25.4620 to get 26.719 is a floating-point operation, in that the decimal point falls in a different place in each of the numbers involved.) Indeed, anyone delving very deeply into the literature on high-end computers quickly encounters the acronym "FLOPS" (short for FLoating-point Operations Per Second) and derivatives thereof, such as the "teraflop" (a trillion FLOPS), the "petaflop" (a quadrillion FLOPS, or a thousand teraflops) and the "exaflop" (a quintillion FLOPS, or a thousand petaflops, which is to say a million teraflops).
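To make these units concrete, here is a minimal Python sketch (my own illustration, not anything drawn from the sources discussed here) that writes out the unit ladder as plain numbers and performs the floating-point addition from the example above:

```python
# The FLOPS unit ladder, written out as plain numbers (illustrative only).
TERAFLOP = 10**12   # a trillion floating-point operations per second
PETAFLOP = 10**15   # a quadrillion FLOPS, i.e. a thousand teraflops
EXAFLOP = 10**18    # a quintillion FLOPS, i.e. a thousand petaflops

# A floating-point addition: the decimal point falls in a different
# place in each operand, so the hardware must align exponents first.
print(f"{1.2570 + 25.4620:.4f}")   # 26.7190
print(EXAFLOP // PETAFLOP)         # 1000 petaflops per exaflop
print(EXAFLOP // TERAFLOP)         # 1000000 teraflops per exaflop
```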

With computers' performance measured in terms of floating-point operations per second, those speculating about artificial intelligence have attempted to equate the human brain's performance with a given number of FLOPS. Among others, Ray Kurzweil published an estimate in his 1999 book The Age of Spiritual Machines, since revised in his 2005 The Singularity is Near. His approach was to take a part of the nervous system, estimate its performance in FLOPS, and then extrapolate from that to the human brain as a whole. Working from the estimates that an individual synapse is equivalent in performance to a computer running two hundred FLOPS, and that the human brain contains some hundred trillion synapses, he conservatively arrived at a figure of some twenty quadrillion (thousand trillion) floating-point operations per second--twenty petaflops--then suggested that the brain may actually run at about half that speed, with ten petaflops sufficing.
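As a rough reconstruction of that back-of-envelope arithmetic (my own illustration of the round figures cited above, not Kurzweil's own presentation), in Python:

```python
# Kurzweil-style estimate, using the round figures cited above.
synapses = 100e12          # roughly a hundred trillion synapses
flops_per_synapse = 200    # each treated as a ~200-FLOPS processor

brain_flops = synapses * flops_per_synapse
print(f"{brain_flops:.0e} FLOPS")              # 2e+16
print(f"{brain_flops / 1e15:.0f} petaflops")   # 20
# The lower figure of ten petaflops is simply half of this.
```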

In considering this one should note that other analysts have used quite different approaches, from which they produced vastly higher estimates of the brain's performance. This is especially the case when they assume the brain produces consciousness not at the level of nerves, but at the level of quantum phenomena inside the nerves. (Jack Tuszynski and his colleagues suggested that not tens of quadrillions, but tens of trillions of quadrillions, of operations per second would be required.) Of course such "quantum mind" theories (whose best known exponent is probably The Emperor's New Mind author Roger Penrose) are extremely controversial--as yet broadly philosophical rather than scientific, with no empirical evidence in their favor, and indeed critics regard such notions as mystical in a way all too common when people delve into quantum mechanics. Still, the idea that Kurzweil's estimate of just how much computing power a human brain possesses may be too low by a couple of orders of magnitude is fairly widespread, with popular science articles commonly citing the figure as an exaflop (a thousand petaflops).
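For a sense of the gulf between these estimates, a quick order-of-magnitude comparison (my own arithmetic on the round figures cited above, taking "tens of trillions of quadrillions" as roughly 10^28 purely for illustration):

```python
import math

# Round figures from the discussion above, in FLOPS. The 1e28 value is an
# assumption standing in for "tens of trillions of quadrillions".
kurzweil_low = 1e16     # ten petaflops
popular_figure = 1e18   # one exaflop, as often cited in popular articles
quantum_scale = 1e28    # rough order of the quantum-mind-level estimates

gap_popular = round(math.log10(popular_figure / kurzweil_low))
gap_quantum = round(math.log10(quantum_scale / kurzweil_low))
print(gap_popular)   # 2  -> two orders of magnitude above ten petaflops
print(gap_quantum)   # 12 -> twelve orders of magnitude above ten petaflops
```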

Still, it can be said that over the past decade the most powerful supercomputers have repeatedly attained, and increasingly surpassed, the level suggested by Kurzweil. The Fujitsu "K" supercomputer achieved ten petaflops (ten quadrillion floating-point operations per second) back in November 2011. It also had a 1.4 petabyte memory, about ten times Kurzweil's estimate of the human brain's memory. Moreover, the K has been exceeded in its turn--by dozens of other supercomputers according to the latest (November 2021) edition of the TOP500 list of the world's fastest systems, in some cases by orders of magnitude. At the time of this writing the fastest appears to be another Fujitsu-built machine, Fugaku, with a performance of 442 petaflops--some forty times Kurzweil's estimate of human brain performance. And of course, present computer scientists have set their sights higher than that. Among their efforts is a joint project by the Department of Energy, Intel and Cray to build Aurora, which is intended to be an exaflop-level machine--as a matter of course, running a hundred times as many calculations per second as Kurzweil's estimate of the human brain's performance--while even that seems modest next to a report this very day that the I4DI consortium is shooting for a 64 exaflop machine by the end of the year (equivalent to some sixty times those higher estimates of the brain's performance, and over six thousand times Kurzweil's estimate).
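Putting the machines mentioned above side by side with the ten-petaflop estimate (figures as cited in this post; the exaflop and 64-exaflop entries are targets rather than existing machines), a short comparison:

```python
# Peak performance figures as cited above, in petaflops.
BRAIN_ESTIMATE_PFLOPS = 10   # Kurzweil's lower estimate

machines = {
    "Fujitsu K (2011)": 10,
    "Fugaku (2021)": 442,
    "Aurora (exaflop target)": 1_000,
    "I4DI 64-exaflop goal": 64_000,
}

for name, pflops in machines.items():
    ratio = pflops / BRAIN_ESTIMATE_PFLOPS
    print(f"{name}: {pflops} petaflops, {ratio:.0f}x the ten-petaflop estimate")
```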

Reading this one may wonder why the hypothesis that such a computer could match or exceed the brain's capacity has not already been tested, with results pointing one way or the other. The reality is that in practice supercomputers like these, which are as few as they are because they are so hugely expensive to build (Fugaku is a billion-dollar machine) and to run (their voracious energy consumption is a constant theme in discussion of such equipment), are normally used only by the biggest-budgeted researchers for the most computationally intense tasks, like simulations of complex physical phenomena such as the Earth's climate or the cosmos--or code-breaking by intelligence services. They have only rarely been available to artificial intelligence researchers. However, the recent enthusiasm for artificial intelligence has reportedly meant that the needs of artificial intelligence researchers have been cited as a factor in the development of the next round of supercomputers (not least because of the utility of AI in facilitating their work).

Especially with this being the case, it seems far from impossible that access to such machines will yield new insights into the subject--just as it was the availability of ever-faster computers that permitted the striking advances in areas like machine learning we saw this past decade. Indeed, even as the recent excitement over artificial intelligence turns into disappointment with the realization that the most-hyped applications (like Level 5 self-driving) are more remote than certain loud-mouthed hucksters promised, the continued expansion of computing power offers considerable grounds for not writing those prospects off just yet.
