After publishing his geopolitical forecast for the twenty-first century in The Next 100 Years (2009), George Friedman endeavored to provide a more detailed, explanatory and prescriptive discussion of his expectations for the years immediately ahead in The Next Decade (2011).
This being 2022, we can look back and consider how he did. My view is that while he made some good points in the book he did not do very well, particularly where his larger and more radical predictions were concerned. Consider the following:
* Friedman predicted that in the wake of the Great Recession the balance of power in economic life would shift back from the private sector to the public, with coming years looking something more like the post-war period. Of course, no such thing happened. The neoliberal model may stand in lower credit with the general public than ever, but it remains the conventional wisdom of business, government, academia, the media--and the few compromises required by policymakers hewing to the line have been slight indeed. (Britain, for example, may have left the EU--but the neoliberal standards of privatization, deregulation, profligacy with corporate welfare and stinginess with the general public, remain the order of the day, as does the foundation of economies on globalization and financialization.)
* Friedman envisioned the U.S. facing a rapprochement between Russia and Germany producing a Paris-Berlin-Moscow axis. Of course, the reality has been quite different.
* Friedman, who has never let go of his once headline-grabbing and since much-derided vision of the U.S. and Japan as geopolitical rivals (The Coming War With Japan), and who has long hewed to a pessimistic appraisal of China's prospects, once again predicted that China would prove an also-ran in the 2010s, and that the U.S. would become more concerned with checking Japan's power instead. Again, this is not exactly how things have gone.
As readers of this blog know, I think it simple-minded to sneer at prediction, and even forecasting, as so many do (with outrageous smugness). After all, as Nicholas Rescher made clear, we have no choice but to constantly make choices based on expectations of future conditions and the future outcomes of our choices, and most of the time we are correct. (For example, when we go to work in the morning we expect that our workplace will be there and be operative, and we are usually right about that.) In fact we take our capacity for correct prediction so much for granted that we are only aware of making predictions when we face the relatively small number of matters where prediction proves trickier. Still, the difficulty does not in itself give us an "out": we still find ourselves forced to choose, with our only recourse being to make as accurate a prediction as we can, not least by learning from our past mistakes in such situations. And for me that, and not the denigration of prediction, is the reason to revisit Friedman's work and consider where he went wrong--which, it seems to me, was in his simply reasoning from the wrong premises and indulging biases that had proved unhelpful in the past (his superficial grasp of political economy and his attitudes toward Germany and China, for example, and his endless attempts to rehabilitate his prediction about Japan). Alas, incorrect premises and problematic biases are not at all rare in the business--and unfortunately the media has been more inclined to simple-mindedly make such "experts" into authorities than to appraise what they have to offer.
Friday, January 28, 2022
Tuesday, January 25, 2022
Carbon Nanotube-Based Microchips: A (Very Short) Primer
As Moore's Law runs its course those looking forward to continued improvements in computing power necessarily turn their attention in other directions--not least, away from silicon toward other materials from which to make our integrated circuits. For decades one of the more promising possibilities has been the use of carbon nanotubes.
Carbon nanotubes are one of several materials formed from carbon atoms arranged in hexagonal (six-sided) rings, themselves arranged in a lattice, with each of the six atoms connected to and forming part of the adjacent rings. Left as a flat sheet, the resulting material is called graphene. In a nanotube, that "sheet" is rolled up, its opposite edges joined, to form a hollow cylinder.
These nanotubes' ultrathin bodies and smooth walls enable charges to flow through them more rapidly, and at a lower supply voltage, than is the case with silicon, making them potentially faster and more energy-efficient. Their smoother, more energy-efficient structures may also make them a more suitable material than silicon for denser "3-D" chip designs--which would not just put more transistors on a single 2-D surface the way conventional chips do, but layer those 2-D surfaces on top of one another to cram more computing power into a given space. Altogether, this combination of attributes has led to speculation about their permitting a thousandfold gain in computing speed over silicon--equivalent to about ten more doublings, and an additional couple of decades of Moore's Law-like progress--which as a practical matter would convert today's personal computers into machines comparable to today's supercomputers.
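The arithmetic behind that "ten more doublings" figure is easy to check. A minimal sketch, taking the speculative thousandfold speedup as its input:

```python
import math

# The speculative input: a thousandfold speedup over silicon (an assumption
# taken from the speculation above, not a measured figure).
speedup = 1000

# Each doubling multiplies speed by 2, so the number of doublings needed
# is log base 2 of the total speedup.
doublings = math.log2(speedup)
print(f"{doublings:.1f} doublings needed")  # ~10 doublings

# At one doubling every ~2 years (a common Moore's Law pacing), that is
# roughly two decades of continued progress.
print(f"~{doublings * 2:.0f} years at a 2-year doubling period")
```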
Of course, in considering all this one has to note the long record of great expectations and great disappointments where the mass-production of carbon nanotubes is concerned, not least because of the old "works well enough in the lab but not ready for real life" problem of consistently getting the required quality at a competitive price when producing at the relevant scale. (Back in 2014 IBM said it would have such chips in 2020. Well, 2020 has come and gone, with the arrival of the technology, like that of so many others, deferred indefinitely into the future.) Still, it is one thing to acknowledge that the technology has been slower to emerge than hoped, another to write it off--and it may well be that the belated arrival of carbon nanotube-based chips, and the boost they would deliver to the continued progress of computing power, will open the way to the next great round of advance in artificial intelligence--a sector that, it may turn out, has been held back from realizing its promises only by the limits of the hardware.
Understanding Moore's Law
Those of us attentive to computing are generally familiar with the existence of something called "Moore's Law." However, a really satisfying explanation of what Moore's Law actually is would seem a rarer thing--with one result being a great deal of confusion about what it means.
Simply put, Moore's Law has to do with "integrated circuits," or, in more everyday usage, microchips--small wafers ("chips") of semiconducting material, usually silicon, containing an electronic circuit. Within these chips transistors amplify, regulate and switch the electric signals passing through them, enabling them to store and move electronic data. Placing more transistors inside a chip means that more such activity can go on inside it at once, giving the chip, and the device incorporating it, more "parallelism"--the ability to do more at once, and therefore to work faster. All other things being equal one can only put more transistors on the same-sized chip if the transistors are themselves smaller--which means that the electrons passing through them travel shorter distances, increasing the speed at which the system executes its operations yet again.
Since their invention in the late 1950s microchip manufacturers have steadily increased the number of transistors in their chips, by shrinking transistor size--a process that also caused the cost of each transistor to fall. In 1965 electronics engineer Gordon Moore published a short paper titled "Cramming More Components Into Integrated Circuits" in which he noted that the "density at minimum cost per transistor" doubled every year. He extrapolated from that trend that in the next five years they would have chips with twenty times as many transistors on them, each costing just a tenth of their 1965 price, and that this pattern would continue for "at least ten years."
Moore's prediction (which, it is worth recalling, he never called a "law") was inexactly borne out during those years. He proved somewhat overoptimistic, transistor density not quite doubling annually, and today, in fact, different versions of this "law" get quoted with varying claims about doubling times. (Some say one year, some say eighteen months, some say two years, while claims about the implications for processing power and price also vary.) However, the swift doubling in the number of transistors per chip, and the fall in the price of computing power that went with it, continued for a lot longer than the ten years he suggested, going on for a half century past that point. The result is that where an efficiently made chip had fifty transistors on it in 1965, such chips now contain billions of transistors--all as the low price of these densely transistorized chips means that hundreds of billions of them are manufactured annually, permitting them to be stuffed into just about everything we use.
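The compounding involved, and just how much the assumed doubling period matters, can be sketched with a little arithmetic (illustrative figures only, using the fifty-transistor 1965 chip cited above):

```python
# Project transistor counts from the 1965 baseline under different assumed
# doubling periods. The spread in the results shows why the exact doubling
# time matters so much to claims about the "law."
BASE_YEAR, BASE_COUNT = 1965, 50

def transistors(year, doubling_period):
    """Transistor count implied by one doubling every `doubling_period` years."""
    doublings = (year - BASE_YEAR) / doubling_period
    return BASE_COUNT * 2 ** doublings

for period in (1.0, 1.5, 2.0):
    print(f"doubling every {period} years -> "
          f"{transistors(2020, period):.1e} transistors by 2020")
```

At the two-year pace the projection lands in the billions, in line with the chips actually manufactured today; at the one-year pace it overshoots wildly.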
Nonetheless, Moore's Law has certain in-built limitations. The most significant of these is the physical limit to transistor miniaturization. One cannot make a silicon transistor smaller than a nanometer (a billionth of a meter, the width of just a few atoms), after all, while even before one gets to that point shrinking makes transistors so small that the electrons whose movements they are supposed to control simply pass (or "tunnel") through their walls.
Of course, when Moore presented his "Law" the prospects of atomic-scale transistors, or even tunneling, seemed remote in the extreme. Transistors in 1971 were drawn on a ten micrometer (millionth of a meter) scale--ten thousand nanometers in the terms more commonly discussed today. However, by 2017 the transistors in commercially made chips were just a thousandth their earlier size, a mere ten nanometers across. The following year the mass-production of seven nanometer transistors got underway, leaving very little room for further size reductions.
This has led a good many observers to declare that "Moore's Law is dead," or will be before too much longer. The claim is controversial--perhaps more than it ought to be. After all, no one disputes that chip speeds cannot continue to increase on the basis of reducing the size of the transistors on silicon wafers--and that is exactly what Moore's Law was concerned with, not the possibility or impossibility of continued progress in computing power. The result is that those who are convinced that the tendency to the exponential increase of computing power is virtually bound to continue as before might do better to set aside claims for Moore's Law continuing, and instead speak of Ray Kurzweil's "Law of Accelerating Returns."
How Powerful Would a Genuinely Thinking Computer Have to Be?
Discussing the prospect of a computer matching or exceeding human intelligence we find ourselves forced to consider just how it is that we measure human intelligence. That in itself is an old and difficult problem, reflecting the reality that there remains considerable disagreement about just what precisely human intelligence even is. However, one approach that has been suggested is to consider the human brain as a piece of computer hardware, and attempt to measure its apparent capacity by the yardsticks we commonly apply to computers. Based on that we then identify the minimum hardware performance a computer would have to have in order to display human-like performance.
How do we go about this as a practical matter? By and large it has been standard to measure computing power in terms of the number of calculations a computer can perform per second. Of course, there are a variety of kinds of calculation, but in recent years it has been common to think specifically in terms of "floating-point operations," in contrast with simpler "fixed point" operations. (Adding 1.0 to 2.0 to get 3.0 is a fixed point operation--the decimal point is in the same place in all three numbers. However, adding 1.2570 to 25.4620 to get 26.7190 is a floating point operation, in that the decimal point appears in a different place in the numbers involved.) Indeed, anyone delving very deeply into the literature on high-end computers quickly encounters the acronym "FLOPS" (short for FLOating-point operations Per Second) and derivatives thereof, such as the "teraflop" (a trillion flops), the "petaflop" (a quadrillion flops--a thousand teraflops) and the "exaflop" (a quintillion flops--a thousand petaflops, or a million teraflops).
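The prefixes stack by factors of a thousand, which a few constants make concrete (a minimal sketch; "flops" here already includes the per-second, as the acronym's expansion indicates):

```python
# FLOPS unit prefixes: each step up is a factor of 1,000.
TERAFLOP = 10 ** 12   # a trillion floating-point operations per second
PETAFLOP = 10 ** 15   # a quadrillion: a thousand teraflops
EXAFLOP = 10 ** 18    # a quintillion: a thousand petaflops, a million teraflops

# The relationships stated above, checked directly.
assert PETAFLOP == 1_000 * TERAFLOP
assert EXAFLOP == 1_000 * PETAFLOP == 1_000_000 * TERAFLOP
print("unit relationships check out")
```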
With computers' performance measured in terms of floating-point operations per second, those speculating about artificial intelligence attempt to equate the human brain's performance with a given number of flops. Among others, Ray Kurzweil published an estimate in his 1999 book The Age of Spiritual Machines, since revised in his 2005 The Singularity is Near. His method was to take a part of the nervous system, estimate its performance in FLOPS, and extrapolate from that to the human brain as a whole. Working from the estimate that an individual synapse is equivalent in performance to a two hundred-flop computer, and that the human brain contains some hundred trillion synapses, he arrived at a figure of some twenty quadrillion (thousand trillion) floating-point operations per second--twenty petaflops--then suggested that the brain may actually run at about half that speed, with ten petaflops sufficing.
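Kurzweil's figure follows from straightforward multiplication; a sketch of the arithmetic as described above (the inputs are his estimates, not measured quantities):

```python
# Kurzweil's back-of-the-envelope brain estimate, as described above.
synapses = 100e12          # ~100 trillion synapses in the human brain
flops_per_synapse = 200    # each synapse treated as a ~200-flop processor

brain_flops = synapses * flops_per_synapse
print(f"{brain_flops:.0e} flops")      # twenty petaflops (2 x 10^16)

# His lower, "sufficing" figure is about half that: ten petaflops.
print(f"{brain_flops / 2:.0e} flops")  # ten petaflops (1 x 10^16)
```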
In considering this one should note that other analysts have used quite different approaches, from which they produced vastly higher estimates of the brain's performance. This is especially the case when they assume the brain does not produce consciousness at the level of nerves, but rather at the level of quantum phenomena inside the nerves. (Jack Tuszynski and his colleagues suggested that not tens of quadrillions, but tens of trillions of quadrillions, of operations per second would be required.) Of course such "quantum mind" theories (the best known exponent of which is probably The Emperor's New Mind author Roger Penrose) are extremely controversial--as yet remaining broadly philosophical rather than scientific in the strict sense, with no empirical evidence in their favor, and indeed critics regard such notions as mystical in a way all too common when people delve into quantum mechanics. Still, the idea that Kurzweil's estimate of just how much computing power a human brain possesses may be too low by a couple of orders of magnitude is fairly widespread, popular science articles commonly citing the figure of an exaflop (a thousand petaflops).
Still, it can be said that the most powerful supercomputers have repeatedly attained and increasingly surpassed the level Kurzweil suggested over the past decade. The Fujitsu "K" supercomputer achieved ten petaflops (ten quadrillion floating-point operations per second) back in November 2011. It also had a 1.4 petabyte memory, about ten times Kurzweil's estimate of the human brain's memory. Moreover, the K has been exceeded in its turn--by dozens of other supercomputers according to the latest (November 2021) edition of the TOP500 list of the world's fastest systems, in some cases by orders of magnitude. At the time of this writing the fastest appears to be yet another Fujitsu machine, Fugaku, with a performance of 442 petaflops--some forty times Kurzweil's estimate of human brain performance. And of course, computer scientists have set their sights higher than that. Among their efforts is a joint project by the Department of Energy, Intel and Cray to build Aurora, intended to be an exaflop-level machine--as a matter of course, running a hundred times as many calculations per second as Kurzweil's estimate of the human brain's performance--while even that seems modest next to a report this very day that the I4DI consortium is shooting for a 64 exaflop machine by the end of this year (some sixty-four times those higher estimates of the brain's performance, and over six thousand times Kurzweil's).
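The comparisons above reduce to simple ratios; a sketch (the machine figures are the cited TOP500 and reported numbers, the brain figures the estimates discussed above):

```python
# Brain-performance estimates discussed above (in flops).
kurzweil_estimate = 10e15   # ten petaflops, Kurzweil's lower figure
higher_estimate = 1e18      # the exaflop figure some articles cite

# Cited machine performance (in flops).
fugaku = 442e15             # Fugaku: ~442 petaflops
aurora = 1e18               # Aurora: exaflop-class target
i4di_target = 64e18         # the reported 64-exaflop I4DI goal

print(fugaku / kurzweil_estimate)       # ~44x Kurzweil's estimate
print(aurora / kurzweil_estimate)       # 100x Kurzweil's estimate
print(i4di_target / higher_estimate)    # 64x the higher estimate
print(i4di_target / kurzweil_estimate)  # 6,400x Kurzweil's estimate
```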
Reading this one may wonder why Kurzweil's hypothesis about such a computer matching or exceeding the brain's capacity has not already been tested, with results pointing one way or the other. The reality is that in practice supercomputers like these, which are as few as they are because they are so hugely expensive to build (Fugaku is a billion-dollar machine) and to run (their voracious energy consumption is a constant theme of discussion of such equipment), are normally used only by the biggest-budgeted researchers for the most computationally intense tasks--simulations of complex physical phenomena such as the Earth's climate or the cosmos, or code-breaking by intelligence services. They have only rarely been available to artificial intelligence researchers. However, the recent enthusiasm for artificial intelligence has reportedly been cited as a factor in the development of the next round of supercomputers (not least because of the utility of AI in facilitating their work).
Especially with this being the case it seems far from impossible that the next generation of machines will yield new insights into the subject--just as the faster computers of this past decade permitted the striking advances we saw in areas like machine learning. Indeed, even as the recent excitement over artificial intelligence turns into disappointment with the realization that the most-hyped applications (like Level 5 self-driving) are more remote than certain loud-mouthed hucksters promised, the continued expansion of computing power offers considerable grounds not to write those prospects off just yet.
Tuesday, January 18, 2022
What The Magnificent Ambersons Can Teach Us About Technological Change
I remember reading Booth Tarkington's The Magnificent Ambersons years ago and finding it a rather slight, tepid tale--so much so that I found it hard to understand why Orson Welles, after giving us his ferocious epic Citizen Kane, picked it for a follow-up (and suspected that it was because slight and tepid was what he wanted after the sheer hell he went through making his first movie).
Still, some bits of the novel have stuck in my mind, not least those having to do with the emergence of the automobile. As men like Eugene Morgan toiled on the vehicles the broader public tended to look at them ironically--an attitude epitomized by the way idiot vulgarians would yell "Git a hoss!" at anybody with a car, delighting especially in the sight of some motorist stuck repairing a malfunctioning vehicle. However, the technology progressed, and the world changed greatly, leaving the "Git a hoss!"-yelling oafs looking foolish.
Tarkington depicted the shift with some nuance, with this striking me as especially the case in the scenes regarding Aunt Fanny's investment in the headlight manufacturer. She was impressed by a demonstration of the technology, but Morgan, who at this point was growing wealthy from having got into the automotive revolution "on the ground floor," explained that while the headlight in question worked "well enough in the shop," on the road it could only stay lit if a car was going at high speed (twenty-five miles an hour minimum, fifty miles an hour for full illumination)--which meant that the light failed if the motorist drove any more slowly, greatly limiting the practical usefulness and salability of the technology. Morgan acknowledged that work to improve it continued, but that for the time being she had best eschew putting her money into the company. However, Fanny went ahead and put her money into the company anyway, and ended up broke.
It is as striking a dramatic illustration as I can remember in a major novel of how amid a time of technological flux people go from dismissing a technology altogether to being utterly credulous as a great deal suddenly seems possible--and how at such a moment the word "startup" can seem synonymous with "gold mine." It is a striking illustration, too, of how what works "in the shop" may not be ready for the street just yet, or any time soon, or even ever--with the innovation in question perhaps likely to come out of a different shop, a different startup, than the one that initially caught the eye: a latecomer that nonetheless proved to be the one that made the thing work, or at least cut the deals that got it to market when it became workable.
The more sophisticated technology-watchers, of course, understand this, and indeed NASA developed an excellent system for judging these matters (its Technology Readiness Level scale) with which I think everyone who cares about these issues should acquaint themselves. Of course, to go by what passes for "science journalism," which consistently, overwhelmingly, shows and promotes to its audience Fanny's unsophistication rather than Morgan's astuteness, few bother to do so--or in any other way come to understand what Tarkington was able not just to explain but to dramatize in his novel a century ago.
Thursday, January 6, 2022
Putting the Reports About the Thwaites Glacier Into Perspective
Last month reports that the Thwaites glacier in Antarctica is likely to collapse within a number of years (as few as three in some of the reports) made the rounds of the news, along with projections that the event could by itself raise sea levels by two feet, and lead to further collapses producing a ten-foot sea level rise, inundating all the world's coastlines.
The tone in which all this was reported, of course, made "could" seem like "will," and did not disabuse anyone not reading closely of the impression that the maximum sea level rise anticipated here would happen instantaneously.
Even worse than most in this regard is the title of the piece in Rolling Stone--"'The Fuse Has Been Blown,' and the Doomsday Glacier Is Coming for Us All."
This flatly tells us not that something potentially very bad may be happening not very far from now, or even that something actually very bad is inevitably happening, but that the worst has already happened.
That is, of course, not actually the case--as the article itself makes clear if one reads it rather than just Retweeting the headline. In spite of Rolling Stone's propensity for "collapse porn" (it was through their pages that I became acquainted with the writing of James Howard Kunstler, whom Leigh Phillips has described as "hav[ing] a veritable hard-on for the end of the world, imagining with relish . . . collapse . . . retreat from modernity and an embrace of the Medieval"), Jeff Goodell's article is considerably better-informed, more factually grounded and intellectually nuanced than the great majority of the other items on the subject I have seen thus far. Goodell (who, by the way, writes that collapse may come inside a decade--rather than the five and even three year periods so many others are talking about) acknowledges that the report's findings are not the surprise much of the media seems to think they are (Goodell himself wrote a noted piece on Thwaites back in 2017), and that there are enormous uncertainties not only regarding when and even if Thwaites will crack up, but what would happen afterward. He notes that this does not completely exclude scenarios where it may not make the (already pretty terrible) picture too much worse. And even the really bad ones (ten feet of sea rise) would likely be a matter of a century.
The result is that the title is a real shame--but all too revealing of how the media has tended to report on this subject (as it does on a great many others), prioritizing shock and fear over comprehension, and getting away with it because of what passes for "reading" these days. Still, even this piece lacked something that I think we should be seeing more of, hearing more of, when discussing matters like the Thwaites glacier and climate change generally--solutions. Obviously reducing the accumulation of greenhouse gases in the atmosphere (decarbonizing our energy-transport system, planting more trees, etc.) is central and indispensable to the ultimate, bottom-line, long-term solution to the larger problem of global warming, and no one serious about the matter suggests not doing so. Yet there has also long been discussion of the use of other ameliorative strategies that can help us cope with particular effects of such warming as we seem unlikely to be able to avoid, like saving glaciers through engineering efforts (and indeed, the idea of rescuing Thwaites in this specific manner is not unprecedented). There is no doubt that the schemes are ambitious, relying on unproven technologies--but it would certainly seem that given the stakes we should be hearing A LOT of calls for programs to develop and deploy anything that will help. However, the championing of such ideas is exactly what we do not see when we look at coverage of the issue.
Of course, it is the media's job to be skeptical--but it goes about that job in awfully "selective" fashion. Consider, for example, the New York Times Magazine's 2019 piece on hypersonic missiles. Hypersonic missiles, certainly, are a radical, far from proven technology--but that does not stop the NYT Magazine piece from being hawkish in the extreme about the claims of such missiles' development being a national priority for the United States. I cannot think of a single occasion when I saw an article in the Times and its associated publications, or any other media outlet of comparable standing, try so hard to sell its readers on the importance of a specific technological program that could help with our environmental problems, or show so much respect for the proponents of such a program, or so uncritically embrace their optimism about the feasibility and value of that technology, as they do in the case of those missiles. Instead they strain for any excuse to dismiss such a project, and close on a note of "Don't get your hopes up." And that says everything about the media's prejudices, not least that preference for fear-mongering and defeatism on the subject--to the very great cost of the dialogue on these matters, and our chances of actually dealing with the problem.
Monday, January 3, 2022
I Don't Want to Hear Another Word About Climate Change (Unless You're Going to Tell Us What We're Going To Do About It)
I am not and have never been a climate change denier. I do not dispute that there is an ongoing, anthropogenic and rapidly progressing process of climate change, driven mainly by emissions of greenhouse gases (GHGs) like carbon dioxide and methane, taking a large and rising toll on the natural environment, and on human life and property (already in 2009 one report estimated 300,000 lives a year--and, given the likelihood of the figure rising rather than falling, implying deaths in the many millions to date). Indeed, I would go further than most in saying that the damage we are seeing indicates that the process has already advanced to an unacceptable degree--that the world is already too hot--and that even were we to go carbon-neutral today the GHGs already in the atmosphere would mean decades more heating, with second-order effects such as thawing permafrost likely to intensify the heating that much further, so that even if we did far, far more than we have done to date the survival of human civilization, and perhaps even the human species itself as vast portions of the planet become literally uninhabitable, may be in doubt . . .
Of course, if one recognizes that all this is real then no reasonable person can expect the media to stop talking about it, can they?
No, of course not. But it is also undeniable that what the mainstream media has done is inflict an incessant hard rain of bad news on an already terrified public, while being relentlessly negative about any and every possible way of seriously redressing the problem. It has treated renewable energy (like solar) with disdain, and even as those technologies win victory after victory in the marketplace it still never misses a chance to badmouth them (to the delight of the pro-nuclear trolls, whose activity seems to be way, way up these days--just in case you thought that you'd heard the last of them). It sneers at any talk of a Green New Deal. It falls all over itself trying to criticize even the idea of cellular agriculture, and "Transportation-as-a-Service." It scarcely acknowledges the existence of ideas to save the world's glaciers. And of course, its hostility to geoengineering of any type has been relentless.
In short, after a long period of drawing a false equivalence between acknowledgment of climate change and climate denial it can seem to have shifted (intentionally or unintentionally) to a narrative of climate defeatism, utterly determined to beat down any hope of useful action whatsoever--which leaves us in the same place in the end as that denial to which it was so much an accessory (while, one might add, saddling the powerless with enormous guilt, because somehow not the politicians, not the CEOs, but they, are responsible for it all, especially if they ever ate a burger in their life).
In response I offer a modest proposal. Ordinarily I do not think that it is reasonable to demand that someone pointing out a problem also have a detailed solution in hand. In fact I tend to think of this as a way of suppressing the dialogue over an issue, and thus also the efforts to deal with the problem. However, as we already know how bad things are--are already literally becoming sick over the knowledge of how bad things are as the depressingly, cripplingly, overfamiliar news is pounded into our heads over and over and over again; and as it seems to me that there are a multitude of ideas that could help (and I don't mean the "hairshirt," agonize-over-your-personal-carbon footprint stuff, or even just decarbonizing our electric grid, but also ameliorative stuff like glacier preservation and kelp farming and direct air capture) that ought to be getting far more discussion, and at least some of which seems to me to be worthy of the genuine, massive backing that alone can speed its development and implementation--it is time for the coverage, any coverage we are to take seriously as anything but a promotion of climate defeatism, to start emphasizing what can be done, at length and in detail and as soon as possible, and show the greatest possible rigor in thinking through and explaining those solutions.
This is, of course, not what the media does. It traffics in fear. And intelligent explanation of problems, never mind solutions, has never been its forte. Yet it is the climate coverage we need to see because we have already long, long passed the point where merely getting people frightened becomes counterproductive.
Sunday, November 21, 2021
Revisiting the Battle of France
ORIGINALLY PUBLISHED ON JULY 11, 2015
These past months have marked the seventy-fifth anniversary of the Battle of France--the principal subject of William Shirer's The Collapse of the Third Republic.
Reading Shirer's book it seems plain that the disaster was overdetermined. Relative to Germany, France was a declining power--demographically, industrially, and therefore also militarily. And of course, France's government was far from making the most of what resources it did have, participating with Britain in making mistake after mistake. Their repeated failure to oppose Germany's overturning of Versailles (the reinstitution of conscription, the march into the Rhineland, the annexation of Austria, the destruction of Czechoslovakia) strengthened Germany's strategic position by increasing its resources and depriving France of crucial allies, while diminishing the credibility of the Western powers--so that Belgium opted for neutrality, and Hitler was emboldened. The hesitation of the French government to come to terms with the Soviet Union cleared the way for Molotov-Ribbentrop, and the attack on Poland--during which the Western Allies let slip a significant opportunity by failing to attack western Germany while its forces were engaged in the east.
Still, even if being confronted with Hitler in the first place was the result of a staggering chain of miscalculations (some more understandable than others), when the Battle of France began, the Western Allies were hardly at a disadvantage in either numbers or quality of equipment (except, perhaps, in the air), one reason why their quick collapse in 1940 came as such a shock at the time--one that staggered contemporary observers, who groped for explanations.
Alongside the profound failures of diplomacy and grand strategy, there was military failure, with much often made of the French leadership's failure to appreciate the significance of the battle tank.1 However, the fact remains that the French had about as many tanks as the Germans, and more powerful tanks. It would therefore be more accurate to say that they failed to appreciate the offensive potential of the tank--and that this was not a matter of simple technological short-sightedness, but larger and deeper mental blinders.
Prior to World War I, the French army had been gripped by the "cult of the offensive"--the mind-over-matter stupidity that held the elan of French soldiers would somehow overcome machine gun fire. Some have attributed this to the influence of the deeply anti-rational Modernist philosopher Henri Bergson--which seems to make about as much sense as basing military theories on T.S. Eliot's "The Waste Land."
A million or so dead in No Man's Land later, the idea finally sank in--and in a turnaround worthy of Moliere's Orgon, the French army switched from the cult of the offensive to the "cult of the defensive." This was reflected in a whole host of ways, like the French doctrine regarding responses to enemy breakthroughs. Rather than massing their forces for a counterattack, the French dispersed them to "prevent infiltration"--and were quick to fall back in the interest of maintaining a continuous defensive line. The accent on the defensive, the dismissal of the offensive, also meant a lower premium on the speed of movement, or on the communications technology necessary to conduct a fast-moving battle. Thus, the French army made do with motorcycle-riding dispatch riders instead of teletype and radio as ways of delivering orders drawn up and acted upon in leisurely fashion.
Given these assumptions it was natural for the French high command to think of tanks as infantry support weapons, to be dispersed among its infantry units rather than concentrated into a handful of hard-hitting divisions. Natural, too, that it use rail cars to move its tanks over even short distances, and that it not bother to equip its tanks with radios (and that the air force neglect close-air support).
In short, the French high command's appraisal of the tank reflected its broader appraisal of warfare more generally--a product of deep national trauma and equally deep attachment to the apparent causes of past victories, the hold of which was the greater because in the French army the senior officers of World War I were, often, the senior officers of the '30s (Maurice Gamelin, Maxime Weygand, Philippe Petain).
It was by no means the only mistake the French commanders made in organizing the country's defense. The lack of unified command (the French Army had three separate headquarters, as well as an incoherent division of responsibility between Generals Gamelin and Georges); the astonishingly wrong-headed dismissal of the possibility of an armored charge through the Ardennes so that only scanty and weak forces covered the area (despite the warnings of, among others, Basil Liddell Hart); the failure to provide a reserve in case of a breakthrough (extreme even by the standards of the cult of the defensive); the failure to properly utilize the air force (much of the fleet of aircraft never even going into action)--and of course, the retention of the superannuated old men in charge, which contributed to so much of the above (the accounts of their enervation are astonishing)--were each colossal in themselves.2
It might be noted, too, that the Western Allies having arrived at this juncture, not all the mistakes were France's. The British were no more far-seeing when it came to the new style of warfare, the Belgians too slow to prepare for the German attack, let alone collaborate with the British and French. Still, the weaknesses in the essential doctrine and organization of the French forces ensured that there was no recovery from these mistakes. That propensity for dispersing forces and falling back, that slowness to make its moves, turned retreats into routs, disorganized what resources remained available, and left the commanders further and further behind the evolving situation so that the counterattack which could have thwarted the invasion never materialized.3 Instead the Germans' high-risk gamble on an armored thrust through Sedan to the Channel paid off beyond their wildest expectations.
In hindsight, it all seems a reminder of the price of drawing the wrong lessons from history; the worship of old pieties making open questions seem as if they had been settled for all time; failing to appreciate the ways technology interacts with culture and organization, and the latter's hampering the former, even while being disrupted by it; and the staggering mediocrity, delusion and incompetence of those who typically wind up in high office; problems by no means exclusive to the military sphere.
1. The claims about a "Maginot mentality" scarcely seem worth discussing. Fortifications, when properly conceived, built and maintained, are not a substitute for other forms of military power, but a support to them--and the French seem to have appreciated this. The Maginot Line, which the German commanders knew better than to take head-on, did its job by confining their attack to limited portions of the frontier. Additionally, the French were never totally reliant on it, having as they did the field army that they moved into Belgium, and which they could have positioned to block the attack through the Ardennes.
2. Weygand's mind, appallingly, seemed stuck not even in the last war (1914-1918), but the war before that (the 1870 Franco-Prussian War), the French "generalissimo" obsessing over combating an imaginary Communist takeover of Paris as German forces swept to crushing victory.
3. Overwhelming as the detail in Shirer's account of the campaign can seem, it does masterfully show exactly how the failures of doctrine, organization and the commanders' nerve led to tactical, operational and strategic disaster, where a reader of a briefer and simpler account would have to be content with the generalities.
These past months have marked the seventy-fifth anniversary of the Battle of France--the principal subject of William Shirer's The Collapse of the Third Republic.
Reading Shirer's book it seems plain that the disaster was overdetermined. Relative to Germany, France was a declining power--demographically, industrially, and therefore also militarily. And of course, France's government was far from making the most of what resources it did have, participating with Britain in making mistake after mistake. Their repeated failure to oppose Germany's overturning of Versailles (the reinstitution of conscription, the march into the Rhineland, the annexation of Austria, the destruction of Czechoslovakia) strengthened Germany's strategic position by increasing its resources and depriving France of crucial allies, while diminishing the credibility of the Western powers--so that Belgium opted for neutrality, and Hitler was emboldened. The hesitation of the French government to come to terms with the Soviet Union cleared the way for Molotov-Ribbentrop, and the attack on Poland--during which the Western Allies let slip a significant opportunity by failing to attack western Germany while its forces were engaged in the east.
Still, even if being confronted with Hitler in the first place was the result of a staggering chain of miscalculations (some more understandable than others), when the Battle of France began, the Western Allies were hardly at a disadvantage in either numbers or quality of equipment (except, perhaps, in the air), one reason why their quick collapse in 1940 came as such a shock at the time--one that staggered contemporary observers, who groped for explanations.
Alongside the profound failures of diplomacy and grand strategy, there was military failure, with much often made of the French leadership's failure to appreciate the significance of the battle tank.1 However, the fact remains that the French had about as many tanks as the Germans, and more powerful tanks. It would therefore be more accurate to say that they failed to appreciate the offensive potential of the tank--and that this was not a matter of simple technological short-sightedness, but larger and deeper mental blinders.
Prior to World War I, the French army had been gripped by the "cult of the offensive"--the mind-over-matter stupidity that held the elan of French soldiers would somehow overcome machine gun fire. Some have attributed this to the influence of the deeply anti-rational Modernist philosopher Henri Bergson--which seems to make about as much sense as basing military theories on T.S. Eliot's "The Waste Land."
A million or so dead in No Man's Land later, the idea finally sank in--and in a turnaround worthy of Moliere's Orgon, the French army switched from the cult of the offensive to a "cult of the defensive." This was reflected in a whole host of ways, like the French doctrine regarding responses to enemy breakthroughs. Rather than massing their forces for a counterattack, the French dispersed them to "prevent infiltration"--and were quick to fall back in the interest of maintaining a continuous defensive line. The accent on the defensive, the dismissal of the offensive, also meant a lower premium on speed of movement, or on the communications technology necessary to conduct a fast-moving battle. Thus the French army made do with motorcycle-borne dispatch riders instead of teletype and radio as ways of delivering orders drawn up and acted upon in leisurely fashion.
Given these assumptions it was natural for the French high command to think of tanks as infantry support weapons, to be dispersed among its infantry units rather than concentrated into a handful of hard-hitting divisions. Natural, too, that it use rail cars to move its tanks over even short distances, and that it not bother to equip its tanks with radios (and that the air force neglect close-air support).
In short, the French high command's appraisal of the tank reflected its broader appraisal of warfare more generally--a product of deep national trauma and equally deep attachment to the apparent causes of past victories, the hold of which was the greater because in the French army the senior officers of World War I were, often, the senior officers of the '30s (Maurice Gamelin, Maxime Weygand, Philippe Petain).
It was by no means the only mistake the French commanders made in organizing the country's defense. The lack of unified command (the French Army had three separate headquarters, as well as an incoherent division of responsibility between Generals Gamelin and Georges); the astonishingly wrong-headed dismissal of the possibility of an armored charge through the Ardennes so that only scanty and weak forces covered the area (despite the warnings of, among others, Basil Liddell Hart); the failure to provide a reserve in case of a breakthrough (extreme even by the standards of the cult of the defensive); the failure to properly utilize the air force (much of the fleet of aircraft never even going into action)--and of course, the retention of the superannuated old men in charge, which contributed to so much of the above (the accounts of their enervation are astonishing)--were each colossal in themselves.2
It might be noted, too, that, the Western Allies having arrived at this juncture, not all the mistakes were France's. The British were no more far-seeing when it came to the new style of warfare, the Belgians too slow to prepare for the German attack, let alone collaborate with the British and French. Still, the weaknesses in the essential doctrine and organization of the French forces ensured that there was no recovery from these mistakes. That propensity for dispersing forces and falling back, that slowness to make its moves, turned retreats into routs, disorganized what resources remained available, and left the commanders further and further behind the evolving situation, so that the counterattack which could have thwarted the invasion never materialized.3 Instead the Germans' high-risk gamble on an armored thrust through Sedan to the Channel paid off beyond their wildest expectations.
In hindsight, it all seems a reminder of the price of drawing the wrong lessons from history; the worship of old pieties making open questions seem as if they had been settled for all time; failing to appreciate the ways technology interacts with culture and organization, and the latter's hampering the former, even while being disrupted by it; and the staggering mediocrity, delusion and incompetence of those who typically wind up in high office; problems by no means exclusive to the military sphere.
1. The claims about a "Maginot mentality" scarcely seem worth discussing. Fortifications, when properly conceived, built and maintained, are not a substitute for other forms of military power, but a support to them--and the French seem to have appreciated this. The Maginot Line, which the German commanders knew better than to take head-on, did its job by confining their attack to limited portions of the frontier. Additionally, the French were never totally reliant on it, having as they did the field army that they moved into Belgium, and which they could have positioned to block the attack through the Ardennes.
2. Weygand's mind, appallingly, seemed stuck not even in the last war (1914-1918), but the war before that (the 1870 Franco-Prussian War), the French "generalissimo" obsessing over combating an imaginary Communist takeover of Paris as German forces swept to crushing victory.
3. Overwhelming as the detail in Shirer's account of the campaign can seem, it does masterfully show exactly how the failures of doctrine, organization and the commanders' nerve led to tactical, operational and strategic disaster, where a reader of a briefer and simpler account would have to be content with the generalities.
Tuesday, October 12, 2021
Making Sense of Keir Starmer
The New Statesman recently reported a poll showing the confusion among the British public regarding Keir Starmer's politics.
The confusion did not surprise me in the least, for three reasons.
1. The extreme sloppiness with political language seen across society. Words like "conservative," "liberal," "neoliberal," "socialist" all have well-established, quite coherent and useful meanings, but even supposed "pundits" (another horribly misunderstood and misused word given that "pundit" literally means "learned" and so many of them are anything but) do not even seem aware of those meanings. Even when they give the impression that they have sufficient mental capacity to grasp the concepts (this does not happen often) they show those meanings little respect--at their worst falling back on the postmodernist claptrap that all speech is language games, and no definition any more or less valid than any other. The result is that even those approaching the issue with genuine knowledge and in good faith have a hard time making themselves understood in such a situation.
2. The generally lousy job the press does of clarifying the issues of the day and the positions of prominent political figures toward them--instead subjecting the reader or viewer to an incessant rain of details without context, worsened by its preference for politics to policy, its fear of analysis, its "both sidesism," its tabloid foolishness.
3. The reality that politicians find it useful to not be too clear on what they stand for. This is hardly new to the neoliberal age, but it has arguably become a wider problem in it because just about anyone near the mainstream is forced to stand on an economic platform that has never been popular, and which has grown ever more deeply unpalatable to the broad public. Accordingly the "center-right" pretends to be more moderate than it really is, the "center-left" pretends to be more left than it is (in Britain, hardline neoliberals from the Conservative Party posing as "One Nation conservatives," and their not terribly different counterparts from Labour calling themselves "socialists").
These days the game is wearing thin--and one cannot claim that Keir Starmer has refused to play it. In presenting himself to the public he has called himself a "socialist," made quite the list of social democratic promises, and heavily invoked the spirit of "Old Labour" (which Blair never missed a chance to put down). Still, close-reading his more substantive statements about how he sees the world and what he is prepared to do, especially those made as he got further and further away from winning his party's leadership contest, it often seemed to me that he sounded far more Blair than Bevan, his more leftish promises seeming to fall by the wayside. (You can find my detailed take on this here.)
Those inclined to appraise Starmer generously may see him as a well-intentioned figure walking a fine line in difficult circumstances--wanting to take another path while anxious about offending elites, and a party Establishment, still deeply committed to neoliberalism, no matter how it has disappointed, or grown discredited. Those grading less generously, however, can see him as only the latest in that very, very, very long line of neoliberals who pretended to be something other than what they were, producing what they see as the train wreck that is "fuel crisis Britain"--and indeed it strikes me as neither surprising nor inappropriate that British voters identify him more with Tony Blair than anyone else.
Saturday, July 17, 2021
The Other Road to Singularity
In his landmark 1993 paper "The Coming Technological Singularity: How to Survive the Post-Human Era," Vernor Vinge raised four possible paths to the "intelligence explosion" that is the Singularity. The first three involved computers--1. the advent of a computer with what we would call strong "artificial intelligence"; 2. a computer network's more spontaneously developing such intelligence; 3. an integration of human and computer intelligence through advanced mind-machine interfaces (allowing, for example, a human brain with its superior pattern recognition and other abilities to access the vastly superior computational capacities and data storage of computer systems); and 4. a purely biological enhancement of human intelligence.
It is, of course, the case that we hear most about number one, somewhat less about two and three, and least of all about number four, the non-computer, biological option.
Why is that the case?
My impression is that this reflects the observable rate of progress in those areas. Where the computing sector has had "Moore's Law," and the geometrical expansion of the computing power available at any given price (and total computing power on Earth) that goes with it, the medical sector's feats have been . . . less impressive. Indeed, the contrast has been sufficiently pointed, and noted, that the term "Eroom's Law" has been coined to refer to how the development of treatments and cures has, rather than becoming quicker and cheaper in the manner of computers, become slower and more expensive instead.
Amid all that it has for decades been far easier to picture rising computer power beating biotechnology to the finish line in the race to produce a greater-than-human intelligence. In fact, even with some questioning the likelihood of Moore's Law's continuing as microchip fabrication reaches the apparent limits of the possible in silicon, and the long-awaited substitutes proving to be (like so much else these days) "behind schedule," it can still seem that way.