Tuesday, May 17, 2022
I remember that around the turn of the century Thomas P.M. Barnett emerged as a national security counterpart to Thomas Friedman, devoting himself to explaining just how, exactly, "McDonnell Douglas" would back up "McDonald's."
Barnett's conformity to the globalization-celebrating conventional wisdom of the day made him the sort of fashionable Public Intellectual who got written up in places like Esquire (the magazine whose "foreign-policy guru" he subsequently became).
I was rather less impressed than the folks at Esquire. Still, Barnett was astute enough to acknowledge that the whole thing could unravel. In his book Blueprint for Action he cited Admiral William Flanagan on the possibility that "the 1990s might be a replay of the 1920s," raising "the question . . . What would it take for the 2000s to turn as sour as the 1930s?"
The analogy seems to me fair enough. Like the 1920s, the 1990s were a period after the end of a major international conflict which, it was hoped, would never be followed by another like it. Those optimistic about the trend of things (at least, in the politically orthodox way) imagined that conflict's end as auguring the arrival of a more orderly, peaceful--and prosperous--world, and many of the parallels were quite striking. On both occasions the U.S. had emerged from the conflict as victor, hyperpowered arbiter of the world's fate and, in the "American way," the pointer to everyone else's future. Both decades saw a financial boom, euphoria over a supposedly epochal revolution in productivity and consumerism bound up with new technology, and immense self-satisfaction about freer, "liberated" lifestyles. And on both occasions people ignored anything that gave the lie to their illusions, dismissing the financial and international crises as mere bumps on the road, quite manageable by deified Overseers of the Global Economy, and dismissing the stirrings of radicalism at home and abroad (the country certainly had its "status politics," its "culture wars") as a much-ado-about-nothing on the wrong side of the end of history.
Considering it Barnett--who, it must be noted again, was fashionable because he was conventional--was on the whole optimistic that the challenges could remain manageable in the long term. (Hence that Blueprint for Action.) Those taking a less sanguine view of these developments thought otherwise, and they have since proven correct: just as the illusions of the '20s died, so did those of the '90s.
When we look back at the '20s the tendency is to think of them as having come to an end in 1929, with "the Great Crash." Of course, the end of the mood we associate with the decade was not so tidy. Still, the illusions of the '90s seem to have been a longer time dying than those of the '20s, expiring only a bit at a time, with the way things played out enabling the denials to last longer. One may recall, for example, the rush to declare the financial crisis that broke out in 2007-2008 past--such that even an Adam Tooze, buckling down to study the event properly, was surprised to conclude in a book published a decade later that it had never gone away. It still has not, instead merging with other crises, like the COVID-19 pandemic and its attached economic crisis (also not behind us, even if some pretend otherwise), into something bigger, worse and scarier. (How big, bad and scary? By one calculation, even before the pandemic economic growth rates had either virtually flatlined or turned significantly negative for most of the planet--which makes the backlash against neoliberalism--the votes for Trump, Britain's exit from the EU, and all the rest--that much less surprising.) Meanwhile, if any doubts remained after a decade of intensifying and increasingly militarized conflict among the great powers, the war in Ukraine has made it very, very clear that open, large-scale, sustained interstate warfare in the middle of Europe, and escalating confrontation between NATO and Russia, are a significant and worsening part of our present reality.
Looking at the news I do not get the impression that very many have properly processed this fact yet. But the neo-'20s mood that characterized the '90s, and that lingered in varying ways and to varying degrees long after the years on the calendar ceased to read 199-, seems ever more remote these days, and any indication otherwise ever more superficial.
Thursday, May 5, 2022
Did Anyone Actually Read Paul Kennedy's The Rise and Fall of the Great Powers?
I have recently remarked on what makes for a nonfiction bestseller generally--a formula which, of course, leaves little space for anything that could be called "history." We do see history reach a wider audience--but only within the public's demand for the affirmative and the entertaining. Thus it is what Michael Parenti called gentlemen's history--history by, of and for the comfortable, who are supposed to feel comfortable during and after reading it; history which is conservative and "patriotic" (in the sense of loyalty to those in power, rather than to their country's well-being) and, in line with all that, self-congratulatory (from the standpoint of the elite in question).
Meanwhile, in tending to be Great Man-centered it tends toward the personal and the narrative--toward, indeed, being biography rather than history. (As A.J.P. Taylor remarked, the two genres are actually very different--in the former the individual is everything and society nothing; in the latter, the individual nothing and society everything.) It also tends, even while presenting its figures in a heroic light, toward the gossipy. (Taylor remarked, too, that a "glamorous sex life" was a prerequisite for a successful biography.)
As Jeremy Black demonstrates, all of this translates over to military history, which is dominated by the biography-memoir-operational account--by the Great Captain subgenre of the Great Man genre, in which such Captains are presented as the dominating figures of the Decisive Battles of History, the same battles over and over and over again (with Britain's portion of the Napoleonic Wars, the U.S. Civil War, and the portions of the two world wars those countries experienced pretty much exhausting the more popular market in Britain and the U.S.).
One may add that, even in comparison with much other history, it tends especially heavily to the conservative and patriotic--to the hero-worship of generals, nationalistic flag-waving and the rest.
All of this was much on my mind when considering the reception of Paul Kennedy's The Rise and Fall of the Great Powers. Certainly a work of history, and very reasonably readable as a work of military history, it stayed on the New York Times hardcover nonfiction bestseller list for 34 weeks--in spite of its being a very different book indeed. Far from offering personal narrative, in it Kennedy presents an academic thesis resting on a detailed examination of five hundred years of Western and world history, where the "characters" are not individuals but entire nations and empires, whose development and clashing, ascent and descent, are construed not as the deeds of so-called Great Men but as the working-out of the hard material facts of geography, technology, demographics, industries and institutions. Of battles, campaigns and wars there are plenty, but little of tactics and strategy and even less of generalship, with what really mattered being the way resources, and the matching of resources to objectives, told in the crunch.
Covering so much territory even in a seven-hundred-page volume, of course, means that Kennedy treats any one bit in only so much detail (as is all the more evident if one compares it to, for example, his earlier, Britain-focused treatment of the same theme in The Rise and Fall of British Naval Mastery, which, by the way, I recommend highly to anyone interested in the subject). Still, the quantitative data alone is, by the standard of popular works, immense, as attested by the inclusion of over fifty charts and tables, with the academic character of the work underlined by the 83 pages of notes and 38 pages of bibliography appended to the five-hundred-plus-page main text. Kennedy writes clearly and well, but it is an undeniably data-heavy, analytically oriented work, with no attempt to enliven the proceedings with what an editor might call "color."
And of course, it was anything but self-congratulatory in the sense discussed here.
Considering Kennedy's book I find myself also considering another major--and similarly unstereotypical--bestseller of 1988, Stephen Hawking's A Brief History of Time. Hawking's book was much shorter (256 pages to the 677 pages of Kennedy's), and while intellectual hierarchy-addicted morons of the kind Hollywood writes for take it as a given that physics is the most demanding field of intellectual endeavor, the reality is that even by pop science standards it seemed to me "easy," while, I might add, Hawking's tone was sprightly. He clearly meant to produce a book that a broad audience could get something out of, and in my view did so. Kennedy's book was most certainly not that. The result is that, if Hawking's book is, as I have seen it called, the best-selling unread book in history, I would imagine that very few bothered to read Kennedy's book all the way through--an opinion that Kennedy himself seems to share. He has publicly remarked--joked?--that he didn't "think many people read more than the final chapter on the US and the USSR"--and I would imagine that many more still simply knew the alleged contents of that chapter secondhand.
Wednesday, May 4, 2022
Emmanuel Todd, China and the Graying of the World
In a recent interview the sociologist and demographer Emmanuel Todd, discussing China's rise, argued that the country's far-below-replacement fertility rate (which Todd puts at 1.3 children per woman) makes visions of the country as hegemon unlikely, for the simple reason that its labor force is bound to contract sharply, with massive implications for its already slowing economy and its national power.
Considering this I find myself thinking of three counterarguments:
1. The 1.3 Total Fertility Rate (TFR) for China was registered in the wake of the pandemic, with its associated economic and other stresses. Before that it was up at about 1.7--a significant difference, such that rebound is hardly out of the question.
2. Even if one takes the 1.3 TFR as a "new normal" for China, the trend in question is not only evident across its neighborhood, but actually more advanced in many neighboring countries. (Japan's TFR was scarcely above that before the pandemic, just 1.36, while South Korea's slipped below 1 in 2018 and was at 0.92 in 2019, according to World Bank figures.)
3. Even if the drop were to go further in China than elsewhere, China, with a population of 1.4 billion, would even after a much more drastic demographic contraction than its neighbors' (a scenario hardly in the cards) still be a colossus relative to those states (Japan today has scarcely an eleventh of China's population, South Korea a twenty-eighth).
Still, China's contraction is coming at a point at which it is rather poorer than neighbors like Japan and South Korea (with a per capita Gross Domestic Product of about $10,000 a year, versus $40,000 for Japan and $30,000 for South Korea)--and it is already seeing its economic growth slow sharply (those legendary 10 percent a year rates a thing of the past, with the 2012-2019 average more like 7 percent and still falling). The result is that the demands of an aging population could weigh that much more heavily on its resources.
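For readers who want to see the arithmetic behind these comparisons, here is a minimal sketch in Python using the rounded figures above; note that the absolute population counts are my own approximations, not numbers from this post:

# Back-of-the-envelope check of the scale comparisons above, using the
# rounded figures cited in the post plus approximate 2021 populations
# (the population counts are my assumptions, not from the post).
populations = {"China": 1_412_000_000, "Japan": 125_000_000, "South Korea": 51_000_000}
gdp_per_capita = {"China": 10_000, "Japan": 40_000, "South Korea": 30_000}  # USD/year, rounded

for country in ("Japan", "South Korea"):
    ratio = populations["China"] / populations[country]
    print(f"China's population is roughly {ratio:.0f} times {country}'s")
    # -> ~11x Japan, ~28x South Korea, matching the fractions in the post

for country, pop in populations.items():
    total = pop * gdp_per_capita[country] / 1e12
    print(f"{country}: ~${total:.1f} trillion total GDP")
    # -> China ~$14.1T, Japan ~$5.0T, South Korea ~$1.5T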
All the same, how much this will ultimately matter depends on how societies handle the aging of their populations. One can picture a scenario in which modern medicine succeeds in alleviating the debilitating effects of getting older, permitting older persons to need less care. One can also picture a scenario in which rising economic productivity more than makes up for the decline of the labor supply and the rise in the dependency ratio (perhaps by lowering the cost of living). In either case, or in one combining the benefits of both, the demographic transition may turn out to be managed easily enough, in China and elsewhere. Yet one can picture less happy scenarios as well--rather easily, I am sorry to say, in light of the disappointments of recent decades on all these scores. But even in that eventuality I would not be too quick to envision the melodramatic collapse scenarios that have been making the rounds of the headlines yet again in recent months.
Tuesday, May 3, 2022
What Ever Became of the Information Age?
You may remember having heard the term "information age"--but it is entirely possible you have only a vague notion of what it meant. This may be because it has been a long time since you heard it last, but also because the term is slippery, having many usages.
Like the terms "atomic age," "jet age," and "space age," "information age" can mean an era in which a revolutionary technology has arrived on the scene--and while "information technology" is not really new (writing, and even spoken language, are describable as "information technologies"), there is no question that the electronic computer and its associated communications systems, in their various forms, represented something different from what came before. And indeed the information age came to pass in this sense.
Like the term "Industrial Age," "Information Age" can also denote a shift in the fundamental conditions of work and consumption. The industrial age saw the decline of the rural, agrarian, peasant way of life as the norm, as a revolutionary, inanimate-energy-powered, machine-based form of mass manufacturing became the predominant condition of our existence (employing a quarter of the American labor force at mid-century, while overwhelmingly accounting for the rise in material output and living standards). Likewise the information age held out the prospect of a great increase in the work effort devoted to, in one way or another, producing, processing and communicating information--as the volume of information being produced, processed and communicated exploded. And this, too, came to pass.
However, the term had other meanings. Of these the most exciting--because it was the one that could really make the term matter in a way meriting talk of A New Age--was the idea that information itself, which has always been substitutable for other economic inputs like land, capital and labor (this was how the Industrial Age happened, after all, with technical know-how enabling the exploitation of new energy sources and the building of machines that eventually substituted for labor on a massive scale), would become so radically substitutable for everything else that we would altogether transcend the smokestack, raw material-processing, secondary sector-centered Industrial Age. Thus, if the supply of some good ran short, information-age INNOVATION! would promptly turn scarcity into abundance, with what was promised for nanotechnology exemplary (the radical new materials like carbon nanotubes that would be stronger and lighter and better than so much else, the molecular-scale assemblers that, working atom by atom, would waste not and leave us wanting not). Increasingly suspending the bad old laws of "the dismal science," this would explode growth even as it liberated growth from reliance on natural resources and the "limits to growth" they imposed, solving the problems of both material scarcity and our impact on the natural environment--socially uplifting and ecological at once. Indeed, thinkers came to speak of literally everything in terms of "information," of our living in a world not of matter and energy but of information that we could manipulate as we do lines of computer code if only we knew how--as they were confident we soon would, down to our own minds and bodies (most notoriously in the mind-uploading visions of Ray Kurzweil and other Singularitarians).
In the process the word "information" itself came to seem fetishistic, magical, not only in the ruminations of so-called pundits mouthing the fashionable notions of the time, but at the level of popular culture--such that in an episode of Seinfeld Jerry's neighbor, the postal worker Newman, wanting to remind Jerry that he was a man not to be trifled with, told him in a rather menacing tone that "when you control the mail, you control information."
The line (which has become an Internet meme) seemed exceedingly contemporary to me at the time--and has since come to seem as distinctly '90s as any line can get, precisely because, as I should hope is obvious to you, the information age in this grander sense never came to pass. Far from our seeing magical feats of productivity-raising, abundance-creating INNOVATION!, productivity growth collapsed--proving a fraction of what it had been in the heyday of the "Old Economy" at which the lionizers of the information age sneered. Meanwhile we were painfully reminded time and again that at our actually existing technological level economic growth remains a slave to the availability and throughput of natural resources, with the cheap commodities of the '90s giving way to exploding commodity prices in the '00s that precipitated a riot-causing food-and-fuel crisis all over the world. If it is indeed the case that the world is all "just information," then to go by where we are in 2022 (a year in which we face another painful reminder of our reliance on natural resources as the war in Ukraine precipitates yet another food-and-fuel crisis) the day when we can manipulate matter like microcode remains far off.
Unsurprisingly the buzzwords of more recent years have been more modest. The term one is more likely to hear now is the "Fourth Industrial Revolution"--the expectation that the advances in automation widely projected will be as transformative as the actually existing information age may plausibly be said to have been--but not some transcendent leap beyond material reality.
I do not know for a fact that a Fourth Industrial Revolution is really at hand--but I do know that, being a rather less radical vision than those nano-assembler-based notions of the '90s, the thought that it may be so bespeaks how even our techno-hype has fallen into line with an era of lowered expectations.
Tuesday, April 26, 2022
Checking in With Emmanuel Todd
Over the years I have often found Emmanuel Todd an original, provocative thinker. In The Final Fall he displayed considerable insight into the weaknesses of the Soviet Union in its later years, enough so as to accurately predict important aspects of its final dissolution (not least, the way the reform process required to redress the country's economic stagnation would unleash centrifugal forces, tearing away first the Warsaw Pact satellites, and then the non-Russian republics).
However, Todd has also proven fairly wide of the mark on a number of important occasions, not least in his analysis of the "fall" of the American empire in After the Empire. In that book he claimed that an ever more deindustrialized and debt- and bubble-reliant U.S. economy would soon be revealed as an Enron-like house of cards, leading observers to downgrade their estimates of America's GNP in a manner comparable to the downgrading of the Soviet GNP at the time of that country's collapse. Moreover, he predicted that the "American fall" he described--a matter of the country's means not only being recognized as smaller than advertised but of America's having to "live within" those smaller means (no longer able to run its colossal trade deficits seemingly consequence-free)--would be brought much nearer than would otherwise be the case not by American action but by Europe's coming into its own. Todd specifically projected Europe coming to include Britain as a full-fledged member of the European project, and embracing Russia as well--the former bringing to the assembly its position as a global financial center, the latter its vast population, natural resources and military assets--with the result the end of European reliance on and subordination to the United States, and the end of the special privileges of "the dollar." Capping this off was Todd's prediction that the end of American predominance would mean the end of neoliberalism, with the world returning to a Keynesian economics that would facilitate growth and development around the world, while Europe's charting a course apart from American neoconservatism would conduce to stability and progress at home and abroad.
Of course, absolutely none of that happened. No such downgrading of the U.S. economy's weight ever occurred. Meanwhile, far from Britain and Russia entering the fold to make Europe the world's indisputable greatest power, Britain exited the European Union entirely (with a closer relationship with the U.S. much on the minds of the advocates of that course), while, in sharp contrast with Todd's expectation, Europe and Russia grew apart rather than closer. And for what it is worth, European elites' connections with the U.S. were if anything affirmed by the 2007-2008 economic crisis (as Adam Tooze notes, it was a trans-Atlantic banking crisis rather than an American one, and only the U.S. had the sheer scale to deliver the bailout), while those same elites proved themselves second to none in their attraction to neoliberalism (even if their publics made the implementation of the program slower than they would have liked), and fairly inclined to neoconservatism (displaying the same kind of interventionism from Mali to Syria and beyond)--all to such a degree that the English-language press stopped sneering at and started praising the continent's governments.
All the same, even when wrong Todd made a sufficiently interesting case to leave us something to think about--rather more so than innumerable Public Intellectuals with infinitely higher profiles. And indeed, as he had at least occasionally got a good deal of mainstream notice, I wondered why we did not hear from him more consistently. I initially supposed that this was a matter of Anglophone insularity, but it seemed he was not terribly present in the French press either--and I was surprised to find an interview with him in Japan's Mainichi Shimbun in which he informed them that
he does not respond to interviews in France, where the media does not permit levelheaded debate. But because Japan is a safety zone for him, he continues, he does interviews for the Japanese media.

I can't say that I'm terribly surprised by his assessment of the French media. However, I suspect that Japan's being a "safety zone" for him is more a function of the "hot buttons" he addresses at home having rather less emotive effect there, while a homegrown counterpart to Todd would probably find his country's media as inhospitable as Todd finds France's.
Monday, April 11, 2022
The Return of Space-Based Solar Power to the Conversation?
Back in the 1970s a great deal was said of the prospect of space-based solar power--of massive arrays of photovoltaic solar panels placed in orbit which would transmit the electricity they generated back down to Earth, with Gerard K. O'Neill famously offering a particularly detailed proposal of the type in that '70s-era space development classic The High Frontier. (The tired sneer of the renewables-bashers is that the sun does not shine all the time. But the sun really does shine all the time in space, permitting a much more consistent and greater output from solar panels situated in orbit than on Earth.)
Of course, no such project ever materialized. There were many reasons for that, among them the unswerving commitment of business to fossil fuels, and of government to business' reading of its interests (a commitment which, to the lament of those concerned about climate change, endures almost unaltered). But there was also the reality that a crucial part of such plans--given the sheer amount of infrastructure that had to be constructed in space--was bringing down the very high cost of space launch. Key to this vision generally, and O'Neill's vision in particular, was the expectation that the space shuttle--which was, as the name indicates, expected to be a true shuttle, with a rapid turnaround time providing very regular Earth-to-orbit transit--would produce a drastic fall in launch costs, with three to four flights a month thought plausible.
Alas, rather than three or four flights a month the shuttles we got could at best manage three or four flights a year--while, as the fates of Challenger and Columbia tragically showed, the risk of a shuttle failing to return safe and sound from a mission was well over one percent. The space shuttle was anything but a "shuttle," and while cost estimates vary greatly, absolutely no one regards it as having cut the price of space launch the way its proponents had hoped. The result was that any attempt to utilize space-based solar power on a significant scale was prohibitively expensive in the circumstances.
Still, the idea never altogether went away, and it has received renewed attention in the wake of a British government proposal to pursue such a project. Plausibly also contributing to this attention are claims by sympathetic analysts that SpaceX has succeeded in achieving markedly lower launch costs (not nearly so low as O'Neill had banked on--five times O'Neill's figure, in fact, at $2,500 or so a pound to low Earth orbit as against the $500 or so O'Neill had in mind--but still a considerable improvement), while photovoltaic solar panels have become an ever cheaper way of generating electricity (indeed, the cheapest ever), as well as thinner and lighter, with all that implies for the possibility of designing lighter, more compact and therefore more cost-effective space-based arrays to cut down on cargo size and launch cost.1
I am generally sympathetic to both space development and renewable energy, but I have to admit my doubts about this particular combination of them--in part because every gain in the efficiency of solar panels that makes electricity production from space-based solar cheaper also makes terrestrially-based solar cheaper, minus the immense launch costs and the difficulties posed by the continued lack of convenient, regular physical access. (Earth orbit is a crowded, dangerous place--and a massive investment in such a project seems problematic at best if we lack the capacity to effect repairs after an accident or a collision with a meteoroid or piece of space debris.) Barring a much more drastic fall in launch costs than even the analysts most sympathetic to SpaceX have claimed, or the advent of the kind of robotics capability that would make humans completely unnecessary to the construction, maintenance and repair of such an infrastructure (or preferably both), it seems to me impractical. Indeed, for the time being the safer course seems to be developing solar power on Earth, with RethinkX's "Clean Energy Super Power" concept a more compelling approach to the problems posed by the intermittency of solar-generated electricity--and deserving of far more attention than it has received to date.
1. O'Neill's 1976 book estimated that the cargo variant of the space shuttle on which he was counting would get cargo up to low Earth orbit at the price of $110 a pound. Adjusted for inflation using the Consumer Price Index $110 in 1976 would be the equivalent of about $520 in 2021.
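Since the footnote's comparison is easy to check, here is a minimal sketch in Python of the arithmetic; the CPI values used are approximate annual averages I am assuming here, not figures from the post:

# Checking the footnote's inflation arithmetic and the launch-cost
# comparison above. The CPI values are approximate annual averages
# (my own assumptions); dollar figures are per pound to low Earth orbit.
CPI_1976 = 56.9   # approx. U.S. CPI-U annual average, 1976
CPI_2021 = 271.0  # approx. U.S. CPI-U annual average, 2021

oneill_1976_cost = 110  # O'Neill's assumed shuttle cargo cost, $/lb to LEO
oneill_2021_cost = oneill_1976_cost * CPI_2021 / CPI_1976
print(f"$110/lb in 1976 is about ${oneill_2021_cost:.0f}/lb in 2021 dollars")
# -> about $524/lb, in line with the footnote's ~$520 figure

spacex_claimed_cost = 2500  # $/lb to LEO, per the sympathetic analysts cited
print(f"Claimed SpaceX cost is ~{spacex_claimed_cost / oneill_2021_cost:.1f}x O'Neill's target")
# -> roughly 5x, matching the comparison in the post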
Saturday, April 9, 2022
The Fortieth Anniversary of the Fifth Generation Computer Systems Initiative--and the Road Ahead for Artificial Intelligence
This month (April 2022) marks the fortieth anniversary of the Japanese government's announcement of the Ministry of International Trade and Industry's (MITI's) Fifth Generation Computer Systems initiative. These computers (said to represent a generation beyond first-generation vacuum tubes, second-generation transistors, third-generation integrated circuits, and fourth-generation Very Large Scale Integrated circuits because of their use of parallel processing and logic programming) were, as Edward Feigenbaum and Pamela McCorduck wrote in Creative Computing, supposed to be "able to converse with humans in natural language and understand speech and pictures," and to "learn, associate, make inferences, make decisions, and otherwise behave in ways we have always considered the exclusive province of human reason."
With this declaration coming from the very institution credited with being the "brains" behind the Japanese economic miracle of the post-war period, just as "Japan, Inc." was approaching the peak of its global prominence and power (not least on the basis of Japan's industrial excellence in the computing field), this claim--which meant nothing short of the long-awaited revolution in artificial intelligence being practically here--was taken very seriously indeed. In fact the U.S. and British governments, under administrations (those of Ronald Reagan and Margaret Thatcher, respectively) hardly fans of MITI-style involvement with private industry, answered Japan's challenge with their own initiatives.
The race was on!
Alas, it proved a race to nowhere. "'Fifth Generation' Became Japan's Lost Generation" sneered the title of a 1992 article in The New York Times, which went so far as to suggest that American computer scientists had cynically overstated the prospects of the Japanese government attaining its stated goal in order to squeeze a bit more research funding out of their own government. While one may argue over the reasons for this, and their implications, the indisputable, bottom-line fact is that computers with the capabilities in question, based on those particular technologies or any others, never happened. Indeed, four decades after the announcement of the initiative--after astronomical increases in computing power, decades of additional study of human and machine intelligence, and the extraordinary opportunities for training such intelligences provided by broadband Internet--we continue to struggle to give computers real functionality along the lines that were supposed to be imminent (the ability to converse in natural language, understand speech and pictures, make human-like decisions, etc.)--so much so that the burst of excitement we saw in the '10s about the possibility that we were "almost there" has already waned amid a great lowering of expectations.
In spite of the briskness of developments in personal computing over the past generation--in the performance, compactness and cheapness of the devices, the speed and ubiquity of Internet service, and the uses to which these capabilities have been put--it can seem that in other ways the field has been stagnant for a long time. The first four generations of computing arrived within the space of four decades, between the 1940s and 1970s. Since the 1970s we have, even while doing remarkable things with the basic technology, remained in the fourth generation for twice as long as it took us to go from the first generation to the fourth. In the face of this discouraging fact one may think we always will be. But I think that goes too far. If in 2022 we remain well short of the target announced in 1982, we do seem to be getting closer, if with what can feel like painful slowness, and I would expect us to go on doing so--though there seems plenty of room for argument about how quickly we will accomplish that.
For whatever it may be worth, my suspicion (based on how neural nets, after disappointing in the '90s, delivered surprising progress in the '10s when married to faster computers and the Internet) is that the crux of the problem is hardware--that our succeeding, or failing, to build sufficiently powerful computers will be the single most important factor in whether we build computers capable of human-like intelligence, because ultimately they must have the capacity to simulate it. This would seem to simplify the issue in some respects, given the steadiness in the growth of computing power over time, but it is, of course, uncertain just how powerful a computer has to be to do the job, while continued progress in the area faces significant hurdles, given the slowness of post-silicon computer architectures to emerge: the development of the carbon-based chips that had looked like the logical successor is running "behind schedule," while more exotic possibilities like quantum computing, if potentially far more revolutionary and looking more dynamic, remain a long way from being ready for either cutting-edge or everyday use. Still, the incentive and resources to keep forging ahead are undeniably there--while it may well be that, after all the prior disappointments, we have less far to go here than we think.
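To make the logic of that "hardware is the crux" hunch concrete, here is a toy sketch in Python; every number in it is an arbitrary assumption for illustration only, supplied by neither this post nor any authority:

import math

# A toy illustration of why the hardware question dominates. If human-like
# AI chiefly awaits sufficient computing power, the wait is set by how far
# away the requirement is and how fast capability doubles. All three
# numbers below are arbitrary assumptions for illustration only.
current_capability = 1e18   # assumed: rough scale of today's largest systems (FLOPS)
required_capability = 1e21  # assumed: hypothetical requirement for the job
doubling_time_years = 2.5   # assumed: effective doubling time as silicon scaling slows

doublings_needed = math.log2(required_capability / current_capability)
wait_years = doublings_needed * doubling_time_years
print(f"{doublings_needed:.0f} doublings -> ~{wait_years:.0f} years on these assumptions")
# -> ~10 doublings, ~25 years. Note the sensitivity: a faster post-silicon
# doubling time halves the wait, while a 1,000x larger requirement adds only
# another ten doublings--which is why the hardware trajectory, more than the
# exact target, is the decisive uncertainty.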
Friday, April 8, 2022
Australia's Nuclear Sub Program: The Global Britain Angle
In considering the Australian decision to acquire nuclear submarines in a deal with Britain and the U.S., my thoughts turned back to Britain's "tilt to the Indo-Pacific"--the British government's decision to refocus its foreign policy, and reorient its military policy, toward the region, in a break with the European emphasis that has prevailed since the 1960s.
Considering that move, one fact of the situation I have repeatedly noted is that Britain's ability to project force into the region is relatively limited, especially as the region becomes more militarized--with Japan acquiring attack carriers, India a nuclear sub fleet, and Australia expanding its old force of diesel subs and frigates/destroyers into something much larger and more ambitious--all reducing the "value" of what Britain can bring from so far away. (Already in the '60s the country's Far East forces, while vastly larger than anything Britain could really afford to station in the area, were inadequate to make staying "east of Suez" worthwhile.)
However, Britain's capacity to provide technology that as yet few others can offer may be a handy supplement to such resources--especially where the technology is so sensitive. Apart from the U.S.' provision of technical support to Britain's own nuclear submarine program, and Russian collaboration with India in the development of its nuclear sub program (which has seen India lease working Russian vessels, in the '80s and again in this century), I cannot think of anything at all comparable with the new deal. Certainly what some have suggested as one possible form the deal may take (given Australia's lack of a nuclear industry)--Australia's purchase of nuclear subs outright, possibly from Britain--simply has no precedent.
It is also no isolated action. Indeed, it may be useful to recall how some proponents of a post-Brexit Britain have suggested stronger ties to the Commonwealth--in this case, a relatively large piece of the Commonwealth in the crucial Indo-Pacific arena--as a replacement for its continental connections, with the sub deal a building block for a broader partnership with Australia that would strengthen Britain's local influence. Such an approach seems the more plausible given that Indo-Pacific-minded Britain has already turned to a collaboration with Japan, if one rather less sensitive and controversial in nature, to produce a sixth-generation fighter.
Meanwhile, even as they strengthen Britain's military connections with nations in East Asia, such deals can be seen as conducing to the strength of the British military-industrial base--which remains a key strategic asset for the country, more important than many appreciate. Like Russia, Britain is a nation that has suffered considerable deindustrialization but is still possessed of a disproportionately large and advanced military-industrial complex. Not least because British policy from Thatcher forward proved ready to sacrifice the country's manufacturing base for the sake of the bigger neoliberal program even as the defense-industrial portion of the sector continued to get government support (with Thatcher herself making a personal lobbying effort to clinch the infamous "deal of the century" with the Saudis back in '88), the complex's political and economic importance is also disproportionate. As the cost and complexity of weaponry continue to grow, exports become only more important as a way of keeping such a base viable--while what remains of Britain's manufacturing is that much more dependent on it.
Selling Australia critical technology--and perhaps even its own versions of the Astute-class submarine--might not balance the country's payments by itself. However, it also does not evoke the derisive laughter that the "tea and biscuits" plan did.
Thursday, April 7, 2022
Nuclear vs. Conventionally-Powered Subs--and the Australian Turn to a Nuclear Submarine Fleet
It appears that most people have misperceptions about non-nuclear subs, and in particular their underwater endurance. This seems partly a reflection of misapprehensions about the history of submarines. Remembering the submarine campaigns of the First and Second World Wars, few realize just how much time those vessels spent on the surface, submerging only when actually on the attack or evading attack themselves--precisely because underwater they had to run on the batteries of that earlier day, and could operate only at much lower speed.1 Submarines were, properly speaking, submersibles--capable of going under the water, with that capability important, but the subsurface not where they spent most of their time. This was one reason why the Allies' extension of their aerial patrols to cover the entirety of the trans-Atlantic convoy routes (and the increasing equipment of those aircraft with radar) was such an important turning point in the Battle of the Atlantic--on the surface the U-boats were not much less detectable than any other surface ship of comparable size and profile. It was also why the advent of the snorkel was important: it let submarines use their diesel engines when just below the surface, permitting some trade-off between stealth and endurance.
Nuclear power plants, however, enabled submarines to operate underwater effectively for as long as their crews and supplies could hold out, running as fast as any other vessel afloat over whatever distances their missions required, and all while carrying a far heavier armament. This made them virtually a requirement for large ballistic missile submarines; for any submarine intended to attack them or protect them from attack; for subs intended to carry large payloads of tactical weapons for any other purpose, like big loads of cruise missiles for anti-ship or land-attack missions; and for subs intended for rapid dispatch to distant regions, whether the open ocean or littorals far from home.
To use that horribly overused and misused term, they were a game-changer.
Of course, impressive as the performance afforded by a nuclear power plant is, it comes with significant downsides. Those plants are not cheap or easy to build, operate, refuel or maintain--and bring all the safety risks so famously dramatized in, for example, Kathryn Bigelow's K-19: The Widowmaker. And the vessels with all the extra capabilities that are the whole point of going in for a nuclear power plant are not cheap either. Even limiting the comparison to attack-type boats, a high-quality diesel submarine like the German Type 212 or Swedish Gotland runs about a half billion dollars--while a Virginia-class boat runs about three billion, six times as much. It is even the case that the quietest diesel-electric boats tend to be quieter, and therefore stealthier, than their nuclear counterparts--while, as if all that were not enough, air-independent propulsion has wrought a great improvement in the underwater endurance of non-nuclear vessels, perhaps to the point of giving conventionally-powered subs trans-oceanic range while submerged (exemplified by the Ocean-class submarine concept).
The result is that a government with purely local security concerns--which wants its subs mainly for coastal defense purposes--or which has a limited budget, has enormous incentive to stick with the simpler, cheaper conventional boats. Indeed, the attractions have been such that those who follow the naval literature have likely seen over the years many analysts make the case for the long all-nuclear U.S. Navy supplementing its forces with such boats for littoral warfare.
What, then, does it mean that Australia has taken the nuclear submarine path?
One may see the matter in terms of the country's fairly singular position, starting with the plain facts of its physical geography. Australia has what may be the world's seventh-longest coastline (about 16,000 miles), and its third-largest Exclusive Economic Zone (some 3.4 million square miles). Moreover, Australia's additional military commitments across Southeast Asia and the South Pacific (troops and planes still rotating through Butterworth Air Field in Malaysia, membership in the Five Power Defence Arrangements tying it in also with Singapore and New Zealand, preparedness for interventions as far afield as East Timor, the Solomon Islands and Fiji, as seen in the past) extend the Australian Defence Force's expected zone of operations considerably beyond that. And Australia undertakes all this with relatively small forces--armed forces of 60,000 recruited from a population of 25 million, with 15,000 of those in the navy--and with comparatively little military back-up furnished by large allies close at hand (in contrast with other countries with small populations and vast areas of concern, like Canada, with its proximity to the U.S., or Norway, with its inclusion within the European NATO space).
Seen from the purely naval perspective that is a lot of "battlespace" to cover, especially with the military resources at hand--one reflection of which is how the combination of unavoidably small forces and a desire for long reach has long played an important part in Australian procurement decisions (the country was the only customer for the F-111 strike aircraft besides the U.S. Air Force). With the region ever more intensely militarized it is unsurprising that the tendency is particularly evident now, with the submarines just one element in a shift to larger, longer-ranged armed forces, navy included (the manning of the Australian Defence Force is to go up by a third to 80,000, and the navy is replacing its small, relatively lightly armed frigates with "frigates" more like cruisers, with long-range cruise missile and anti-ballistic missile capabilities part of the package).
Even if one takes entirely for granted the broader political premises of the course the Australian government is taking (a larger subject than I care to discuss here), this does not in and of itself make the nuclear sub decision the right one. There is the cost-effectiveness issue, and the technical problems are vast--especially when one remembers that, in spite of the expectations of local construction, Australia has no nuclear technology sector. But the point is that this is part of a larger complex of decisionmaking in regard to a profoundly shifting military posture, one that seems to get too little attention in such coverage of the issue as I have seen.
1. I remember a straight-to-video remake of The Land That Time Forgot where a World War I-era German submariner spoke of spending weeks beneath the surface of the sea--something submariners of that time never did.
Wednesday, April 6, 2022
Battleships, Cruisers, Destroyers, Frigates--What's the Difference? And Why Should We Care Anyway in 2022?
The terminology denoting warship types can, at a glance, seem bewildering--in part because old usages have become profoundly muddled over time.
The term "battleship" derives from "line-of-battleship," the vessels topmost in size, protection and armament (i.e. the biggest ships with the thickest armor and biggest guns) and so intended to "stand" in the line of battle during head on fleet clashes like Trafalgar or Jutland, because they could take and give the heaviest of beatings.
Cruisers were different. They were supposed to "cruise" independently, whether scouting for the fleet or commerce raiding. (Indeed, it was once common to refer to submarines as "submarine cruisers," precisely because they were submersible vessels that did the cruiser's job of scouting and commerce raiding.) The premium on mobility meant that while they could be "light," "medium" or "heavy"--or even "battle cruisers" packing an armament that could compete with a battleship's--in each and every case they were less well-armored, relying on superior speed and agility to chase down their prey or escape pursuers rather than on their ability to endure punishment.
The term "destroyer" derives from "torpedo boat destroyer." As the name implies these were intended to fend off attacks against a fleet by those smaller vessels, which could most certainly not stand up to a battleship in a fight, but which nonetheless threatened even the biggest ships with the torpedoes they carried. Of course, torpedo boats proved to be only the beginning in that respect, with those "submersible cruisers" and the advent of aircraft presenting comparable threats to naval and civilian vessels--and, in the process, becoming what destroyers were more likely to be fending off in action.
Finally the term "frigate" was originally used in the "age of sail" to refer to swift, agile vessels too small for the line of battle. Fairly general, it had fallen out of favor in the age of mechanical, fossil fuel-driven fleets, and one might add, their more precise classification. However, the Second World War, with its submarine and other attacks, and the emergence of convoys in response to them, saw a vastly increased need for ships fulfilling the destroyer's protective function--to such a degree that there was call for smaller (cheaper and more easily and quickly produced) vessels that did the job. In American usage at the time these were "destroyer escorts," but the British term "frigate" has since become more commonplace.
Increasingly over the course of the Second World War and the Cold War--with the carrier supplanting the battleship in the central role in naval warfare; missiles supplanting guns in shipborne and aerial armament; physical armor decreasingly utilized as a way of protecting vessels; and aerial, electronic and even space surveillance coming to the fore--the old combinations made less sense. Given their vulnerability and the limited reach of their guns, armored, big-gun battleships increasingly seemed pointless aside from a very few, limited uses which few navies could justify keeping them for, with the remaining examples curiosities (like America's battleships, utilized primarily for shore bombardment rather than fighting other ships). Big surface cruisers made less sense as a way of performing the commerce raiding and scouting missions, and so, especially as time wore on, such anomalies as the Soviet Union's Kirov-class vessels aside, "cruisers" were really just big destroyers or frigates. Indeed, the usage of one term or another became mainly an indicator of size and armament--with cruisers particularly large and heavily armed, frigates representing the low end of the spectrum in that regard, and destroyers somewhere in the middle.
As all this happened ships of all types got bigger, with destroyers and even frigates becoming cruiser-sized vessels, while designated cruisers became something of a rare anomaly, in part because they seemed superfluous. The U.S. Navy has been an obvious example. The Ticonderoga-class cruisers (21 of which still serve) were not much bigger than the Arleigh Burke-class destroyers that came along by the late 1980s (9,600 versus 8,000 long tons), with the latest iteration of the Arleigh Burke class about the same size as the Ticos (9,500 tons); the Ticos are considerably outmassed by the Zumwalt-class destroyers (15,600 tons); and the DDG(X) vessels are likewise to outmass them in their turn (10,000+ tons).
And so it goes even with frigates. Australia's new Hunter-class "frigates," with their 8,800-ton displacement, 7,000-mile cruising range and AEGIS combat systems (with anti-ballistic missile capability), look not unlike the "Ticos" in size, reach and function--the more so if one compares them with the preceding ANZAC class (3,600-ton vessels, closer to the stereotype of the frigate), or even the country's relatively new Hobart-class destroyers (which displace 6,900 tons).
As the Australian example should make clear, a country's replacing even a frigate with "another frigate" may well mean that what it is really buying is a cruiser--and a much greater increase in capability. That increase is less obviously dramatic than the country's shift to a nuclear submarine fleet and Tomahawk cruise missiles, but it is still significantly indicative of aspirations to a far more formidable naval position, with all that implies for the country's connections with the U.S. and Britain, the general balance of power in what it is increasingly fashionable to call the "Indo-Pacific," and the sharp acceleration of the already years-long trend of increased militarization and global rearmament in the wake of the war in Ukraine.
The term "battleship" derives from "line-of-battleship," the vessels topmost in size, protection and armament (i.e. the biggest ships with the thickest armor and biggest guns) and so intended to "stand" in the line of battle during head on fleet clashes like Trafalgar or Jutland, because they could take and give the heaviest of beatings.
Cruisers were different. They were supposed to "cruise" independently, whether scouting for the fleet, or commerce raiding. (Indeed, it was once common to refer to submarines as "submarine cruisers," precisely because they were submersible vessels that did the cruiser's job of scouting and commerce raiding.) The premium on mobility meant that while they could be "light," "medium" or "heavy," or even "battle cruisers"--packing an armament that could compete with a battleship, but in each and every case less well-armored, and reliant on superior speed and agility to chase down their prey or escape pursuers rather than their ability to endure punishment.
The term destroyer derives from "torpedo boat" destroyer. As the name implies these were intended to fend off attacks against a fleet by the smaller vessels, which could most certainly not stand up to a battleship in a fight, but which nonetheless threatened even the biggest ships with the torpedoes they carried. Of course, torpedo boats only proved to be the beginning in that respect, with those "submersible cruisers" and the advent of aircraft translating to comparable threats to naval and civilian vessels, and in the process, what destroyers were more likely to be fending off in action.
Finally the term "frigate" was originally used in the "age of sail" to refer to swift, agile vessels too small for the line of battle. Fairly general, it had fallen out of favor in the age of mechanical, fossil fuel-driven fleets, and one might add, their more precise classification. However, the Second World War, with its submarine and other attacks, and the emergence of convoys in response to them, saw a vastly increased need for ships fulfilling the destroyer's protective function--to such a degree that there was call for smaller (cheaper and more easily and quickly produced) vessels that did the job. In American usage at the time these were "destroyer escorts," but the British term "frigate" has since become more commonplace.
Increasingly over the course of the Second World War and Cold War, with the carrier assuming the central role in naval warfare over the battleship; missiles supplanting guns in shipborne and aerial armament; with physical armor decreasingly utilized as a way of protecting vessels; and aerial, electronic and even space surveillance coming to the fore; the old combination made less sense. Given their vulnerability and the limited reach of their guns armored, big-gun battleships increasingly increasingly seemed pointless aside from a very few, limited uses for which few navies could not justify keeping them, with the few remaining examples curiosities (like America's battleships, utilized primarily for shore bombardment rather than fighting other ships). Big surface cruisers made less sense as a way of performing the commerce raiding and scouting missions, and so, especially as time wore on, such anomalies as the Soviet Union's Kirov-class vessels aside, "cruisers" were really just big destroyers or frigates. Indeed, the usage of one term or another mainly an indicator of size and armament--with cruisers particularly large and heavily armed, frigates representing the low end of the spectrum in that regard, and destroyers somewhere in the middle.
As all this happened ships of all types got bigger, with destroyers and even frigates becoming cruiser-sized vessels, while designated cruisers became something of a rare anomaly, in part because they seemed superfluous. Certainly the U.S. Navy has been an obvious example. The Ticonderoga-class cruisers (21 of which still serve) were not much bigger than the Arleigh Burke-class destroyers that came along by the late 1980s (9,600 to 8,000 long tons), with the latest edition of the Arleigh Burke class about the same size (9,500); and the Ticos considerably outmassed by the Zumwalt-class destroyers (15,600 tons), with the DDG(X) vessels likewise to also outmass them in their turn (10,000 tons+).
And thus does it even go with frigates. Australia's new Hunter-class "frigates," with their 8,800-ton displacement, 7,000-mile cruising range and AEGIS combat systems (with anti-ballistic missile capability), look not unlike the "Ticos" in size, reach and function--the more so if one compares them with the preceding ANZAC class of frigate (3,600-tonner vessels, closer to the stereotype of such vessels), or even the country's relatively new Hobart-class destroyers (which displace 6,900 tons).
As the Australian example should make clear a country's replacing even a frigate with "another frigate" may well indicate that what the country in question is really buying is a cruiser--and that much greater an increase in capability, less obviously dramatic than the country's shift to a nuclear submarine fleet and Tomahawk cruise missiles, but still significantly indicative of aspirations to a far more formidable naval position, with all it implies for its connections with the U.S. and Britain, the general balance of power in what it is increasingly fashionable to call the "Indo-Pacific," and the sharp acceleration of the already years-long trend of increased militarization and global rearmament in the wake of the war in Ukraine.