About a decade ago I was coming around to the conclusion that, certainly as judged by the expectations of the '90s and the unceasing hype inflicted on us by the press, the actual rate of technological change has been wildly overhyped.
I have, of course, found that not only is this not a popular opinion, but that people tend to jump right down your throat when you express it. And sure enough, right after expressing exactly that opinion on a radio show I received an e-mail contesting it. As I had pointed to virtual reality as an example of how technological developments were falling far short of commonly held expectations, they did not fail to dispute that particular point, insisting that all the popular video games "these days" were based on "virtual reality."
Equating virtual reality as I did with what we were promised back in the '90s--full bodily immersion in a more or less convincing virtual world with which we can interact haptically--I thought this was an extreme lowering of the bar on every count, so much so as to leave us not talking about the subject at all. Later, however, I encountered the arguments of economist Edward Castronova, who in books like Synthetic Worlds distinguished between the "hardware" and the "software" of virtual reality. The hardware is the equipment by way of which we experience a virtual reality, like the gear that would enable that bodily, haptic immersion in a convincing world; the software is what renders the world in which we would be so persuasively immersed. Castronova admitted that the hardware had been oversold--but argued that the software, the creation of artificial worlds which people experience, had most certainly arrived in such forms as Massively Multiplayer Online Role-Playing Games (MMORPGs). As he acknowledged, people experienced them through a small two-dimensional audiovisual display, but already millions happily spent tens of hours a week there, and in the process became quite deeply involved in their avatars and their doings.
As of early 2021 it seems we have had another boom and bust where the hardware is concerned. The technology has certainly become more developed and more accessible, and it has been making a bigger splash in gaming than its '90s-era equivalents ever did. Yet it has fallen short of the everyone-will-own-several-of-them ubiquity supposed to have been imminent for the last five, six, seven years. And it is not hard to see why. Inadequate technical performance in areas like resolution and frame rate, "clunky" hardware, and the high cost of the requisite gear make for a prohibitive combination indeed. And so the technology's affording the kind of physical immersion that was a standard part of the vision a generation ago--certainly in a way that would make it appealing and accessible to a really wide audience--seems at the very least some years off, awaiting, among other things, still lighter, faster, cheaper computers.
Yet there is no question that the software of virtual reality has only gone from strength to strength. When Castronova wrote Synthetic Worlds individual MMORPGs had user bases in the hundreds of thousands. Now single games of the type have had hundreds of millions of accounts opened (Runescape the record-holder here, with a quarter of a billion), and many have millions of active users at once. It might be added that much of what was once unique to MMORPGs--customizable characters interacting in a shared online world--has expanded far beyond the mostly high fantasy-oriented role-playing game to become a standard feature of gaming from first-person shooters to puzzlers, considerably lengthening the list of users in the process. (It has also helped that the hardware needed to access them in the more modest fashion we take for granted has gotten cheaper and more convenient. Back when Castronova was writing, MMORPGs were still something people accessed primarily via desktops. Now they take the 2-D experience of those worlds wherever they go, on the laptops, tablets and smartphones they consider bare necessities, through cityscapes where everyplace anyone goes is expected to afford satisfactory Wi-Fi.)
Indeed, it does not seem for nothing that, realizing early on just how consequential virtual reality software running even on primitive hardware could be, Castronova warned that we would see an "exodus" to the virtual world from this one, driving those concerned with the welfare of this world to make reality more palatable. This is not to say that I think such an exodus--or even the fear of such an exodus--prompting reform to make this world a better place, has ever been very plausible. So long as humans have bodies out here in the offline world, with their requirements for food, protection from the elements (e.g. shelter, clothing) and the rest, and the necessity of paying for the gadgets, electricity, Internet connections and the rest without which they cannot have online fun at all, there can be no exodus, really. Instead people have to submit to the demands of this world on whatever terms it offers--and the terms have been harsh, with one "worst since the '30s" economic catastrophe piling atop another, the price of everything but one's own labor seeming to go up endlessly, and the word "millennial" equated with a revolution of falling expectations--with, indeed, life expectancy in America falling for years, even before the pandemic.
Consequently, rather than an exodus to wonderful worlds online, or a happier reality offline, what we have got is a rising tension between the allure of the online and the unavoidability of an ever-less rewarding offline existence. Economic necessity prevails in the end--and will so long as the bills have to be paid. But as anyone who gets out much knows only too well, a good many people steal every moment they can from the onerous duties of offline life for online experience--relying on their devices to make their commutes tolerable, and looking at their screens in class, or even on the job, whenever they can get away with it.
Thursday, June 10, 2021
Wednesday, December 16, 2020
RethinkX on Energy: An overview of the Rethinking Energy 2020-2030 Disruption Report
A favorite argument of the detractors of renewable energy-based electricity production is the intermittency of the sources--the fact that the sun does not always shine, and the wind does not always blow. They also point to the impracticality of storing solar and wind-generated electricity on any significant scale, given the sheer cost. And they hold on the basis of these facts that any attempt to meet a significant portion of need from those sources results in there being either a surplus of electricity when it may be unwanted, or a scarcity of electricity when it is needed. The latter problem, more obvious than the former, means that really large-scale use, and certainly the 100 percent reliance the optimists talk about, requires really massive redundancy in generating capacity--to meet 100 percent of our need we must be able to produce much more than 100 percent, with the excess going to waste. Driving down efficiency and driving up costs, this makes any such scheme so profligate and so costly that only eco-besotted fools would waste a moment's time on it, the detractors tell us, while also assuring us that this is virtually certain to remain the case throughout the foreseeable future.
Of course, this argument (like just about all the detractors' old standards) has been crumbling for a good, long while. The falling cost of renewable-generated electricity--its becoming competitive with, and then increasingly cheaper than, such longtime electricity-production mainstays as coal and nuclear, and even natural gas, and all that on a purely "market" basis (which is to say, even without taking into account the subsidies those sources receive and the externalities they cause)--makes the economics look less forbidding than before. Helping, too, is the quite obvious approach of compensating for the intermittency of renewables with strategic combination. (The sun does not always shine, and the wind does not always blow--but not always at the same time, so that having solar and wind working together is at least a partial solution.) And on top of that, battery storage prices have been falling at rates comparable to those of the production of the electricity itself, lowering the cost of storing electricity not immediately used, so that there is less need for redundancy.
The result has been that at the very least a considerable enlargement of our renewables use looks increasingly practical in the immediate term (as the shift of investment toward it reflects), and the path to the 100 percent renewables-based electricity goal, if not perfectly clear, looks at least considerably less fantastical.
The RethinkX think tank, however, has gone not a step, but a giant leap, beyond that, in Adam Dorr and Tony Seba's Rethinking Energy 2020-2030 report, looking at what has for so long been dismissed as a deal-breaking liability--the fact that to meet 100 percent of our electricity needs with renewables we would need a level of capacity generating a great surplus above that level--as instead an epoch-making opportunity. Simply put, in pursuing the 100 percent renewables goal we would not only have the energy we need at far less cost to the physical environment, but in producing the "excess" of energy generate not "waste," but rather an abundance they term "Clean Energy Super Power." In this they see a basis for accomplishing with energy--and clean energy at that--what the digital age has accomplished with information storage and transmission, dropping its marginal cost to nearly zero.
How will this work? The claim warrants some unpacking, the more so as Dorr and Seba spend relatively little of their report discussing it (and in fact relegate their answer to one of what seemed to me the most important possible objections to an endnote rather than treating it in the main text). Simply put, not only does meeting 100 percent of our energy needs with renewables require the capacity to generate a multiple of those needs, but the multiple grows with the scale of the system. (As they crunch the numbers, a renewables-based system meeting 100 percent of our electricity needs would generate the equivalent again in Super Power, and merely expanding the capacity another twenty percent would double or even triple the quantity of Super Power.) The result is that the margin between the consumption the system is designed to meet, and what it makes available, is always widening, not shrinking.
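To make the arithmetic concrete--and to be clear, what follows is my own back-of-the-envelope sketch rather than the report's model, with the generation figures invented for illustration--consider how the surplus responds to a capacity expansion:

```python
# A back-of-the-envelope sketch of the Super Power arithmetic (mine, not the
# report's model; G, the annual generation of a system sized to cover 100
# percent of demand, is an assumed illustrative figure).
D = 1.0                   # annual demand, normalized
for G in (2.0, 1.2):      # hypothetical annual output of the sized system
    old_surplus = G - D               # "Super Power" before expansion
    new_surplus = 1.2 * G - D         # after a 20 percent capacity expansion
    print(f"G = {G:.1f}x demand: surplus grows {new_surplus / old_surplus:.1f}x")
# G = 2.0x demand: surplus grows 1.4x
# G = 1.2x demand: surplus grows 2.2x
```

The point of the toy numbers is that because the added generation lands almost entirely on top of already-met demand, the surplus grows much faster than the capacity does--the mechanism, as I read it, behind the doubling and tripling the authors describe.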
Of course, more than a difference of perspective is involved in anything like this becoming practical in the next decade. It has to be economically feasible to build all that capacity--and indeed, to build it, even counting the investment that produces all the extra, more cheaply than the alternatives. By way of a number of case studies subject to deliberately pessimistic assumptions, Dorr and Seba argue precisely that. They specifically consider the feasibility of a 100 percent photovoltaic Solar, onshore Wind and lithium-ion Battery (SWB)-based grid in three diverse localities (sun- and wind-rich Texas, sun-rich but less windy California, and sun- and wind-deprived New England), in a context of no electricity imports, no conventional operating reserves, no distributed generation or storage, no assists from electric vehicles, no peak demand-lowering mechanisms (demand response, load shifting, energy arbitrage and peak shaving), and no financial innovations or government supports (subsidies, carbon taxes). They also assume no breakthroughs in energy production, storage or transmission of any kind--merely the continuation of the long-observed price drops in the technologies on which they concentrate (SWB) for just a few more years, and even then allowing for a slowing of progress (for the lot, a 75 percent price drop over the next decade, versus 85 percent in the past decade). This portion of their argument, comprising about half the length of the report's main text, demonstrates the adequacy of such a system in even the most pessimistic (New England) case, as well as the swiftness with which capacity expansion yields more Super Power.
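For readers who want to see the shape of such a feasibility check, here is a deliberately crude toy version--emphatically not the report's methodology, with every profile and system size invented for illustration--of an hourly, no-imports, no-reserves adequacy test:

```python
# A toy version of an hourly SWB adequacy check (all profiles and sizes are
# invented for illustration; the report's actual methodology is far richer).
import numpy as np

rng = np.random.default_rng(0)
HOURS = 8760
hour_of_day = np.arange(HOURS) % 24

# Synthetic capacity factors: a daylight bell for solar, noisy wind.
solar_cf = np.clip(np.sin((hour_of_day - 6) / 12 * np.pi), 0, None)
wind_cf = np.clip(rng.normal(0.35, 0.15, HOURS), 0, 1)
demand = 1.0 + 0.3 * np.sin(hour_of_day / 24 * 2 * np.pi)  # GW, toy shape

def simulate(solar_gw, wind_gw, battery_gwh):
    """Greedy dispatch: renewables serve load first, leftover generation
    charges the battery, the battery covers shortfalls, the rest is surplus."""
    soc = battery_gwh                 # battery state of charge, start full
    unmet = surplus = 0.0
    for h in range(HOURS):
        gen = solar_gw * solar_cf[h] + wind_gw * wind_cf[h]
        net = gen - demand[h]
        if net >= 0:
            charge = min(net, battery_gwh - soc)
            soc += charge
            surplus += net - charge   # the would-be "Super Power"
        else:
            draw = min(-net, soc)
            soc -= draw
            unmet += -net - draw      # demand the system failed to serve
    return unmet, surplus

unmet, surplus = simulate(solar_gw=2.5, wind_gw=2.0, battery_gwh=12.0)
print(f"unmet: {unmet:.0f} GWh/yr, surplus: {surplus:.0f} GWh/yr")
```

Even in so stripped-down a form, the structure of the argument is visible: size the system until unmet demand reaches zero in the worst case, and the surplus column becomes the Super Power.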
As the think tank's prior report made clear, they anticipate that, along with those of information and energy, the resource-intensiveness and price of food, transport and materials will drop by an order of magnitude or more in the coming decade--more produced with less in all these other areas. (The aforementioned endnote, in fact, refers to the way technological advances in other areas will ephemeralize production, preventing any Jevons Paradox-type rebound from soaking up all the extra energy produced--frequently not in spite of but because of the electrification of road transport and of industrial processes like metal smelting that they anticipate, and the energy needs of new projects like carbon removal.)
Indeed, with the relevant technologies already almost all the way to the end point they describe (solar's capital costs have dropped 99.9 percent since the 1970s, and the further drop Dorr and Seba project would merely extend that to about 99.97 percent--that is, to 0.03 percent of the old price--for what is already the cheapest source of power), and any really large-scale program launched even now bound to run for years and thus quite easily reap substantial benefits from the projected price drops, the authors argue for the building of 100 percent renewables-based electric capacity not as some theoretical, long-term project, but as an endeavor to be mounted immediately. They also hold that there is not only little to be gained from delaying, but much to be lost from doing so, beyond the obvious ecological costs. As noted previously, the cheapening of renewable-produced electricity has already made investment in fossil fuels and nuclear unattractive--and the continuation of the trend they anticipate would mean not only that building new fossil fuel or nuclear capacity would be a money-loser, but that soon merely operating existing plant would be costlier than shutting it down and replacing it with SWB. (Dorr and Seba, in fact, anticipate the oil and gas sectors suffering by the mid-2020s the same kind of disruption that coal has already suffered.) As they also note, any locality that achieves Clean Energy Super Power will have a vast advantage over any locality that does not as a place to do business, given lower production costs that will come quite organically, in contrast with the subsidies states and cities presently hand to big business. (Offering the example of the Volkswagen Golf, the authors point out that building such a car would be $2,000 cheaper per vehicle in an area where Super Power is available.)
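The percentage arithmetic here is easy to garble, so it may be worth spelling out how the two drops compose (the 99.9 and 75 percent figures are the report's as described above; the composition is my own check):

```python
# Composing the historical drop with the projected one: a 99.9 percent drop
# leaves 0.1 percent of the 1970s price, and a further 75 percent drop
# leaves a quarter of that.
remaining = (1 - 0.999) * (1 - 0.75)
print(f"{remaining:.5f} of the original price, i.e. a {1 - remaining:.3%} drop")
# 0.00025 of the original price, i.e. a 99.975% drop
```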
I have to admit that after reading all this I found myself left with a good many questions. Where per-kilowatt-hour prices are concerned the authors are very persuasive, but they say less about other issues, like the required land use. My own readings on the subject have given me the impression that renewables-bashers exaggerate the problem. Still, some treatment of the issue would have been welcome, the more so as it is one thing to picture vast, sun- and wind-rich Texas meeting its needs on the basis they describe, another to visualize far more densely peopled and less sun- and wind-rich New England doing the same on a purely local basis. (I also saw no case made regarding the availability of the needed material inputs. Again, my experience is that renewables-bashers seize on alleged limitations in order to "debunk" visions of larger-scale renewable energy use, but the report would have been stronger had it addressed this matter, not least because the issue is not simply whether one or another part of the U.S. alone could do this, but whether everyone could do this, given the global market in such materials, and the fact that, were this course as desirable as they say, everyone would be following it.)
Getting away from the basic issue of the feasibility of 100 percent SWB-based electricity to the still more transformative vision of Clean Energy Super Power, I find myself skeptical of the analogy between electrical power production and the Internet, and the way the logic of the latter's development shifted Internet Service Providers to the current pricing model--such that it seems, at the least, an area for further exploration. The strongest doubt I can verbalize concerns the question of ephemeralization they raise, which is asserted rather than argued. One may counter by pointing to RethinkX's prior publications on food and transport (which show how those sectors might achieve a good deal in this respect by themselves), but that, too, shows an important limitation. The hugely important remaining area of materials--which supplies our housing, clothing, infrastructure, vehicles and all the machinery enabling all that cheaper information, energy, food and the rest--is one about which RethinkX, to my knowledge, has previously said little, and they make no addition to that here.
Still, if the report falls short of finally settling every last one of its more radical claims, that does not in the slightest detract from those claims it grounds in quite robust, even formidable, fashion. Indeed, its analysis of the history of pricing, local electricity demand, and SWB-based electricity generation potential in a variety of environments lends great credence to the argument that, if only on a pure economic cost basis, there are ample grounds for a far, far more ambitious effort in this area than has been seriously discussed by any presiding government--up to the "100 percent SWB electricity" goal. Accordingly, anyone concerned with energy markets and economic developments--to say nothing of climate change and Green New Deals--would do well to attend carefully to the argument Dorr and Seba make. Meanwhile, I will be looking forward eagerly to the continuation by them and their colleagues of what is easily one of the most intriguing (and, we may yet find, important) lines of thought on the futurological scene today.
Tuesday, December 15, 2020
THE NEOLIBERAL AGE IN AMERICA: FROM CARTER TO TRUMP
As we enter 2020 it seems as if the country's politics are undergoing nothing less than a tectonic shift—one result of which is that the word "neoliberalism" has passed out of the usage of academics into general parlance. Those trying to make sense of it all find the market flooded with public affairs books—but most are longer on political hacks' rants than substance, or too busy telling colorful stories to offer answers to such obvious and essential questions as
•Just what is neoliberalism anyway? (And why is there so much confusion about it?)
•What did the Reagan administration actually do, and what were the results?
•What was the policy of the Clinton administration, and did it justify its characterization by critics as neoliberal? (Ditto Obama.)
•What was the country's economic record before and after "the neoliberal turn?"
However, THE NEOLIBERAL AGE IN AMERICA: FROM CARTER TO TRUMP systematically examines Federal policy from the 1970s through the Presidencies of Carter, Reagan, the two Bushes, Clinton and Obama, emphasizing specifics and hard data to offer a picture of just what happened in these years as a matter of practical policy, and its consequences—answering these questions and more as we confront this era of crisis, and what may be a historic election this upcoming November.
Available in ebook and paperback formats at Amazon and other retailers.
Get your copy today!
Imagining the Twenty-First Century: A Glance Back at the Expectations of the Neoliberal Heyday
In recent years I have devoted more attention to the subject of neoliberalism--defining the concept, examining its practical enactment (particularly in the U.S. and Britain), and considering the resulting economic record from such standpoints as world and national economic growth, and industrialization.
By and large what we see in the world today is profound disappointment in the idea and its application--sufficiently so that any candidate standing for election publicly owns to neoliberal sympathies and intentions at their peril, with the right turning more nationalist, and the center-left taking the other approach of denying it has ever had anything to do with any such thing (even as both right and center-left overwhelmingly remain adherents of neoliberal practice).
In the course of considering the policy record, and the track record with respect to economic growth, I find myself reminded time and again that the late 1990s were the moment when, at least in the Western world and especially in the English-speaking world, neoliberals were most confident of their promises actually coming true, and the societal mainstream most ready to believe them--amid an economic boom that at least appeared to be based on a surge of technological innovation consumers experienced in their own lives, and were assured was some wonderful new normal.
There were, of course, different ways of thinking about these matters, not all of them totally unfounded. Consider, for instance, the theory of Kondratiev waves, or K-waves for short, which posits a 40 to 60 year cycle of upswing and downswing in the world economy. K-wave theorists commonly see a wave beginning some time in the 1940s, with the economy booming for a generation, and clearly slowing down no later than the early 1970s. Afterward, in line with the theory, came a long period of slow growth, quite in evidence amid worldwide recession in the early 1990s, as much of the Third World struggled with post-debt crisis austerity, Japan slipped into a "lost decade," the "reform" of the Soviet bloc instead produced its collapse, and deindustrialization and stagnation characterized the Western performance. With growth proceeding at a faster clip in the later part of the decade, it seemed quite plausible that the downswing was over, that the upswing due by then had arrived. If one took 1995, say, as the start of the next wave, and the next half decade as representative of it, then an adherent of the theory had grounds for expecting twenty to thirty years of rapid growth--boom times continuing, maybe even getting better still, until 2015, 2020, even 2025.
All this may have seemed the more plausible because this was (more or less) what happened in the prior, post-World War II boom--the 4 percent a year growth clocked in the latter half of the '90s the norm through the '50s, '60s and early '70s. To give one example of what this would have been like: had the U.S. sustained its late '90s-era GDP growth, the country's GDP circa 2020 would have been in the vicinity of $33 trillion, or $100,000 per capita (in today's dollars).
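The compound-growth arithmetic behind those figures is simple enough to check (the 4 percent rate and the $33 trillion and per-capita figures come from the text above; the round population number is my own assumption):

```python
# Sanity-checking the counterfactual's internal arithmetic.
growth_factor = 1.04 ** 21            # ~4 percent a year, 1999 through 2020
print(f"21 years at 4%: a {growth_factor:.2f}x expansion")      # ~2.28x
print(f"$33 trillion over ~330 million people: ${33e12 / 330e6:,.0f} per capita")
# -> $100,000 per capita, matching the figure above
```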
Moreover, there was some expectation of fairly wide sharing of such gains with little need for public intervention, not least via the personal asset portfolios that many middle-class people were quitting perfectly good day jobs to manage full-time. Representative of the sensibility was the cover of the July 5, 1999 issue of Newsweek, which featured an old-fashioned comic strip-style drawing of a distressed man exclaiming "Everyone's getting rich but me!"--a thought the cover patronizingly blew off as "the Whine of '99." Inside the magazine Adam Bryant's tauntingly titled article "They're Rich (And You're Not)" proceeded, at greater length, to taunt the unfortunates among its readership suffering from what it assured them was the "reality" of "everyone else getting rich."
Of course, contrary to what the idiots who published this mean-spirited garbage would have had their credulous readers believe, the reality was that not everyone was getting rich--very, very far from it. Still, while the '70s, '80s and early '90s were not great for working people, and even the late '90s were rather less good to them than to billionaires, the period saw workers (going by official U.S. Census data on median income, at least) recover the ground they had lost in the recessions of those earlier eras. Had the growth continued, it is quite plausible they would have seen the first real increases in personal income since the end of the post-war boom a generation earlier. And if the gains of the '90s were far from equally distributed round the world, it seemed plausible that other parts of the planet would enjoy similarly accelerated gains, with other industrialized nations following America's lead, and developing nations striving to catch up.
Meanwhile there were those who saw the prospects as even brighter than that, on the basis of the technological possibilities at hand being more fundamentally epochal. Here it seems worth revisiting the prognostications of Ray Kurzweil. In 1999, at the boom's height, he published his book The Age of Spiritual Machines, one chapter of which presents a long list of rather precise technological forecasts, a major theme of which was bullishness about the progress being made with neural networks and the pattern-recognition software based on them, and about the pace of improvement in computer hardware, particularly 3-D computer chips. The result was to be a much more rapid advance in areas like artificial intelligence, and new processes and products like virtual reality and (substantially) self-driving cars, with commensurately dazzling macroeconomic consequences--starting with "the ten years leading up to 2009" being a period of "continuous economic expansion and prosperity," the more amazing for its going hand in hand with price deflation as the cost of making things fell. (In contrast with the stagflation of the '70s, this bizarre, unprecedented combination of boom times and falling prices would be, for most, a rather happy opposite.) And things would only go onward and upward from there as technological progress accelerated on the way to a "technological Singularity" that before century's end rendered our old frame of reference meaningless.
Admittedly, this would not have seemed a wholly positive vision to everyone. For example, a world where countries have a GDP two-thirds or more higher may seem daunting to the ecologically conscious. However, the increased productivity would, arguably, have seen not only the substitution of "information" for labor, but information replacing natural resource consumption as well. Indeed, many of the technologies Kurzweil specifically discussed, like carbon nanotubes and self-driving cars, could have been powerful contributors to a more sustainable world economy, while higher incomes would have meant more scope for action to save the environment, whether one thinks in terms of the research and development of new energy technologies, or of public action on the more urgent problems as swift growth translated to budget surpluses and falling debt-to-GDP ratios without painful budget cuts or tax rises. It even seems far from inconceivable that all of this could have translated to a more tolerant society, with social stresses assuaged by an experience of plenty. (It is certainly worth remembering that the civil rights movement won its victories in the boom years of the '60s, and that what the right derides as "political correctness" had its heyday in the boom years of the '90s.)
Of course, things did not go as the optimists seem to have hoped. Looking at Kurzweil especially, it seems worth noting that much of the technological advance he predicted for 2009 remained unachieved not only in 2009 but in 2019--the things Kurzweil talked about (the neural nets and 3-D chips and virtual reality and self-driving cars and the rest) once again deferred indefinitely into the future. Indeed, analysts increasingly conceded that the burst of productivity growth evident in the late '90s petered out after a few years, without ever matching the swift gains of the Fordism-dominated mid-century period, and gave way to even slower growth than before.
One can hardly picture much economic growth in a period of feeble technological progress, and indeed, the results were all too predictable. The tech boom proved a tech bubble mere months after the paperback edition of Kurzweil's book hit the stores, amid an exposure of corporate fraud and accounting scandals, Wall Street's great bull run (1982-2000) clearly over. Price deflation? Quite the contrary: rising energy prices translated to a renewal of inflation, severe enough to cause food riots in the Third World well before oil hit $150 a barrel. And those surging prices were broken only by another, bigger financial crisis, the worst since the '30s--from which, it seems safe to say, neoliberalism, globalization and the prevailing political culture did not recover, stagnation, unemployment and anger only worsening on the way to the next, still worse "worst since the '30s" in 2020. Indeed, by the latter date U.S. GDP was about forty percent lower than it would have been had tech boom-style growth continued through those first two decades of the century--a shortfall, measured against the alternative scenario, equivalent to the fall in output between 1929 and the Depression's rock bottom in 1932.
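As a rough check on that "about forty percent" figure (the growth rates below are my own round-number assumptions for illustration, not data; only the forty percent figure comes from the text):

```python
# The gap between actual and counterfactual growth compounds quickly.
actual = 1.015 ** 20          # assume ~1.5 percent a year actual growth
boom = 1.04 ** 20             # tech boom-style ~4 percent a year
print(f"GDP ends up {1 - actual / boom:.0%} below the boom trajectory")
# -> ~39%, in the neighborhood of the text's "about forty percent"
```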
Meanwhile the slight gains were concentrated at the top, and the costs went the opposite way, swelling the largest fortunes to heights of which the Gilded Age robber barons could only have dreamed as far, far more people saw themselves fall further and further behind, not least because the public sector was so badly squeezed. Instead of budget surpluses and a falling Federal debt-to-GDP ratio such as might have been hoped for, what the era saw instead was trillion-dollar deficits, and a doubling of the debt-to-GDP ratio, taking it back up to World War II-era levels. The ecological stress we have experienced and its resulting anxiety hardly need enlargement here--while the same goes for the decay of such gains in civility and tolerance as the past two generations had seen.
In short, the neoliberal utopia we were promised was an illusion--in which, it must be admitted, many never believed. To the left--the genuine, economics-minded left, at least--what happened instead was not really a surprise, simply confirmation of what they had been arguing all along. Even many old-style liberals, unconvinced by the neoliberal arguments and quick to point to their failures from the very beginning, could not be surprised. At the same time the nationalist right was never on board either, and likewise unsurprised. It might be added that the broader public was, at the very least, less persuaded by the talk than the more genuinely privileged groups. But of course the mainstream did not include very much input from them--the media, for example, seeming to speak here with only one voice, so long and so loud, and so little perturbed by the lie being given to their promises that they just went on and on with the same line. Naturally the illusions lingered for a long, long while--and looking about even today it seems they have not totally gone away (wherever one sees techno-hype these days, one is apt to find the essentials of the old line), the consequences enduring.
Defining Neoliberalism
The recent pseudo-debate over whether "neoliberalism" is a meaningful concept in economics and politics, especially in relation to center-left political parties like the Democratic Party of Bill and Hillary Clinton and Barack Obama, or the "New" Labour Party of Tony Blair, has been every bit as cynically manufactured as the debates over whether or not tobacco is a carcinogen, or whether the climate is changing and the change is due to human action, with the mainstream media displaying every bit of the same ignorance, incompetence, venality and cowardice in facilitating it that it has in those other situations.
Still, I must admit that in considering the claims, and answering them, I had to study the issue more closely than before, examining anew the specifics of the policy record, while working out as rigorous a definition of neoliberalism as possible. It seems to me that
Neoliberalism can be defined as a political reaction against the shift of society away from its approximation of the "classic liberal" (libertarian) model in the nineteenth century, and the associated growth of the state since that time. Neoliberalism's response to that trend of state growth is most commonly identified with a variety of specific prescriptions including but not limited to fiscal austerity, deregulation, privatization, deunionization and free trade, especially as ways of undoing the compromises earlier liberalism made on behalf of maximizing industrial development, macroeconomic stability, employment, social welfare and equity. Increasingly important as a political project from the 1970s on, it is particularly identified with the policies of Margaret Thatcher in Britain and Ronald Reagan in the U.S., and associated with an economy which is not simply a more liberalized version of what came before, but unprecedentedly financialized and globally integrated.

A longer, more comprehensive and, I hope, clearer explanation reads as follows:
Neoliberalism refers to a body of political economic theory; its associated thinkers, political movement and policies; and their application and consequences in actual life. It is, above all, a reaction against the shift of society away from what neoliberals see as its natural and optimal centering on individual, private exchange under a minimalist "night watchman" state devoted to the defense of private property, which neoliberals regard as having been best approximated in the nineteenth-century West; in favor of a large and highly interventionist state devoted in particular to industrial development, social welfare and macroeconomic stewardship (in particular the combination of full employment-generating economic growth with low inflation), and disposing of as much as half the national Gross Domestic Product in the process.
While clearly underway with the emergence of the Austrian School of economics in the early twentieth century, this movement came to encompass a loose collection of schools of economic thought broadly sympathetic to this agenda (monetarism, public choice, etc.), and increasingly consolidated into a recognizable intellectual and political movement in the post-World War II period (an important moment in which was the 1947 founding of the Mont Pelerin Society). In developing their theoretical arguments, and promoting their ideas among intellectuals and the general public, neoliberals developed a particular package of prescriptions for dismantling the offending apparatus of the industrial-welfare-macroeconomic state, and recreating, at a higher level, the desired economic order--a package stressing, but not necessarily limited to, the following:
- Government tax and spending cuts, more stringent and often explicitly legislated fiscal discipline, and "austerity."
- A related shift away from progressive taxation, social spending and redistribution of income.
- The deregulation of business activity, particularly the elimination of regulation which imposed costs or limitations on business (as with regulation to protect workers, consumers and the environment), and of legislation limiting forms of financial activity which could be disruptive to the larger economy (as with financial speculation).
- The privatization of state assets and functions, in ways ranging from outright sell-off, to outsourcing, to the shutdown of programs leaving activity to the private sector, with individuals buying the desired good as consumers or not at all.
- A deemphasis of full employment as a goal of public policy, especially fiscal and monetary policy.
- The withdrawal of state protection from, and even tolerance of, organized labor.
- The reduction or elimination of controls on the international flow of goods, capital and investment.
A significant force in practical policymaking by the 1970s, these theories found an early and critical "laboratory" in Chile under the dictatorship of Augusto Pinochet (1973-1990), with Britain and the United States leading the way in the industrialized Western world--a development most closely identified with the Prime Ministership of Margaret Thatcher (1979-1990) and the Presidency of Ronald Reagan (1981-1989), respectively.
In considering the application of these ideas it is essential to acknowledge that their interaction with a dynamic economic reality has produced a distinctly different economic model from what came before, one emphasizing the resourcing and incentivizing of investors over the other goods previously pursued, and operating in a different manner than was previously the case. Usefully termed "Neoliberal Financialization," it sees an "unleashed" financial sector emphasizing the speculative buying and selling of assets across a worldwide field of activity created by the ever-more developed free trade regime, intensified by loose monetary policy, and enabled by substantially digital technologies permitting whole new productive practices (like "labor cost arbitrage") and turbo-charged speculation (like the electronic trading of stocks and currencies). Putting it another way, "globalization," "creditism" and "digitalism" are all key parts of the package (in contrast with the prior order's national orientation, gold standard-backed dollar, employment-oriented monetary policy, and treatment of Fordist production methods and goods as the cutting edge of the system).
Key elements of this package were not only unanticipated by the neoliberal vision, but in conflict with it (loose monetary policy and government's "picking winners" with its support for the financial sector contradict the classical liberal principles neoliberals profess). However, proponents of neoliberal thinking, which is today the orthodoxy of academic teaching of economics and predominant in mainstream comment and policy, generally embrace and defend this model in its essentials.
Tuesday, September 1, 2020
Notes on RethinkX's Rethinking Humanity
In June 2020 the RethinkX think tank published an intriguing report by James Arbib and Tony Seba, Rethinking Humanity (which you can download here for free). The report is 89 pages long and heavily documented, but I think that most who have taken any interest in the issues it raises up to now could cope with it fairly easily, and the essential reasoning underlying it is quite simply explained. Arbib and Seba hold that civilization has five foundational sectors--information, energy, transport, food and materials--and that when the unit cost of all five of these things falls by an order of magnitude, civilization is pushed toward a crisis that can drive it to collapse or, should it develop a new "Organizing System" for itself, ascend to a new peak of "capability." They hold that this has happened in the past, but only once, when humanity's invention of agriculture set the stage for its movement from the prehistoric Age of Survival into the Age of Extraction with which we identify all of recorded history.
The reason all this is of more than purely intellectual interest is that they hold that the five foundational sectors will, in the course of the next decade, see that order-of-magnitude drop, forcing such a crisis on modern civilization. Simply put, they believe that in the 2020s the price of information, energy, transport, food and materials will all drop by 90 percent, while more efficient use of them enables a 90 percent drop in unit resource requirements--the two reductions compounding to a cut of as much as 99 percent in the waste produced.
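The underlying arithmetic is simple compounding, and worth making explicit. Below is a minimal sketch restating the report's headline percentages (the figures are theirs; the framing of the waste reduction as the product of the two drops is my own reading, not a reproduction of their model):

```python
# Compounding of the report's two headline reductions (my illustrative
# framing): if unit prices fall 90 percent and resource requirements per
# unit also fall 90 percent, only 0.1 * 0.1 = 0.01 of the original
# footprint remains--a reduction of as much as 99 percent.
price_factor = 0.10      # prices drop by 90 percent
resource_factor = 0.10   # resource use per unit drops by 90 percent

residual = price_factor * resource_factor
print(f"Residual footprint: {residual:.0%}")      # -> 1%
print(f"Implied reduction:  {1 - residual:.0%}")  # -> 99%
```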
The simultaneous cheapening of life's necessities and lightening of their per-unit burden on the planetary ecosystem imply enormous positive potentials. That society will seize on this potential is not a given, of course--civilizations can and do fail, with the Age of Extraction tending toward societies of a particularly problematic type. As the name hints--and as anyone familiar with history or sociology can appreciate--such societies tend to be predatory, centralized, hierarchical, unequal, unfree, exploitative and brutal (to say nothing of lacking in resilience), with an elite that regards these things as features rather than bugs of the system, committed precisely to keeping things as they are. However, the age of abundance, and the more complex structures it both enables and requires, would mean a world which is less of all those things, with hierarchy giving way to networks, and enough for all, in what they call an Age of Freedom.
Much of this, of course, recalled quite familiar ideas, from Karl Marx (with those of his sociological ideas commonly summed up as "substructure, structure, superstructure"), to Carroll Quigley (with his instruments of expansion and vested interests), to Alvin Toffler (who also wrote of a "third wave" of human organization that would see hierarchy give way to networks). Still, I found the provision of a clear quantitative basis for their variant on this body of theory interesting, not least because of the manner in which it enabled them to offer up very explicit and detailed forecasts. In considering their model it also seems to me a significant strength that they did not overlook such basics as energy and food and materials, in contrast with the Kurzweil crowd's single-minded concentration on the performance metrics of computing, and casualness about all else. (Seba is especially noteworthy as having been attentive to renewable energy back when others were dismissing it--the more so as this, rather than anything in computing, is the true technological success story of the past decade.)
However, it also seems necessary to say that the report heavily references prior RethinkX work, about which I have some reservations--in particular two earlier reports, one on transportation, the other on food and agriculture (both also available freely at the site). The transport report anticipates a shift over the 2020s from the current model of private ownership of internal combustion-powered, human-driven vehicles to the ride-sharing of electrically-powered self-driving cars, a model they term "Transportation as a Service" (or "TaaS"). The food and agriculture report details the consequences of animal protein production shifting to the lab.
Both of those reports were, in their extrapolations of the consequences of their breakthroughs, rigorous and plausible. The problem, more pronounced in the transportation report, which dates back to 2017, is the starting point of that extrapolation--the treatment of Level 5 autonomy ("a key pre-condition for TaaS") as imminent at that point in time. To put it mildly, it has not proven to be so. As a result everything that might have followed from its arrival has, at the very least, been delayed by several years amid more muted expectations regarding the technology (even if Elon Musk, no more humble after repeatedly getting it wrong, continues to insist on Level 5 Teslas this year, to increasing sneering from a press that has fallen out of love with him, and with the idea).
The more recent report about food and agriculture is less obviously flawed that way, forecasting that "modern food" produced through techniques like "precision fermentation" will reach "cost parity with most animal-derived protein" in 2023-2025, see its price drop 80 percent by 2030, and then halve yet again by 2035--with the result being the collapse of the livestock industry, and potentially the extraordinary cheapening of food, as well as the extraordinary relief of the natural environment from the burden of livestock-raising. Given that the point of disruption is still some way off, only time will tell if that report is any better-grounded than the one on transport, but I have noted that, as with self-driving cars, there has already been some deferral of the date at which we would be seeing the stuff hit the market. (Not so long ago they said we could expect to see it at the supermarket in 2018. Now they say 2023. "I've heard that one before," I can't help thinking to myself.)
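To get a feel for how steep that forecast trajectory is, here is a quick back-of-envelope calculation of the implied annual price declines, assuming (my assumption, for illustration) that parity arrives in 2025, the late end of the report's window:

```python
# Annualized declines implied by the report's milestones, given an
# assumed parity year of 2025. A sketch, not the report's own model.
price_2025 = 1.0                      # normalized price at cost parity
price_2030 = price_2025 * (1 - 0.80)  # 80 percent drop by 2030 -> 0.20
price_2035 = price_2030 / 2           # halves again by 2035 -> 0.10

rate_2025_2030 = 1 - (price_2030 / price_2025) ** (1 / 5)  # ~27.5%/year
rate_2030_2035 = 1 - (price_2035 / price_2030) ** (1 / 5)  # ~12.9%/year
print(f"2025-2030: ~{rate_2025_2030:.1%} per year")
print(f"2030-2035: ~{rate_2030_2035:.1%} per year")
```

Sustained declines of over a quarter per year would be unremarkable in computing but extraordinary in food production--one more reason to watch those parity dates closely.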
In short, where the cost drops in at least some of the key sectors they detail are concerned, I suspect the authors are overly bullish--though I also admit that the combination of pandemic and economic downturn has seriously confused the situation. (Will economic stress translate to slower R & D and less investment? Or will the problems raised by the pandemic in particular, not least with regard to supply chains, actually accelerate a shift to what the writers call "modern food"?) I admit, too, that while I have been less impressed by their recommendations with regard to how society can best adapt to the changes than by other aspects of their arguments, their report does offer some grounds for hope that the most dire problems facing us may be remediable with tools substantially in hand or soon to be, and the world of ten, fifteen, twenty years hence a better place than the one we live in now.
Friday, August 21, 2020
From Bubble to Bust--and Perhaps Boom Again? Notes on Technological Hype
At this point I am old enough to have lived through a number of cycles of boom and bust for technological hype. And I think I have noticed a possible pattern in recent busts, certainly with regard to the "bust" end of the broader cycle. In particular it seems that this tends to combine three features:
1. The failure of much-hyped technologies to actually materialize.
2. Economic downturn.
3. A crisis which gives the lie to self-satisfaction over some particularly significant claim for our technology as a revolutionary problem-solver.

Back in the late '90s there was enormous hype over computers. Of course, this period did produce some genuine, significant technologies of everyday life--personal computing, Internet access and cellular telephony--beginning to become genuinely widespread, and becoming refined considerably in the process, culminating in the extraordinary combination of power, versatility and portability of our smart phones and tablets with their 5G-grade broadband two decades on.
Yet as a review of Ray Kurzweil's predictions for 2009 makes clear, much that was widely expected never came to pass. Artificial intelligence, virtual reality, nanotechnology. To put it mildly, progress in those areas, which would have had far more radical consequences, proved . . . slower, so slow that expectation fizzled out in disappointment.
Meanwhile, the New Economy boom of the late '90s turned to bust, a bust which never quite turned into boom again, so that not the crash but the years of growth turned out to be the historical anomaly--and even the most credulous consumers of the conventional wisdom were reminded that the idiot fantasy that the economic equivalent of the law of gravity had been suspended was just that, an idiot fantasy. This was all the more painful because, in contrast with the New Economy promises that science was liberating us from our reliance on a finite and frail base of natural resources, we confronted spiking natural resource prices, above all fossil fuel prices, that brought on a global fuel and food crisis (2006-2008). And then the comparatively crummy performance of the early twenty-first century, which was really just another, much less impressive bubble--and that not in any fancy new tech, but in old-fashioned real estate and commodity prices--led straightaway to the worst economic disaster since the 1930s, from which we have been reeling ever since.
Science fiction, a useful bellwether for these things, showed the reaction. Where in the wake of the '90s boom and hype the genre showered readers in shiny, ultra-high-tech Singularitarian futures, and through sheer momentum this substantially continued through the early '00s (it can take years to finish a book, years after that to get it into print), afterward it was all post-apocalypse and dystopia--World War Z, The Hunger Games, Paolo Bacigalupi's stuff. And even when science fiction bothered with the future it was the future as it looked from the standpoint of the past--as with steampunk, which was very popular post-2008 as well.
Of course, the popular mood did not stay permanently down in the dumps. The economy made a recovery of sorts--a very anemic one, but a recovery all the same.
And there was renewed excitement about many of the very technologies that had disappointed, with the companies and the press assuring us that they were finally getting the hang of carbon nanotubes and neural nets and virtual reality and the rest. Soon our cars would drive themselves, while drones filled our skies, bringing us, if we wanted them--and we would--our virtual reality kits, our supermarket orders of clean meat.
Alas, just about none of that came to pass either--and where in the '90s we at least got the PCs and Internet and cell phones, I cannot think of a single real consolation prize for the consumer this time around. Meanwhile a new crisis hit--in the form of a pandemic which underlined just how un-automated the world still was, how reliant on people actually physically doing stuff in person. And underlined, too, that for all the talk of our living in an age of biotechnomedical miracle that has filled the air for as long as I can remember, the war on viruses is very, very, very far from being won. All of this contributed to an even worse economic disaster than the one seen in 2008 (not that things ever normalized after that).
It seems to me not just possible, but probable, that the combination of technological disappointment, crisis and economic downturn will spell another period of lowered expectations with regard to technological progress. Indeed, I have already been struck by how the chatter about the prospect of mass technological unemployment in the near term vanished amid an economic crash generating plenty of the old-fashioned, regular kind.
Of course, in considering that one should acknowledge that some are pointing to the current crisis--precisely because of the way in which it has demonstrated certain societal vulnerabilities and needs--as spurring further efforts to automate economic operations, or at least permit them to be performed remotely, with implications extending to, among much else, those drones and self-driving cars. A similar logic, some hold, may work in favor of clean meat.
It is not wholly implausible, of course. Crises can and do spur innovation, when the backing is there--when the prevailing institutions elect to treat a crisis as a crisis. However, I have yet to get the impression of any such sensibility among those flattered by the feeble-minded everyday usage of terms like "world leader."
Because No One Else Seems to be Keeping Tabs--A Glance Back at the Past Decade's Techno-Hype
The vast majority of people, I find, are very "well-trained" consumers. By that I mean that they have been trained in the way marketing hucksters want them to be. They completely swallow the hype about how soon a thing will be here and how much difference it will make in their lives--and then, when the product arrives late, or makes less difference than promised, or never arrives at all and so makes no difference whatsoever, they go on thinking in terms of the hype rather than their own lived experience. They dutifully remember nothing and learn nothing, so that they are just as ready to believe the promises of the next huckster who comes along. And they pour scorn down on the head of anyone who questions what might most politely be called their credulousness--when they are not absorbed in the smart phone they believe is the telos of all human history--adding meanness to their extreme stupidity.
Still, as the words "vast majority" make clear, not everyone falls into this category. Some are a little more alert, a little more critical, than others. And sometimes those with the capacity to get a little more skeptical do so.
I think we are approaching such a period, because so many of the expectations raised in the 2010-2015 period are, at this moment, being deeply disappointed--and not simply because the ill-informed hacks of the press have oversold things far beyond their slight comprehension, but because in many a field those generally presumed to be in a position to know best (like the CEOs of companies actually making the stuff in question) have publicly, often with great fanfare, announced specific dates for the unveiling of their promised grand creations, and those dates have come and gone, sometimes again and again, as a world in need of the innovations in question goes on waiting.
Consider the Carbon NanoTube (CNT) computer chips that were supposed to keep computing power-per-dollar rising exponentially for a generation as the old silicon-based chips hit their limits.
Back in 2014 IBM announced it would have a commercial CNT chip by 2020--winning what has with only a little melodrama been called a "race against time."
Well, it's 2020. That commercial chip, however, is not here. Instead we are hearing only of breakthroughs that may, if followed up by other breakthroughs, eventually lead to the production of those chips, perhaps sometime this coming decade.
Indeed, the latest report regarding the Gartner Hype Cycle holds that carbon-based transistors are sliding down from the "Peak of Inflated Expectations" into the "Trough of Disillusionment."
Perhaps unsurprisingly, the progress of artificial intelligence, on which so many were so bullish a short while ago, is also slowing down--in part, for lack of computer capacity. It seems, in fact, that even carbon nanotube chips wouldn't get things on track if they were here. Instead the field's spokespersons are talking quantum computers, which, to put it mildly, are a still more remote possibility.
Also unsurprisingly, particularly high-profile applications for that artificial intelligence are proving areas of disappointment as well.
To cite an obvious instance, in 2013 Jeff Bezos said that within five years (by 2018) drone deliveries would be "commonplace."
In considering the absence of such deliveries years after those five years have run their course the press tends to focus on regulatory approval as the essential stumbling block, but, of course, the requisite technology is apparently still "under development."
Perhaps more germane to most people's lives, back in 2015 Elon Musk predicted that fully autonomous cars (Level 5) would hit the market in 2017.
That prediction has fared even more badly, with the result that the self-driving car (certainly to go by the number of articles whose writers smugly use phrases like "reality check" in their titles) is starting to look like the flying car. (Or the flying delivery drone?)
The Oculus Rift created quite a sensation back in 2013.
Alas, today the excitement that surrounded it is even more thoroughly recognized as a thing of the past.
Clean meat was supposed to be on the market in 2019, if not before the end of 2018.
Now in 2020 the Guardian is talking about clean meat hitting the market "in a few years." (For its part, IDTechEx says, think 2023.)
In area after area, what was supposed to have been here this year or the year before that or even before that is not only not here but, we are told, still a few more years away--the innovations talked up by the Silicon Valley Babbitts and their sycophants in the press receding further and further into the future.
Will it necessarily always be so? Of course not. Maybe the dream deferred will be a dream denied only temporarily, and briefly, with the semiconductor factories soon to be mass-producing CNT chips, which maybe along with quicker-than-expected progress in quantum computing will keep the AI spring of the twenty-first century from giving way to a long, cold AI winter, while perhaps even without them the delivery drones and the self-driving cars arrive ahead of schedule. Maybe, if still rough around the edges, next year will be VR's year, while this time it really is true that clean meat will be in our supermarkets "in a few years."
However, as one old enough to remember the extraordinary expectations of the '90s in many of these precise areas--nanotechnology, artificial intelligence, virtual reality--the disappointment is already very familiar, and worse for that familiarity, as well as how little in the way of tangible result we have been left with this time around. (The disappointments of the '90s were colossal--but we did get that explosion of access to personal computing, cellular telephony, the Internet, and those things did improve quite rapidly afterward. What from among the products of this round of techno-hype can compare with any of that, let alone all of it?) And if anything, where the development is less familiar but perhaps potentially more significant, the disappointment is even more galling. (Clean meat could be a very big piece of the puzzle for coping with the demand of a growing population for food, and the environmental crisis, at the same time.) In fact I cannot help wondering if we will not still be waiting for the promised results in twenty years--only to be disappointed yet again, while the hucksters go on with their hucksterism, and a credulous public continues to worship them as gods.
Wednesday, August 19, 2020
Contextualizing the French War in the Sahel
When we hear about the French operations in Mali and surrounding countries, I suppose few have much sense of how extraordinary the action is. I suspect that those who follow the news casually take it for granted that France has long been involved militarily in sub-Saharan Africa, without much sense of history or the details. This is all the more significant because, certainly where an American news audience is concerned, the commitment of 3,000, or even 5,000, troops to the region does not sound like very much, used as it is to thinking in terms of tens or hundreds of thousands of troops in overseas action. And Americans who have seen their forces almost continuously engaged against or in Iraq since 1990--for thirty years--might not be too struck by a commitment that began only in 2013. And so what France is doing in the Sahel does not seem like anything out of the ordinary.
Still, it is worth remembering that if France remained militarily active in Africa after decolonization, with its bases numerous and its interventions frequent, it has during the past half century been very sensitive to the scale and length of operations, especially where they have involved "boots on the ground." (By the end of the '60s France's sub-Saharan presence was down to 7,000 troops, total, and trended downward afterward.) The French government preferred brief actions emphasizing air power rather than ground troops (its '70s-era interventions sometimes referred to as "Jaguar diplomacy" for that reason), while its '80s-era confrontation with Libya over Chad was exceptionally taxing--scarcely feasible without considerable American support.
Indeed, for the whole generation afterward no operation was comparable to the '80s action in Chad in its combination of scale and duration. Given the difference in population and the size of its armed forces (one-fifth and one-seventh of the U.S. figures, respectively), France's deployment has been comparable to a commitment of 15,000-35,000 American troops, equal to what the U.S. deployed in Afghanistan for much of that war--and likewise fulfilling an evolving mission over a far vaster area. What had originally been an action to recover specific ground from a specific enemy (recovery of northern Mali from the National Movement for the Liberation of Azawad) turned into a broader regional alliance/counter-terrorism operation (the Joint Force of the Group of Five Sahel/Operation Barkhane) against a multiplicity of groups extending across the Sahel, from Mauritania to Chad (an area the size of Western Europe)--overlapping with but separate from the ongoing peacekeeping mission in north Mali that picked up after the original French operation, the "United Nations Multidimensional Integrated Stabilization Mission in Mali" that quickly acquired the dubious distinction of being the world's most dangerous peacekeeping operation. Moreover, in contrast with the direct clash-avoiding, selective, minimalist use of force seen against Libya three decades ago, combat, if comparatively low in intensity, has been a continuous feature of the operation, which increasingly looks like an indefinite commitment to the general policing of this vast and still unstable region.
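For readers who want the scaling behind that comparison spelled out, here is a minimal sketch using the rough ratios and the 3,000-5,000 troop figure given above (round numbers throughout, not official data):

```python
# Back-of-envelope scaling of France's Sahel deployment to a US-equivalent
# commitment, using the rough ratios cited above: French population ~1/5
# and armed forces ~1/7 of the US figures.
french_low, french_high = 3_000, 5_000
population_ratio = 5  # US population / French population (approximate)
forces_ratio = 7      # US armed forces / French armed forces (approximate)

us_equiv_low = french_low * population_ratio  # 15,000
us_equiv_high = french_high * forces_ratio    # 35,000
print(f"US-equivalent commitment: {us_equiv_low:,} to {us_equiv_high:,} troops")
```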
Consequently it is not for nothing that a recent New York Times article called it "France's Forever War." One might add, moreover, that the Sahel military operation(s) are just one way in which French policy has become more militarized, with France pursuing new overseas bases, and talking about sixth generation fighter jets, and French Presidents even fantasizing about (and perhaps even taking small steps toward) reviving conscription. And that, in turn, bespeaks how the conduct of every last major power has become increasingly militarized this past decade, supposedly pacific "Old Europe" included.
Yes, Tony Blair Was a Neoliberal
Recently surveying Tony Blair's record as party leader and Prime Minister, I saw that the pretense that Blair was not a neoliberal is just as risible as the pretense that Bill Clinton was not one, given not only his acquiescence in the profound changes wrought in British economic and social life by his predecessors (privatization, union-breaking, financialization, etc.), but his particular brand of budgetary austerity with its tax breaks and deregulation for corporations and its stringency with and hardness toward the poor; his backdoor privatization of basic services such as health and education (with college tuition running from zero up into the thousands of pounds a year on his watch); his hostility to government regulation of business; his inane New Economy vision of Cool Britannia (groan); and the rest. (Indeed, examining his record, and reexamining that of his predecessors, I was staggered by how much of it I had seen before when reviewing the comparable history in the United States.)
That said, even considering the ways in which it offended and disappointed many on the left, Blair's tenure can seem halcyon in comparison with what has been seen since. The economic disasters and brutal austerity seen since his departure from office really do seem to spell the final doom of the post-war welfare state: the shift to an American-style regime with regard to higher education, a slower but still advancing shift in the same direction with the country's health care system, the raising of the regressive Value Added Tax yet again to 20 percent, the renewed assault on the social safety net of yet another Welfare Reform Act (2012) that delivered Universal Credit and the bedroom tax, the hundreds of thousands of "excess deaths" in recent years traceable to cuts in care facilities, the plans to raise the retirement age (perhaps all the way to 75, effectively abolishing retirement for most)--and all that before the current public health and economic crisis.
I admit that next to that Blair's tenure does not look quite so bad--until one remembers the extent to which his policies did so much to pave the way for all of it, in carrying forward what apologists for New Labour tend to think of as Conservative projects, and his general lowering of the bar for what constitutes tolerable government. That led to this. And that lends the question "Was Tony Blair's Prime Ministership neoliberal?" an additional, very contemporary, significance, the more so with the Labour Party, for the time being, still walking the Blairite road.