Monday, September 19, 2022

What Ever Happened to Prefabricated Housing?

Those who tell us that ours is an age of unprecedented technological change making science fiction into reality never tire of talking about their cell phone--but somehow never seem to have anything to say about a great many other things that are fundamental to daily living.

Like housing.

It remains far and away the norm to build homes one at a time, in a process largely consisting of craftsmen (carpenters, etc.) transforming raw materials that are at best only lightly processed (like planks of wood) into the elements from which they laboriously construct the structure.

It is an artisanal process--remote from the mechanized mass manufacturing of the Industrial Revolution, which in regard to much of home construction seems never to have happened.

A person might wonder if this has not been because, for some reason, this style of housing has "withstood the test of time" as clearly superior to any industrial alternatives.

As is often the case with those who speak pompously of the "test of time" they would be wrong to do so--not least because the provision of adequate amounts of affordable, quality housing has so long been recognized as beyond that older method. Indeed, writing of the deficiencies of America's "affluent society" in the 1950s John Kenneth Galbraith specifically noted housing as one area where the richest country in history, riding high on the post-war boom, was poor--and few would care to argue the point today. As any homebuyer is likely to learn we have the artisanal method's drawbacks without the individuality, beauty and durability it offered at its best--living instead in generic boxes that are as obscenely high-maintenance as they are expensive.

Moreover, the reality is that practical industrialization of house-building--through the "prefabrication" of structures--is at this point a generations-old practice that has consistently proven superior from the standpoint of building time and cost, even without the great economies of scale (and general productivity improvements) that might be achieved were the production of such houses carried out on a really widespread basis. (They have also been known to have numerous advantages over conventionally-built homes in such important respects as structural strength and energy-efficiency.)

Given both the failure of the old way, and the existence of a proven solution to it, one can only wonder why prefabricated housing--which it was once common to hear touted as the "wave of the future"--never became more than the marginal thing it is within today's construction market. Those discussing the issue sometimes talk about consumers finding such homes off-putting because of poor "reputation"--but I must admit that this explanation has never struck me as really satisfying. People buy what they know about, and I am not sure so many are aware the product exists, let alone its having any reputation with the broad public. And where what they know about is concerned people buy what is made available to them within their price range--while it is clear that builders are not going out of their way to make prefabricated homes widely available. Instead, as anyone perusing discussions of the pros and cons of such homes quickly finds, they are not produced in large numbers for purchase like any other home; rather, they are something individuals must personally arrange to have constructed on land they buy, suffering through a great many expenses and hassles they would not face when purchasing an already existing, conventionally built home.

Of course, that raises the question of why the construction sector has not taken more interest. The most plausible answer seems to be that, contrary to what those besotted with words like "ENTREPRENEURSHIP!" and "INNOVATION!" tell us, the failure of prefab home-building to make much headway is yet another story of a disruptive technology being successfully warded off by established businesses making the most of their position to stick with their established practices.

Nevertheless, it is far from inconceivable that prefabricated housing may have the benefit of significant tailwinds in the coming years. The combination of shortages of skilled labor across the range of building trades (carpenters, framing crews, etc.), and the tougher situation faced by consumers, provides the construction business with more incentive to pursue cost-saving options--while there may be significant synergies between prefabricated housing and other new technologies. Certainly Carl Benedikt Frey and Michael Osborne speculated in their study The Future of Employment that the prefabrication of buildings may play an important role in the automation of construction by simplifying the on-site activity. Meanwhile the construction sector may be facing increasing disruption from another technology--the 3-D printer and its potential to "print" homes. Certainly that technology could prove a competitor to prefabrication--but it is also possible that by disrupting the industry it can also create openings for a greater use of prefabricated structures, especially if each technology proves to be more useful than the other in some tasks. If so then we may belatedly see this very important industry, which has left so few pleased with its delivery of the goods, finally join the modern world, and in the process take a long overdue step toward turning scarcity in this very important area of life into abundance.

Earth4All Deep Dive Paper #8 and the RethinkX Vision of Sustainability

The Earth4All Project was announced at the United Nations' Framework Convention on Climate Change back in November 2020. Led by teams at the Club of Rome, the Norwegian Business School and the Potsdam Institute for Climate Impact Research, its activities include the monthly series of "Deep Dive" papers, the August 2022 edition of which ("The Clean Energy Transformation: A New Paradigm for Social Progress Within Planetary Boundaries") comes from the RethinkX think tank's Director of Global Research Communications, Nafeez Ahmed.

Those who are already acquainted with RethinkX's work, and particularly its Rethinking Humanity report, will recognize much in the Deep Dive paper as familiar from that earlier publication. Once again this paper presents the think tank's argument that the world is going through a historic transition as the cost and material throughput of five essentials for human living--information, energy, transport, food and materials--drop by an order of magnitude or more, which will potentially be as radical as the rise of human civilization itself (shifting us from the exploitation-based "Age of Extraction" with which civilization has been synonymous to, one may hope, an "Age of Freedom"). Where energy in particular is concerned RethinkX's argument goes that the sharp decline in cost of solar, wind and battery storage relative to the alternatives (which has already left trillions of dollars' worth of recent investment in fossil fuel apparatus "stranded") holds out the possibility not only of a successful transition to a post-fossil fuels, carbon-neutral (or even carbon-negative) energy base, but by way of a deliberate construction of "excess" capacity along the lines they characterize as "Clean Energy Super Power," "green" abundance that will make the economics of energy look like what the economics of information have become in the age of the Internet.

All of that having been presented before, the question is what is new in this specific report. What impressed me on that level was its treatment of two issues relevant to this energy transition, namely the Energy Return On Investment (EROI) that renewables yield, and the material throughput required to build a renewable energy base on a global scale--both points of particular interest because renewables-bashers have made so much of these issues. As Ahmed shows here, EROI is not the obstacle some make it out to be--all the more so as he finds that the EROI for fossil fuels has consistently been overestimated, while that for renewables has consistently been underestimated. That the EROI for fossil fuels tends to be "measured right at the well-head rather than the most relevant point, which is where the energy enters the economy as electricity or petrol" in itself leaves fossil fuels with a lower EROI than even the estimates of renewables' EROI that Ahmed considers unfairly low. As he argues, estimates for the useful life of photovoltaics tend to lowball the figure (estimating twenty to thirty years when more realistically they may be good for forty to fifty years), and to treat batteries as a deduction from renewables' EROI when they can easily boost it (one empirical examination showing they "actually increased EROI by making available energy that would otherwise be lost to curtailment"), even before one gets into such "phase-changing" possibilities as he anticipates from a shift to renewables on a really large scale (epitomized by the Clean Energy Super Power concept).
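To make the measurement-point issue concrete, consider a back-of-the-envelope calculation. EROI is simply energy delivered divided by energy invested, and the numbers below are purely illustrative assumptions of my own (not figures from Ahmed's paper), chosen to show how moving the measuring point from the well-head to where the energy enters the economy shrinks the ratio.

```python
# EROI = energy delivered / energy invested. All numbers below are invented,
# purely to illustrate how the measurement point changes the ratio.

def eroi(energy_delivered: float, energy_invested: float) -> float:
    """Units of energy delivered per unit of energy invested."""
    return energy_delivered / energy_invested

# Hypothetical fossil fuel source measured at the well-head:
# 30 units of crude energy extracted per 1 unit invested in extraction.
at_wellhead = eroi(30.0, 1.0)

# Measured where the energy enters the economy as electricity: conversion
# losses leave (say) 10.5 of those 30 units as delivered electricity, while
# refining, transport and plant operation add (say) 2 more units invested.
at_point_of_use = eroi(10.5, 1.0 + 2.0)

print(at_wellhead)      # 30.0
print(at_point_of_use)  # 3.5
```

The same arithmetic run in reverse is why lowballing a solar panel's useful life depresses its apparent EROI: halving the assumed lifetime halves the energy delivered while the energy invested in manufacture stays fixed.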
Where material throughput is concerned he observes that the construction of the requisite base would be coming not on top of, but instead of, the resource demands of sustaining and extending the existing fossil fuel base, the existing (and increasingly uneconomical) apparatus of which can be regarded as a "vast global repository" of materials for use (the steel in old offshore oil rigs, for example, convertible into raw material for new windmill towers), all while the recycling of the requisite materials would be quite ample to close the supply gaps--the bottlenecks here, again, wildly exaggerated by those euphemistically called "skeptics" of the transition.

The result is that Ahmed's case bolsters further still what has for a long time been the strongest aspect of the RethinkX analysis--the trend in the energy market, and the increasing technical feasibility of a world of "green" yet abundant energy. However, it seems worth acknowledging that it does not bolster what may have most needed bolstering, namely the think tank's treatment of other dimensions of the matter, not least the transportation and food sectors, whose interaction with the energy transition is crucial to their vision of sustainability, to say nothing of a new civilization. Transportation, after all, is a major user of energy, and the shift from human-driven, gasoline-burning cars generally operated on an individual basis to self-driving electric cars (Electric-Autonomous Vehicles, or E-AVs) that make possible "Transportation as a Service" (TAAS) is crucial to their vision of making transport cheaply and conveniently available to all. The RethinkX analysts also look to cellular agriculture to sharply cut greenhouse gas emissions from that quarter and open up vast amounts of land to climate change-offsetting reforestation. Where all that is concerned RethinkX's prior reports rigorously work out just what could be expected to happen were safe, reliable, affordable E-AVs and cost-competitive cellular agriculture to hit the market. However, that arrival in the market is another matter. Their expectations in the area of transport are based not on the kind of robust analysis of easily observable (and increasingly widely acknowledged) price trends on which RethinkX's claims regarding energy have been based, but comparatively opaque expert pronouncements that have already proved overoptimistic. (In RethinkX's 2017 report 2021 was supposed to be the year of the great disruption in which TAAS began the displacement of our current transport model. Alas, it has not been so--with many expecting no such displacement for a good long while to come.) 
If somewhat more data-based, their predictions regarding food may similarly prove overoptimistic. (Their 2019 report on the matter had the first cellular meat products hitting the market in 2022--while as of July 2022 not only had no such thing happened, but it remained uncertain when it actually would.)

Rather than a reassessment of the earlier analysis on that score what RethinkX offers here are the same essential predictions, albeit with a greater vagueness about the time frame (inclining to references to significant movement over the broader 10-15 year time frame they predicted for the more general transformation, rather than predictions of more specific developments at points throughout it). Additionally the report has nothing to add about the area RethinkX has had least to say about, materials (the one essential not yet covered by a report of its own). Nevertheless, if "The Clean Energy Transformation" document falls short of a completion and update of the broad RethinkX vision, it remains a useful summation of that vision, and well warrants attention from those who would like an accessible introduction to it, as well as those interested in the think tank's most recent word on the progress of renewable energy that is the document's principal concern.

Friday, September 16, 2022

What Are the Odds That Teaching Will Be Automated in the Very Near Term?

Recent months have brought a great wave of news stories about a shortage of teachers approaching crisis levels--and the possibility that even if such a shortage is not already underway (a difficult thing to establish one way or the other given the scarcity of really comprehensive educational statistics) it may be imminent, whether because exhausted instructors leave the profession much more quickly than anticipated, because the conditions of the job deter new entrants from joining in the expected numbers, or because the combination of the two widens the gap between need and supply.

One question I have found myself wondering about, given the talk we have been hearing of automation, concerns expectations regarding the automation of teaching specifically. Not long ago I considered Ray Kurzweil's thoughts about the matter at the turn of the century--which, as with many of his predictions in the relevant areas, were premised on forecasts of advance in particular technological areas that have since appeared overoptimistic (notably the speed at which pattern-recognizing neural nets, and all that is premised on them, would develop) and a naiveté regarding the social dimensions of the subjects about which he wrote (in this case, the school's function as "babysitter").

However, not everyone has been so optimistic--even those who have, by any reasonable measure, been optimists about automation. Exemplary is the study Carl Benedikt Frey and Michael Osborne produced back in 2013, which played so important a part in the conversation about automation and employment in the '10s. That study included in its appendix a table listing over 700 occupations and the chances of their being "computerized"--"potentially automatable over some unspecified number of years, perhaps a decade or two."

The authors determined that the jobs of data entry keyers, telemarketers and new accounts clerks had a 99 percent chance of being "computerizable." Contrary to what might be expected by those who make much of "high-knowledge" occupations, Frey and Osborne even anticipated fairly high odds of a great deal of scientific work becoming automated (with atmospheric and space scientists having a 67 percent chance of having their jobs automated), with, in spite of what may be thought from the popularity of the sneer "Learn to code," a near-even chance of the same happening with computer programming (48 percent). But, teaching assistants apart, they put the odds of computerizing any teaching occupation at not much better than 1 in 4 (a 27 percent chance of middle school technical teachers), while the odds of computerizing postsecondary school (college) teaching they put at 3 percent, the odds of computerizing preschool, elementary and secondary school teaching at under 1 percent.

In short, their analysis suggests that teaching, far from being easy to automate, will be exceptionally difficult to automate satisfactorily. The result is that even if a great wave of automation swept through the rest of the economy--for what it is worth, Frey and Osborne calculated that nearly half of U.S. jobs were, in the absence of significant political or economic obstacles (legal barriers, particularly poor investment conditions, etc.), at "high" (70 percent-plus) risk of such computerization by the early 2030s--automation would have little impact on a great many teaching jobs. One can thus easily picture a situation in which job-seekers would find themselves with fewer alternatives to teaching--meaning relatively more people pursuing such positions, not fewer (at a time in which an aging population structure would likely mean fewer students, and fewer job openings for that reason). In the nearer term, in the absence of any such pressure sending people toward the occupation, this seems additional reason to think automation unlikely to be a solution to the problem.

Revisiting Carl Benedikt Frey and Michael Osborne's The Future of Employment

Back in September 2013 Carl Benedikt Frey and Michael Osborne presented the working paper The Future of Employment. Subsequently republished as an article in the January 2017 edition of the journal Technological Forecasting and Social Change, the item played a significant part in galvanizing the debate about automation--and indeed produced panic in some circles. (It certainly says something that it got former Treasury Secretary Larry Summers, a consistent opponent of government action to redress downturn and joblessness--not least during the Great Recession, with highly controversial result--talking about how in the face of automation governments would "need to take a more explicit role in ensuring full employment than has been the practice in the U.S.," considering such possibilities as "targeted wage subsidies," "major investments in infrastructure" and even "direct public employment programmes.")

Where the Frey-Osborne study is specifically concerned I suspect most of those who talked about it paid attention mainly to the authors' conclusion, and indeed an oversimplified version of that conclusion that gives the impression that much of the awareness among those who should have had it firsthand was actually secondhand. (Specifically they turned the authors' declaration that "According to our estimate, 47 percent of total U.S. employment is" at 70 percent-plus risk of being "potentially automatable over some unspecified number of years, perhaps a decade or two"--potentially because economic conditions and the political response to the possibility were outside their study's purview--into "Your job is going to disappear very soon. Start panicking now, losers!")

This is, in part, because of how the media tends to work--not only favoring what will grab attention and ignoring the "boring" stuff, but because of how it treats those whom it regards as worth citing, with Carl Sagan worth citing by way of background. As he observed in science there are at best experts (people who have studied an issue more than others and whom it may be hoped know more than others), not authorities (people whose "Because I said so" is a final judgment that decides how the situation actually is for everyone else). However the mainstream media--not exactly accomplished at understanding the scientific method, let alone the culture of science shaped by that method and necessary for its application--does not even understand the distinction, let alone respect it. Accordingly it treats those persons it consults not as experts who can help explain the world to its readers, listeners and viewers so as to help them learn about it, think about it, understand it and form their own conclusions, but authorities whose pronouncements are to be heeded unquestioningly, like latterday oracles. And, of course, in a society gone insane with the Cult of the Good School, and regarding "Oxford" as the only school on Earth that can outdo "Harvard" in the snob stakes, dropping the name in connection with the pronouncement counts for a lot with people of crude and conventional mind. (People from Oxford said it, so it must be true!)

However, part of it is the character of the report itself. The main text is 48 pages long, and written in that jargon-heavy and parenthetical reference-crammed style that screams "Look how scientific I'm being!" It also contains some rather involved equations that, on top of including those Greek symbols that I suspect immediately scare most people off (the dreaded sigma makes an appearance), are not explained as accessibly as they might be, or even as fully as they might be. (The mathematical/machine learning jargon gets particularly thick here--"feature vector," "discriminant function," "Gaussian process classifier," "covariance matrix," "logit regression," etc.--while explaining their formulas the authors do not work through a single example such as might show how they worked out the probability for a particular job, even as they left the reader with plenty of questions about just how they quantified all that O*NET data. Certainly I don't think anyone would find attempting to replicate the authors' results would be a straightforward thing on the basis of their explanations.) Accordingly it is not what even the highly literate and mathematically competent would call "light reading"--and unsurprisingly, few seem to have really tried to read it, or make sense of what they did read, or ask any questions. (This is even as, alas, what they did not understand made them more credulous rather than less so--because not only did people from Oxford say it, but they said it with equations!)

Still, the fact remains that one need not be a specialist in this field to get much more of what is essential than the press generally bothered with. Simply put, Frey and Osborne argued (verbally) that progress in pattern recognition and big data, in combination with improvements in the price and performance of sensors, and the mobility and "manual dexterity" of robots, were making it possible to move automation beyond routine tasks that can be reduced to explicit rules to the computerization of non-routine cognitive and physical tasks--an example of which they made much being the ability of a self-driving car to navigate a cityscape (Google's progress at the time of their report's writing apparently a touchstone for them). Indeed, the authors go so far as to claim that "it is largely already technologically possible to automate almost any task, provided that sufficient amounts of data are gathered for pattern recognition," apart from situations where three particular sets of "inhibiting engineering bottlenecks" ("perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks") interfere, and workarounds prove inadequate to overcome the interference. (The possibility of taking a task and "designing the difficult bits out"--of, for example, replacing the non-routine with the routine, as by relying on prefabrication to simplify the work done at a construction site--is a significant theme of the paper.)

How did the authors determine just where those bottlenecks became significant, and how much so? Working with a group of machine learning specialists they took descriptions of 70 occupations from the U.S. Department of Labor Occupational Information Network (O*NET) online database and "subjectively hand-labelled" them as automatable or non-automatable. They then checked their subjective assessments against what they intended to be a more "objective" process to confirm that their assessments were "systematically and consistently related to the O*NET information." This process consisted of:

1. Dividing the three broad bottlenecks into nine more discrete requirements for task performance (e.g. rather than "perception and manipulation," the ability to "work in a cramped space," or "manual dexterity").

2. On the basis of the O*NET information, working out just how important the trait was, and how high a level of competence in it was required, for the performance of the task (for instance, whether a very high level of manual dexterity was very important to a task, or only a low level of minor importance), and

3. Using an algorithm (basically, running these inputs through the formulas mentioned earlier) to validate the subjective assessments--and, it would seem, using those assessments to validate the algorithm.

They then used the algorithm to establish the probability of the other 632 jobs under study, on the basis of their features, being similarly computerizable over the time frame with which they concerned themselves (unspecified, but inclining to the one-to-two decade range), with the threshold for "medium" risk set at 30 percent, that for "high" risk at 70 percent.
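The pipeline just described--hand-label a small set of occupations, fit a probabilistic classifier on bottleneck-related features, then score the remaining occupations and bin them by threshold--can be sketched in miniature. To be clear, this is a toy illustration only: Frey and Osborne used a Gaussian process classifier on O*NET-derived features, whereas the sketch below substitutes a simple logistic regression trained by gradient descent, and every feature value and label in it is invented.

```python
import math

# Toy feature vectors: (perception/manipulation, creativity, social intelligence),
# each scored in [0, 1], higher meaning the bottleneck matters MORE for the job.
# Label 1 = hand-judged automatable, 0 = not. All values are invented.
labelled = [
    ((0.1, 0.1, 0.1), 1),  # e.g. a data entry keyer
    ((0.2, 0.1, 0.3), 1),  # e.g. a telemarketer
    ((0.3, 0.8, 0.9), 0),  # e.g. a teacher
    ((0.9, 0.7, 0.6), 0),  # e.g. a surgeon
]

def predict(weights, bias, x):
    """Probability of 'computerizable' under a logistic model."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, steps=5000, lr=0.5):
    """Fit the logistic model by per-sample gradient descent on log-loss."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(steps):
        for x, y in data:
            err = predict(w, b, x) - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def risk_band(p):
    """Frey and Osborne's bins: medium risk at 30 percent, high at 70."""
    return "high" if p >= 0.7 else "medium" if p >= 0.3 else "low"

w, b = train(labelled)
# Score an unlabelled occupation, as their algorithm did for the other 632 jobs.
p = predict(w, b, (0.2, 0.2, 0.2))
print(round(p, 3), risk_band(p))
```

The circularity the paper leaves one wondering about shows up even in the toy: the classifier can only be as good as the subjective labels it was fitted to, so "validating" one against the other tests consistency, not correctness.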

Seeing the reasoning laid out in this way one can argue that it proceeded from a set of assumptions that were very much open to question. Even before one gets into the nuances of the methodology they used, the assumption that pattern recognition plus big data had already laid the groundwork for a great transformation of the economy can seem overoptimistic, all the more so when we consider the conclusions to which it led them. Given that the study was completed in 2013, a decade or two works out, more or less, to the 2023-2033 time frame--in which they thought there was an 89 percent chance of the job of the taxi driver and chauffeur being automatable, and a 79 percent chance of the same going for heavy truck drivers (very high odds indeed, and this, again, without any great breakthroughs). Alas, in 2022, with more perspective on such matters, not least the inadequacies of the neural nets controlling self-driving vehicles even after truly vast amounts of machine learning, there still seems considerable room for doubt about that. Meanwhile a good many of the authors' assessments can in themselves leave one wondering at the methods that produced the results. (For instance, while they generally conclude that teaching is particularly hard to automate--they put the odds of elementary and high school teaching being computerized at under 1 percent--they put the odds of middle school teaching being computerized at 17 percent. This is still near the bottom of the list from the standpoint of susceptibility, and well inside the low-risk category, but an order of magnitude higher than the possibility of computerizing teaching at those other levels. What about middle school makes so much difference? I have no clue.) The result is that while hindsight is always sharper than foresight, it seems that had more people actually tried to understand the premises of the paper we would have seen more skepticism toward its more spectacular claims.

The Poverty of Our Educational Statistics

Some years ago Business Insider called the U.S. Federal Reserve Bank of St. Louis' Federal Reserve Economic Data (FRED) database "The Most Amazing Economics Website in the World." Want to have your choice of measurements of inflation in June 1953? How about manufacturing employment in Michigan--or maybe just auto manufacturing in the Detroit-Warren-Dearborn metro area--in December 1999? Or how post-tax corporate profits in the fourth quarter of 2008 compared with those of the same quarter in the preceding five years? With 800,000+ time series FRED may not quite offer the answers to every question a researcher may have--for whom simply having access to the statistics is likely to be only a starting point--but in putting so much a quick keyword search away it sure is handy.
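Indeed, any one of those series is retrievable programmatically as well as through the site's search box. As a small sketch (the fredgraph.csv endpoint is the one behind the site's own download links, and CPIAUCSL is a real consumer price index series; the cosd/coed date-parameter names are an assumption drawn from those links rather than official API documentation):

```python
from urllib.parse import urlencode

# FRED's public CSV download endpoint (the one behind the site's "Download"
# links). The cosd/coed parameter names (observation start/end dates) are an
# assumption based on those links, not an officially documented API.
FRED_CSV = "https://fred.stlouisfed.org/graph/fredgraph.csv"

def fred_csv_url(series_id, start="", end=""):
    """Build a URL for downloading one FRED time series as CSV."""
    params = {"id": series_id}
    if start:
        params["cosd"] = start
    if end:
        params["coed"] = end
    return FRED_CSV + "?" + urlencode(params)

# CPI for all urban consumers, covering the June 1953 inflation question above:
url = fred_csv_url("CPIAUCSL", start="1953-01-01", end="1953-12-31")
print(url)
# Actually fetching it is then one call, e.g. urllib.request.urlopen(url).read()
```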

One might think that in this age of relentless data-hoovering, ever more abundant computing power and widespread statistical training one would, on examining any public issue, especially one as hotly contested as education (they even made the first season of House of Cards about it!), easily find a web site that, to at least some degree, does for American education what the Federal Reserve does with FRED.

If one thinks that then they are wrong. Very, very wrong. Someone looking for something so readily countable as, for example, the number of unfilled openings in the country's schools is likely to have a hard time getting even the most elementary data (never mind a FRED-like wealth of time series)--as the recent arguments about teacher shortages show. (Simply put, people give us numbers about unfilled positions in this school system or that state--but no one seems to have anything to compare them to, to tell us if things are normal, getting worse, even getting better.)

That this is so little talked about--that so few realize that this is the case--can seem to imply that not many people have gone looking for such numbers; that in fact those who have gone looking for what can seem very basic information for anyone trying to come to any conclusions about these matters are much fewer in number than those crowding the media and the Web with their "opinions."

Economic Opportunity and the Demographic Trend

In discussing Japan's low fertility rate and aging population the media (certainly in the U.S., though so far as I can tell, elsewhere as well) has inclined to the crude and the simple-minded, the hot-button and sensationalist--and per usual overlooked important aspects of the matter.

Thus we have stories about Japan as a country of "forty year old virgins," and tales of young people abandoning the prospect of human contact in preference to "virtual" love. But we hear little of what those who look at the business and economics pages know very well--namely that Japan has been in a bad way here for decades, with the most significant consequences for this matter.

After all, in a modern society where economic life is individualistic, setting up a household and having children is very expensive, and little help will be forthcoming from any source, the responsible thing is to attempt it only when one has, among much else, a reasonable expectation of a long-term income at least adequate to raise children "decently"--which is to say, at a middle-class standard. The prerequisite for that has been a "good job" offering sufficient security of tenure at a middle-class wage that one can expect to go on receiving it for a good long time to come. And that is exactly what has become very elusive in recent decades. As the economic growth engine that looked so impressive in the 1950s and 1960s stalled out, and Fordism's vague promises of generalized middle-classness faded, Japan has been a signal case: the fastest-growing of the large industrialized nations became the slowest-growing nearly overnight; to go by one calculation per capita Gross Domestic Product fell by half during this past generation; and the old notion of "lifetime employment" waned what can seem a lifetime ago.

Quite naturally those who would have been inclined to start families (and of course, not everyone is so inclined) refrain, with any impression that this is the case reinforced by what we see overseas, not least in those two oldest of the major Western nations, Italy and Germany--in both their similarities and their differences. Italy comes closer than any other Group of Seven nation to Japan in its shift from brisk growth to stagnation (and, even when we use the more conventional numbers, economic contraction), another spectacle of a modern country with modern attitudes to these things seeing its birth rate fall. (Indeed, in every single year from 2013 forward Italy's fertility rate has been lower than Japan's, averaging 1.3 against Japan's 1.4 for 2013-2020, with the 2020 rate 1.2 against Japan's 1.3.)

Of course, Germany may not seem to fit the profile so neatly given its image as an economic success story. However, it is worth noting that, even apart from the qualified nature of its success (Germany remains a manufacturing power, but is also a long way away from its "Wirtschaftswunder"-era dynamism), and the fact that its social model is moving in the same direction as everyone else's (with all that means for young people starting their lives), its figures vary significantly by region. In particular Germany's high average age obscures the cleavage between what tend to be the older (and less prosperous) eastern regions as against the more youthful (and more prosperous) western regions.

Alas, a media which makes a curse of the word "millennial," and sneers at the idea of working people wanting any security at all as "entitlement" on their part, has little interest in, and less sympathy for, such matters--while knowing full well that stories about them are less likely to do well in the "attention economy" than stories about "virtual girlfriends." This bodes poorly for our understanding of the matter in the recent past, and just as poorly for our ability to understand it in the future--in which the ability of young people to get along economically in the world may not be the only factor, but will nevertheless be a hugely important one, however much the opinion makers of today would like one to think otherwise.

Thursday, September 8, 2022

Has the Theory of Economic Long Waves Ceased to Be Relevant?

The economic theory of "long waves" holds that economic growth follows a 40-to-60-year cycle: the first half, an "upward" wave of 20 to 30 years, is a period of strong growth with recessions few and mild, followed by a "downward" wave that is the opposite, with growth weak and downturns frequent and severe for two to three decades, until it is followed in its turn by a new upward wave beginning the next cycle.

First suggested in the 1920s by the Soviet economist Nikolai Kondratiev (indeed, long waves are often called "Kondratiev waves" in his honor) the idea is controversial in outline and detail (e.g. just what causes them), but nevertheless has numerous, noteworthy adherents across the spectrum of economic theory and ideology who have made considerable use of it in their work, from figures like Joseph Schumpeter on the right to a Michael Roberts on the left. This is, in part, because the theory seemed to be borne out by the events of mid-century. In hindsight the period from the turn of the century to World War I looks like an upward wave, the period of the '20s and '30s and '40s a downward wave, but then the period that followed it, the big post-war boom of the '50s and '60s another upward wave--which was followed by yet another downward wave absolutely no later than the '70s.

So far, so good--but the years since have been another matter. Assuming a downward wave in the '70s we ought to expect another upward wave in the '90s and certainly the early twenty-first century. Indeed, we might expect to have already run through a whole such wave and, just now, find ourselves in, entering or at least approaching another downward wave.
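The arithmetic implied here can be made explicit with a small sketch. (The 50-year cycle split into 25-year half-waves is an illustrative assumption within the theory's stated 20-to-30-year range, and 1973 is simply the commonly cited start of the last downward wave; neither is a precise claim of the theory itself.)

```python
# Sketch of Kondratiev-style long-wave timing, assuming (hypothetically)
# a 50-year full cycle split into 25-year half-waves, starting from a
# downward wave commonly dated to 1973.

HALF_WAVE = 25  # years; the theory's range is 20-30

def phases(start_down, cycles=1):
    """Return (label, start_year, end_year) tuples for alternating waves."""
    out = []
    year = start_down
    for _ in range(cycles):
        out.append(("downward", year, year + HALF_WAVE))
        year += HALF_WAVE
        out.append(("upward", year, year + HALF_WAVE))
        year += HALF_WAVE
    return out

for label, start, end in phases(1973):
    print(f"{label:>8} wave: {start}-{end}")
```

On these assumptions an upward wave would have been due roughly from the late 1990s through the early 2020s--precisely the long boom that, as argued below, never materialized.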

As it happens the U.S. did have a boom in the late '90s. However, in contrast with the wide expectations that this boom was the beginning of something lasting and epochal (remember how Clinton was going to pay down the national debt with that exploding tax revenue?), that boom petered out fast--and so did the associated seeds of growth, like labor productivity growth, which pretty much fell into the toilet in the twenty-first century, and stayed there. Meanwhile the same years were less than booming for the rest of the world--with the Soviet bloc's output collapse bottoming out, with Europe Eurosclerotic and Japan in its lost decade amid an Asia hard-hit by financial crisis, and the Third World generally struggling with the Commodity Depression, the aftereffects of the Volcker shock/debt crisis, and the new frustrations the decade brought (with the "Asian" crisis tipping Brazil over into default).

Of course, as the American boom waned the rest of the world did somewhat better--indeed, depending on which figures one consults, the 2002-2008 period saw some really impressive growth at the global level. But again this was short-lived, cut off by the 2007-2008 financial crisis, from which the world never really recovered before it got kicked while it was down again by pandemic, recession, war. (The numbers, however measured, have been lousy, but if one uses the Consumer Price Index rather than chained-dollar deflators to adjust the "current" figures for inflation then it seems we saw economic collapse in a large part of the world, partially obscured by China's still doing fairly well--though the Chinese miracle was slowing down too.)
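The deflator point in the parenthetical above can be illustrated with a toy calculation (all numbers here are invented for illustration, not actual GDP or price data): deflating the same nominal series with a faster-rising price index yields much weaker measured "real" growth than a slower-rising one does, which is how a switch from chained deflators to the CPI can turn apparent growth into apparent contraction.

```python
# Toy illustration of how the choice of price index changes measured
# "real" growth. All figures are hypothetical, chosen only to show
# the mechanism: same nominal series, two different price indices.

nominal = {2007: 100.0, 2014: 130.0}   # nominal output, arbitrary units
deflator = {2007: 1.00, 2014: 1.15}    # chained-type deflator (hypothetical)
cpi = {2007: 1.00, 2014: 1.35}         # faster-rising CPI-style index (hypothetical)

def real_growth(nom, price):
    """Cumulative real growth 2007-2014: deflate, then compare."""
    real_start = nom[2007] / price[2007]
    real_end = nom[2014] / price[2014]
    return real_end / real_start - 1

print(f"deflator-adjusted growth: {real_growth(nominal, deflator):+.1%}")
print(f"CPI-adjusted growth:      {real_growth(nominal, cpi):+.1%}")
```

With these invented numbers the deflator-adjusted series grows about 13 percent while the CPI-adjusted series shrinks--the same nominal record read as growth or as collapse depending on the index chosen.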

The result is that as of the early 2020s, almost a half century after the downturn (commonly dated to 1973), there simply has been no long boom to speak of. Of course, some analysts remain optimistic, with Swiss financial giant UBS recently suggesting that the latter part of the decade may see better times, helped by businesses' pandemic-era investment in digital technologies (which may yet work out to lasting efficiency gains), public investments in infrastructure and R & D, and a green energy boom. Perhaps. Yet it has seemed to me that there has been more hype than substance in the talk of an automation boom (indeed, business investment seems to have mainly been about short-term crisis management--shoring up supply chains, stocking up on inventory, etc.--while businesses' success in "digitizing" remains arguable); government action remains a long way from really boom-starting levels (the Inflation Reduction Act, only part of which is devoted to green investment, devotes $400 billion or so to such matters over a decade, a comparative drop in the bucket); and while I remain optimistic about the potentials of renewable energy there is room for doubt that the investment we get in it will be anywhere near enough to make for a long upward movement.

In short, far from finding myself bullish about the prospect of a new long wave, I find myself remembering that the theory was a conclusion drawn from a very small sample (these cycles not generally traced further back than the late eighteenth century), which especially after the experience of the last half century can leave us the more doubtful that there was ever much to the theory to begin with. However, I also find myself considering another possibility: that for that period of history such a cycle may actually have been operative--and that the cycle has since been broken, perhaps permanently, along with the predictive value it once seemed to possess.

Tuesday, September 6, 2022

The Vision of Japan as the Future: A Reflection

Back in the '80s it was common for Americans to think of Japan as "the future"--the country on the leading edge of technological development and business practice, the industrial world-beater that was emerging as pace-setter, model and maybe even hegemon.

A few years later, as Japan's economic boom was revealed as substantially a real estate-and-stock bubble that had been as much a function of American weakness as Japanese strength (as America's exploding Reagan-era trade deficit filled the country's bank vaults with dollars, and American devaluation translated to a massive strengthening of the yen); as Japan's supremacy in areas like chip-making proved fragile, and its prospects for leaping ahead of the rest of the world in others (as through breakthroughs in fifth-generation computing, or room-temperature superconductors) proved illusory; and the country went from being the fastest-growing to the most stagnant of the major industrial economies; all that faded, reinforced by the illusions and delusions of America's late '90s tech boom, which shifted the tendency of America's dialogue away from hand-wringing over declinism to "irrational exuberance" (at least, for a short but critical period after the debut of Windows '95).

Yet in hindsight it can seem that Japan never really did stop being the image of the future. It was just the case that observers of conventional mind failed to recognize it because the future was not what people thought it was at the time. They pictured technological dynamism and economic boom--but the future, since that time, has really been technological and economic stagnation, with Japan's "lost decade" turned "lost decades" turned "lost generation" matched by the world's own "lost generation" these past many years. And the same goes for that stagnation's effects, like social withdrawal--Americans, certainly, seeming to notice the phenomenon of the "hikikomori" in Japan long before they noticed it at home.

Thus has it also gone with Japan's demography--the country's people less often marrying and having children, such that even by the standards of a world on the whole going through the "demographic transition" the country's situation has been extreme. According to the Central Intelligence Agency's World Factbook, tiny and ultra-rich Monaco apart, Japan is the oldest country on Earth, with a median age of almost 49 years, and only 12 percent of its population in the 0-14 age bracket. Still, others are not so far behind, with, according to the very same source, dozens of other countries, including every major advanced industrial country but the U.S., having a median age of over forty (and the U.S. not far behind, with a median age of 39), and similarly dwindling numbers of youth (the percentage who are 0-14 in age likewise 12 percent in South Korea, 13 percent in Italy, 14 percent in Germany, with 15 percent the Euro area average).

Considering the last it seems fitting that the trend was already evident at the very peak of that preoccupation with Japan as industrial paragon, 1989, the year of the "1.57 shock" (when the country recorded a Total Fertility Rate of 1.57--at the time regarded as a shockingly low number, though the government would probably be ecstatic were it that high today). The result is that those interested in the difficulties of an aging society are looking at Japan, wondering how it will deal with these difficulties as they manifest there first--and what the country does is as likely to inform others' thinking about how to cope with contemporary reality as it did back when business "experts" seemed transfixed by "Japan Inc." as the epitome of industrial competence.

Thursday, June 30, 2022

A Generation On: Clifford Stoll's 1995 Essay on the Internet

Clifford Stoll's 1995 Newsweek piece "Why the Web Won't Be Nirvana" has been the butt of many a joke over the years, but not because of its title. Had Stoll limited himself to merely arguing what his title claims we might well remember him as having been clearer-eyed than his contemporaries. Had he somewhat more ambitiously argued that the advent of this technology would not in itself deliver even utopia, let alone nirvana, we might have accorded him yet greater plaudits. And had he, in more nuanced fashion, argued that some of the much-hyped developments might not come for a long time, if ever, he would also have been right, as we are all too aware looking at exactly some of those things of which he was so dismissive, like telecommuting or the substitution of online for in-person education, or some radical advance for democracy.

However, he was dismissive of the whole thing, not only in the very near term, but, it could seem, any time frame meaningful to people of the 1990s, and on very particular grounds that seem to me more telling than the prediction itself. While paying the limits of the technology as it stood at the time some heed (noting the sheer messiness of what was online, or the awkwardness of reading a CD-ROM while on a '90s-era desktop), he did not stress the limits of the technology as it was then, and likely to remain for some time, even though he could have very easily done so. (What we are in 2022 vaguely talking about as the "Metaverse" was, at the time, widely portrayed as imminent amid the then-insane hyping of Virtual Reality--while what we really had was pay-by-the-hour dial-up in a time in which Amazon had scarcely been founded, and Google, Facebook, Netflix were far from realization.) Nor did Stoll acknowledge the hard facts of economics and politics and power that would a generation on see even those bosses who have made the biggest fortunes in the history of the world out of technological hype broadcast to the whole world their extreme hostility to the very idea of telecommuting, or make the Internet a weapon in the arsenal of Authority against Dissent as much or more than the reverse. (That was not the kind of thing one was likely to get in Newsweek then any more than now.)

Rather what Stoll based his argument on was the need for "human contact," which he was sure the Internet would fail to provide. The result was that where his predictions were correct he was far off the mark in regard to the reasons why (those matters of economics, politics, power), and totally wrong about other points, like his dismissal of online retail and the possibility that it might threaten the business of brick-and-mortar stores, or the viability of online publishing. The truth is that when it comes to mundane tasks like buying cornflakes and underwear convenience and cheapness count for infinitely more than "human contact" with the hassled, time- and cash-strapped great majority of us--while where the performance of such tasks is concerned human contact is, to put it mildly, overrated. Indeed, it is often a thing many, not all of them introverts, would take some trouble to avoid. (Do you really love encountering pushy salespersons? Long checkout lines where you encounter more rude people? Sales clerks of highly variable competence and personability? For any and all of whom dealing with you may not exactly be the highlight of their own day, one might add?) Indeed, looking at a college classroom in recent years one sees two of his predictions belied, and is reminded that, while Stoll may indeed be right that "[a] network chat line is a limp substitute for meeting friends over coffee," the average college student much prefers that "limp substitute" to chatting with their neighbors, let alone attending to the instructor right there in the room with them, whom large numbers of them happily replace with an online equivalent whenever this becomes practical.

Thus does it go with other "entertainments." Stoll may well be right that "no interactive multimedia display comes close to the excitement of a live concert," but how often do most people get to go to those? In the meantime the multimedia display has something to commend it against the other substitutes (like the Walkman of Stoll's day). And this is even more the case with his remark that no one would "prefer cybersex to the real thing." After all, the "real thing" isn't so easy for many to come by (even when they aren't coping with pandemic and economic collapse), while even for those for whom it might be an option it seems that not merely cybersex but "love with the virtual" is competitive enough with the real kind to make many a social critic wag their tongue (with, I suspect, what is treated as a Japanese phenomenon today, like the "hikikomori," likely to prove far from unique to that country in the years ahead).

Far more than Stoll, Edward Castronova and Jane McGonigal seem to have been on the right track when writing about how poorly our workaday world comes off next to the freedoms, stimulation, satisfaction of virtuality, especially when we consider what that reality is like not for the elite who generally get to make a living offering their opinions in public, but the vast majority of the population on the lower rungs of the social hierarchy, facing a deeply unequal, sneering world which every day and in every possible way tells them "I don't care about your problems." Indeed, while a certain sort of person will smugly dismiss any remark about how the world is changing with a brazen a priori confidence that things are always pretty much the same, it seems far from implausible that things are getting worse that way (it's hard to argue with a thing like falling life expectancy!), while it seems there is reason to think that the virtual is only getting more alluring, with people actually wanting it more, not less, as it becomes more familiar to them--a familiar friend rather than something they know only from the Luddite nightmares of so much bottom-feeding sci-fi. In fact, it does not seem too extreme to suspect that many already have as little to do with the real offline world as they can--dealing with it only out of unavoidable physical necessity, and on terms that, far from making it any more attractive, only underline the superiority of the virtual in life as they have lived it.

Wednesday, June 29, 2022

The Pandemic and Automation: Where Are We Now?

In the years preceding the pandemic there was enormous hype about automation, particularly in the wake of the 2013 Frey-Osborne study The Future of Employment. Following the pandemic the effort was supposedly in overdrive.

However, a close reading of even the few items about the matter that serve up actual examples of such automation (like the installation of a voice recognition-equipped system for receiving customers' orders as they pass through some fast food outlet's drive-thru lane) reveals that they are clearer on intentions and possibilities than actual developments--like polls telling us that "50% of employers are expecting to accelerate the automation of some roles in their companies" (never mind how much employment those employers account for, or how serious their expectations are, or what "accelerate" and "some" actually mean here). Meanwhile, when we look at discussion of actualities rather than possibilities, what is rather more prominent is the discontent of the humans. We read of how workers in jobs that have them dealing with the general public face-to-face are burned-out and fed-up, not of how bosses are replacing those workers with new devices--a decade after robot waiters and the like were supposed to be on the verge of becoming commonplace. We read that industrial robot orders are up--but (as, perhaps, we note that the actual number of robots ordered is not really so staggering) we read far more of supposed "labor shortages" than we do of automation filling in the gaps.
We also know that, as seen from the standpoint of the whole economy, productive investment--without which no one is automating anything--remains depressed compared with what it was pre-crisis (and remember, the world never really got over that Great Recession), while it also does not seem terribly likely to get much better very soon, with that media darling and icon of techno-hype Elon Musk, even as he promises a humanoid Teslabot by September, publicly raving about recession just around the corner and preemptively slashing his work force in anticipation, not in the expectation of employing fewer humans, just fewer humans with actual salaries (while those Teslabots do not seem to be part of the story, go figure).

Why do we see such a disparity between the expectations and the reality? A major reason, I think, is that those who vaguely anticipated some colossal rush to automate the economy imagined vast numbers of ready, or nearly ready, systems able to do the job--a natural result of the press tending to imagine that "innovations" at Technology Readiness Level 1 are actually at Level 9, with the truth coming out when push came to shove, as it so clearly has amid the crisis: the requisite means are not nearly so far along as some would have had us believe. Those observers also underestimated, just as government and the media have generally done, just how disruptive the pandemic was to be--how long the pandemic and its direct disruptions would last, to say nothing of the indirect, and how much all this would matter. In line with the familiar prejudices of the media, lockdowns, strikes and the war in Ukraine get plenty of time and space as sources of economic troubles--but critics of central bank monetary policies get very, very little, with one result that the upsurge in inflation took so many "experts" by surprise. And that inflation, and the tightening of credit that has inevitably followed it, however belated and gradual compared with the talk of a latterday Volcker shock it may be, are hardly the kind of thing that encourages investors. Nor are the unceasing supply chain problems. (If the country can't even keep itself in baby formula, how can it keep itself in the inputs required for a drastic expansion of revolutionary automation?) The result is that those of us watching this scene would do well to take those reports of some drastic increase in the rate of automation with a measure of skepticism.
