When cellular agriculture is mentioned, meat is the first thing people think of, and not without reason. The earliest, largest, and most publicized efforts have been in that area, and not accidentally. The high consumption of meat, its relatively great resource-intensiveness and general environmental impact, and the ethical issues it raises regarding the treatment of animals all make the idea of producing meat while relieving or eliminating those problems very attractive--with the possibility of more reliable supplies and lower prices making it more attractive still.
Yet it is far from being the case that meat, or animal products generally (similar initiatives exist in the areas of dairy, eggs, seafood, and even leather), are the sole object of such efforts. We are also seeing them in the production of plant-based foods (with word of cellular cocoa recently grabbing headlines), and even non-food items like textile fibers (such as cotton) and building materials (such as wood), in the hope of achieving similar environmental and economic advantages.
One may take the proliferation of such efforts, and their expansion into seemingly ever more areas, for a sign of confidence in the technology's progress. However, looking over the headlines one also notices that we hear far more about laboratory achievements and start-ups raising money than about actual products hitting the market. Indeed, after years of being told that consumers would be able to try "clean meat" for themselves--not at some limited-scale special event in some faraway place, but by buying it off the shelf at the local grocery store "before the end of this year"--all that anyone looking for an alternative to the conventionally produced burger or chicken nugget still finds on offer are plant-based concoctions selling (for now, at least) for rather more than "the real thing." Meanwhile the press, reflecting its longstanding prejudices (especially where anything that might alleviate environmental stress is concerned), gives the Malthusian-Luddite brigade ample platform space from which to sneer at the possibility and denounce the idea even were it feasible--and with them, those vegans determined that carnivores desirous of "meat without guilt" shall have no escape from an all-plant-based diet, ever.
I cannot say whether those trying to make cellular agriculture happen, or the naysayers, will prove right about the chances of clean meat becoming available to the consumer any time soon. I have simply seen too many technologies that looked promising, and even worked in the lab, fail to prove practical as consumer goods--and here we have already seen hopes raised and quashed so many times, in the way that so often precedes interest in a concept fizzling out for a long while, that I am put in mind of the self-driving car hype of recent years. Still, there is also no denying that those pursuing cellular agriculture have made enormous strides to reach this point (the price of a burger made from cultured beef has fallen from $330,000 to $10 in a decade's time), while the good the technology can potentially do is far too great to be dismissed--in a world in which, it should never be forgotten, contrary to what some seem to think, the problem of the vast majority of those living on the planet is that they have not too much but too little.
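To put that price decline in perspective, a rough back-of-the-envelope calculation (assuming, purely for illustration, a steady exponential decline over the ten years between the cited figures) gives the implied average annual rate of cost reduction:

```python
# Figures as cited above: a cultured-beef burger cost roughly $330,000
# at the start of the decade and roughly $10 at the end of it.
start_cost = 330_000.0
end_cost = 10.0
years = 10

# Assuming a steady exponential decline (an illustrative assumption,
# not a claim about the actual year-by-year trajectory):
annual_factor = (end_cost / start_cost) ** (1 / years)
annual_decline_pct = (1 - annual_factor) * 100

print(f"Implied average annual cost decline: {annual_decline_pct:.1f}%")
```

On those assumptions the cost would have had to fall by roughly two-thirds every year, on average, for a decade running--a pace comparable to the steepest learning curves seen in solar photovoltaics or genome sequencing.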
Wednesday, September 21, 2022
Tuesday, September 20, 2022
Will Apartment Buildings Have a Bigger Role in American Housing in the Future?
In discussing the future of housing we hear a great deal about houses, but much less about apartments--and this is not at all accidental. A mere 1 in 6 Americans lives in an apartment, and while the numbers vary greatly by region, even in highly urbanized New York state, which has the highest proportion of apartment dwellers of any state in the country, only 1 in 4 residents does so. Moreover, the comparative fewness of apartment dwellers is reinforced by the tendency to think of them as living that way only temporarily (like single people who have yet to settle down), or because they are, frankly, socioeconomically marginal--and therefore of no interest to the media, politicians or other opinion-makers. (Indeed, the marginalization of apartment living goes along with the marginalization of renting in a culture devoted to the ideal of "home ownership," apartment dwellers disproportionately accounting for the country's renters.)
Still, as a glance at Europe's situation makes clear, this is not the only possibility even in a "First World," Western country. Half of the European Union's population resides in apartments, and while the proportion is admittedly greater in the less affluent east and south of the Union, even in wealthy Germany over half do so--and the proportion is higher still in even richer Switzerland.
Is it possible that the U.S. could be more like Europe in the future in this respect?
There seems to me some reason for thinking so, not least in the prospects for technological innovation. There is, for example, the possible effect of technologies like prefabricated homes and the 3-D printing of structures--which, while mostly identified with small buildings, may be extendable well beyond that (a five-story building was produced by the method several years ago). It may well prove that such technologies will achieve economies of scale in the construction of large multiunit structures relative to detached houses, to the advantage of apartments over houses in price.
There is also the prospect of apartment living itself being made more attractive than it has been to date--its disadvantages diminished. One can, for example, imagine that apartments themselves might be improved in such ways as interior design economizing the use of space, or improvements in soundproofing reducing the annoyances caused by noisy neighbors.
Of course, "innovation" has a tendency to materialize in significant fashion where it "sustains" rather than "disrupts" established businesses--and business is more enthusiastic about chasing the dollars of those who have most rather than those who have least, adding to the comfort of the rich rather than relieving the discomfort of the non-rich. However, the differing situation in Europe and elsewhere suggests that even where the U.S. may not be particularly fertile territory for such innovation, it can still happen.
Meanwhile shifts in the demography of the United States may suggest a population more open to apartment living than its predecessors. Young people, we are told, are less car-oriented and less city-averse than their elders, while, perhaps reflecting the harder economic times which have been so formative for them, also leerier of financial commitments. They are also less inclined to marry and raise families--conventionally the moment when people decide to buy houses--and many of them have gone on living at home, in part because of a lack of affordable housing. At the same time the world is moving toward an older age structure, with more older persons--who, not incidentally, have had a harder time saving for retirement--who might find it a good move to sell the family home after the nest has emptied and move into an apartment, relieving themselves of the hassles of our high-maintenance housing, especially if doing so improved their financial situation. Between those young persons and those older persons one could picture a greater demand for affordable apartments, reinforced by other shifts in daily living--for example, the ascent of Transportation-as-a-Service making residence in a dense urban center more attractive than before by making it that much easier to get along without a car.
One can picture the two lines of development (technological innovation improving the attractiveness of apartment living, more singles and older people looking for cheaper and more hassle-free units) converging, and in the process possibly remaking one of the most fundamental aspects of daily life in the United States, how we provide ourselves with shelter.
Monday, September 19, 2022
What Ever Happened to Prefabricated Housing?
Those who tell us that ours is an age of unprecedented technological change making science fiction into reality never tire of talking about their cell phone--but somehow never seem to have anything to say about a great many other things that are fundamental to daily living.
Like housing.
It remains far and away the norm to build homes one at a time, in a process largely consisting of craftsmen (carpenters, etc.) transforming raw materials at best only slightly processed (like planks of wood) into the elements from which they laboriously construct the structure.
It is an artisanal process--remote from the mechanized mass manufacturing of the Industrial Revolution, which in regard to much of home construction seems never to have happened.
A person might wonder if this has not been because, for some reason, this style of housing has "withstood the test of time" as clearly superior to any industrial alternatives.
As is often the case with those who speak pompously of the "test of time," they would be wrong to do so--not least because the provision of adequate amounts of affordable, quality housing has long been recognized as beyond that older method. Indeed, writing of the failings of America's "affluent society" in the 1950s, John Kenneth Galbraith specifically noted housing as one area where the richest country in history, riding high on the post-war boom, was poor--and few would care to argue the point today. As any homebuyer is likely to learn, we have the artisanal method's drawbacks without the individuality, beauty, and durability it offered at its best--living instead in generic boxes that are as obscenely high-maintenance as they are expensive.
Moreover, the reality is that practical industrialization of house-building--through the "prefabrication" of structures--is at this point a generations-old practice that has consistently proven superior from the standpoint of building time and cost, even without the great economies of scale (and general productivity improvements) that might be achieved were the production of such houses carried out on a really widespread basis. (They have also been known to have numerous advantages over conventionally-built homes in such important respects as structural strength and energy-efficiency.)
Given both the failure of the old way and the existence of a proven solution, one can only wonder why prefabricated housing--once commonly spoken of as the "wave of the future"--never became more than the marginal thing it is within today's construction market. Those discussing the issue sometimes talk about consumers finding such homes off-putting because of their poor "reputation"--but I must admit this explanation has never struck me as really satisfying. People buy what they know about, and I am not sure many are even aware the product exists, let alone that it has any reputation with the broad public. And people buy what is made available to them within their price range--while it is clear that builders are not going out of their way to make prefabricated homes widely available. Instead, as anyone perusing discussions of the pros and cons of such homes quickly finds, rather than being produced in large numbers for purchase like any other home, they are something individuals must personally arrange to have constructed on land they buy, suffering a great many expenses and hassles they would be spared when purchasing an existing, conventionally built home.
Of course, that raises the question of why the construction sector has not taken more interest. The most plausible answer seems to be that, contrary to what those besotted with words like "ENTREPRENEURSHIP!" and "INNOVATION!" tell us, the failure of prefab home-building to make much headway is yet another story of a disruptive technology being successfully warded off by established businesses making the most of their position to stick with their established practices.
Nevertheless, it is far from inconceivable that prefabricated housing may have the benefit of significant tailwinds in the coming years. The combination of shortages of skilled labor across the range of building trades (carpenters, framing crews, etc.), and the tougher situation faced by consumers, provides the construction business with more incentive to pursue cost-saving options--while there may be significant synergies between prefabricated housing and other new technologies. Certainly Carl Benedikt Frey and Michael Osborne speculated in their study The Future of Employment that the prefabrication of buildings may play an important role in the automation of construction by simplifying the on-site activity. Meanwhile the construction sector may be facing increasing disruption from another technology--the 3-D printer and its potential to "print" homes. Certainly that technology could prove a competitor to prefabrication--but it is also possible that by disrupting the industry it can also create openings for a greater use of prefabricated structures, especially if each technology proves to be more useful than the other in some tasks. If so then we may belatedly see this very important industry, which has left so few pleased with its delivery of the goods, finally join the modern world, and in the process take a long overdue step toward turning scarcity in this very important area of life into abundance.
Earth4All Deep Dive Paper #8 and the RethinkX Vision of Sustainability
The Earth4All Project was announced at the United Nations' Framework Convention on Climate Change back in November 2020. Led by teams at the Club of Rome, the Norwegian Business School and the Potsdam Institute for Climate Impact Research, its activities include the monthly series of "Deep Dive" papers, the August 2022 edition of which ("The Clean Energy Transformation: A New Paradigm for Social Progress Within Planetary Boundaries") comes from the RethinkX think tank's Director of Global Research Communications, Nafeez Ahmed.
Those who are already acquainted with RethinkX's work, and particularly its Rethinking Humanity report, will recognize much in the Deep Dive paper as familiar from that earlier publication. Once again this paper presents the think tank's argument that the world is going through a historic transition as the cost and material throughput of five essentials for human living--information, energy, transport, food and materials--drop by an order of magnitude or more, a shift potentially as radical as the rise of human civilization itself (moving us from the exploitation-based "Age of Extraction" with which civilization has been synonymous to, one may hope, an "Age of Freedom"). Where energy in particular is concerned, RethinkX argues that the sharp decline in the cost of solar, wind and battery storage relative to the alternatives (which has already left trillions of dollars' worth of recent investment in fossil fuel apparatus "stranded") holds out the possibility not only of a successful transition to a post-fossil fuels, carbon-neutral (or even carbon-negative) energy base, but, by way of a deliberate construction of "excess" capacity along the lines they characterize as "Clean Energy Super Power," of "green" abundance that will make the economics of energy look like what the economics of information have become in the age of the Internet.
All of that having been presented before, the question is what is new in this specific report. What impressed me on that level was its treatment of two issues relevant to the energy transition, namely the Energy Return On Investment (EROI) that renewables yield, and the material throughput required to build a renewable energy base on a global scale--both points of particular interest because renewables-bashers have made so much of them. As Ahmed shows here, EROI is not the obstacle some make it out to be--the more so as he finds that the EROI of fossil fuels has consistently been overestimated, while that of renewables has consistently been underestimated. That the EROI of fossil fuels tends to be "measured right at the well-head rather than the most relevant point, which is where the energy enters the economy as electricity or petrol" in itself leaves fossil fuels with a lower EROI than even the estimates of renewables' EROI that Ahmed considers unfairly low. As he argues, estimates of the useful life of photovoltaics tend to lowball the figure (estimating twenty to thirty years when more realistically they may be good for forty to fifty), and to treat batteries as a deduction from renewables' EROI when they can easily boost it (one empirical examination showing they "actually increased EROI by making available energy that would otherwise be lost to curtailment")--even before one gets into such "phase-changing" possibilities as he anticipates from a shift to renewables on a really large scale (epitomized by the Clean Energy Super Power concept).
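The two effects Ahmed describes--where along the supply chain EROI is measured, and how long a panel is assumed to last--can be made concrete with a little arithmetic. All the numbers in the sketch below are hypothetical, chosen only to illustrate the mechanisms; they are not figures from Ahmed's paper.

```python
# Illustrative EROI arithmetic. Every number here is a made-up example,
# not data from the Deep Dive paper.

def eroi(energy_out: float, energy_invested: float) -> float:
    """Energy Return On Investment: useful energy out per unit of energy invested."""
    return energy_out / energy_invested

# Fossil fuel measured "at the well-head" looks favorable...
wellhead = eroi(energy_out=30.0, energy_invested=1.0)

# ...but if only a fraction of that energy actually enters the economy as
# electricity or petrol, and further energy is spent refining and
# transporting it, the delivered-energy EROI is far lower.
delivered = eroi(energy_out=30.0 * 0.2, energy_invested=1.0 + 1.0)

# Photovoltaics: lengthening the assumed useful life from 25 to 45 years
# scales lifetime output (and hence EROI) proportionally, for the same
# up-front energy investment.
pv_25yr = eroi(energy_out=10.0, energy_invested=1.0)
pv_45yr = eroi(energy_out=10.0 * (45 / 25), energy_invested=1.0)

print(wellhead, delivered, pv_25yr, pv_45yr)
```

In this toy comparison the fossil source's apparent 30:1 ratio collapses to 3:1 once measured at the point of delivery, while the photovoltaic figure nearly doubles simply by crediting panels with their more realistic lifespan--exactly the kind of accounting asymmetry Ahmed argues has skewed the debate.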
Where material throughput is concerned he observes that the construction of the requisite base would be coming not on top of, but instead of, the resource demands of sustaining and extending the existing fossil fuel base, the existing (and increasingly uneconomical) apparatus of which can be regarded as a "vast global repository" of materials for use (the steel in old offshore oil rigs, for example, convertible into raw material for new windmill towers), all while the recycling of the requisite materials would be quite ample to close the supply gaps--the bottlenecks here, again, wildly exaggerated by those euphemistically called "skeptics" of the transition.
The result is that Ahmed's case bolsters further still what has long been the strongest aspect of the RethinkX analysis--the trend in the energy market, and the increasing technical feasibility of a world of "green" yet abundant energy. However, it seems worth acknowledging that it does not bolster what may have most needed bolstering, namely the think tank's treatment of other dimensions of the matter, not least the transportation and food sectors, whose interaction with the energy transition is crucial to their vision of sustainability, to say nothing of a new civilization. Transportation, after all, is a major user of energy, and the shift from human-driven, gasoline-burning cars generally operated on an individual basis to self-driving electric cars (Electric-Autonomous Vehicles, or E-AVs) that make possible "Transportation as a Service" (TAAS) is crucial to their vision of making transport cheaply and conveniently available to all. The RethinkX analysts also look to cellular agriculture to sharply cut greenhouse gas emissions from that quarter and open up vast amounts of land to climate change-offsetting reforestation. Where all that is concerned RethinkX's prior reports rigorously work out just what could be expected to happen were safe, reliable, affordable E-AVs and cost-competitive cellular agriculture to hit the market. However, that arrival in the market is another matter. Their expectations in the area of transport are based not on the kind of robust analysis of easily observable (and increasingly widely acknowledged) price trends on which RethinkX's claims regarding energy rest, but on comparatively opaque expert pronouncements that have already proved overoptimistic. (In RethinkX's 2017 report, 2021 was supposed to be the year of the great disruption in which TAAS began the displacement of our current transport model. Alas, it has not been so--with many expecting no such displacement for a good long while to come.)
If somewhat more data-based, their predictions regarding food may similarly prove overoptimistic. (Their 2019 report on the matter had the first cellular meat products hitting the market in 2022--while as of July 2022 not only had no such thing happened, but it remained uncertain when it actually would.)
Rather than a reassessment of the earlier analysis on that score, what RethinkX offers here are the same essential predictions, albeit with greater vagueness about the time frame (inclining toward references to significant movement over the broader 10-15 year span predicted for the more general transformation, rather than predicting specific developments at points throughout it). Additionally the report has nothing to add about the area RethinkX has had least to say about, materials (the one essential not yet covered by a report of its own). Nevertheless, if "The Clean Energy Transformation" falls short of a completion and update of the broad RethinkX vision, it remains a useful summation of that vision, and well warrants the attention of those who would like an accessible introduction to it, as well as those interested in the think tank's most recent word on the progress of renewable energy, the document's principal concern.
Those who are already acquainted with RethinkX's work, and particularly its Rethinking Humanity report, will recognize much in the Deep Dive paper as familiar from that earlier publication. Once again this paper presents the think tank's argument that the world is going through a historic transition as the cost and material throughput of five essentials for human living--information, energy, transport, food and materials--drop by an order of magnitude or more, which will potentially be as radical as the rise of human civilization itself (shifting us from the exploitation-based "Age of Extraction" with which civilization has been synonymous to, one may hope, an "Age of Freedom"). Where energy in particular is concerned RethinkX's argument goes that the sharp decline in cost of solar, wind and battery storage relative to the alternatives (which has already left trillions of dollars' worth of recent investment in fossil fuel apparatus "stranded") holds out the possibility not only of a successful transition to a post-fossil fuels, carbon-neutral (or even carbon-negative) energy base, but by way of a deliberate construction of "excess" capacity along the lines they characterize as "Clean Energy Super Power," "green" abundance that will make the economics of energy look like what the economics of information have become in the age of the Internet.
All of that having been presented before the question is what is new in this specific report. What impressed me on that level was its treatment of two issues relevant to this energy transition, namely the Energy Return On Investment (EROI) that renewables yield, and the material throughput required to build a renewable energy base on a global scale--both points of particular interest because renewables-bashers have made so much of these issues. As Ahmed shows here, EROI is not the obstacle some make it out to be--the more in as he now finds that the EROI for fossil fuels has consistently been overestimated, while that for renewables has consistently been underestimated. That the EROI for fossil fuels tends to be "measured right at the well-head rather than the most relevant point, which is where the energy enters the economy as electricity or petrol" in itself leaves fossil fuels with a lower EROI than the estimates of renewables' EROI that Ahmed considers to be unfairly low. As he argues, estimates for the useful life of photovoltaics tend to lowball the figure (estimating twenty to thirty years when more realistically they may be good for forty to fifty years), and to treat batteries as a deduction from renewables' EROI when they can easily boost it (one empirical examination showing they "actually increased EROI by making available energy that would otherwise be lost to curtailment"), even before one gets into such "phase-changing" possibilities as he anticipates from a shift to renewables on a really large scale (epitomized by the Clean Energy Super Power concept). 
Where material throughput is concerned he observes that the construction of the requisite base would be coming not on top of, but instead of, the resource demands of sustaining and extending the existing fossil fuel base, the existing (and increasingly uneconomical) apparatus of which can be regarded as a "vast global repository" of materials for use (the steel in old offshore oil rigs, for example, convertible into raw material for new windmill towers), all while the recycling of the requisite materials would be quite ample to close the supply gaps--the bottlenecks here, again, wildly exaggerated by those euphemistically called "skeptics" of the transition.
The result is that Ahmed's case bolsters further still what has for a long time been the strongest aspect of the RethinkX analysis--the trend in the energy market, and the increasing technical feasibility of a world of "green" yet abundant energy. However, it seems worth acknowledging that it does not bolster what may have most needed bolstering, namely the think tank's treatment of other dimensions of the matter, not least the transportation and food sectors, whose interaction with the energy transition is crucial to their vision of sustainability, to say nothing of a new civilization. Transportation, after all, is a major user of energy, and the shift from human-driven, gasoline-burning cars generally operated on an individual basis to self-driving electric cars (Electric-Autonomous Vehicles, or E-AVs) that make possible "Transportation as a Service" (TAAS) is crucial to their vision of making transport cheaply and conveniently available to all. The RethinkX analysts also look to cellular agriculture to sharply cut greenhouse gas emissions from that quarter and open up vast amounts of land to climate change-offsetting reforestation. Where all that is concerned RethinkX's prior reports rigorously work out just what could be expected to happen were safe, reliable, affordable E-AVs and cost-competitive cellular agriculture to hit the market. However, that arrival in the market is another matter. Their expectations in the area of transport are based not on the kind of robust analysis of easily observable (and increasingly widely acknowledged) price trends on which RethinkX's claims regarding energy have been based, but on comparatively opaque expert pronouncements that have already proved overoptimistic. (In RethinkX's 2017 report 2021 was supposed to be the year of the great disruption in which TAAS began the displacement of our current transport model. Alas, it has not been so--with many expecting no such displacement for a good long while to come.)
If somewhat more data-based, their predictions regarding food may similarly prove overoptimistic. (Their 2019 report on the matter had the first cellular meat products hitting the market in 2022--while as of July 2022 not only had no such thing happened, but it remained uncertain when it actually would.)
Rather than a reassessment of the earlier analysis on that score, what RethinkX offers here are the same essential predictions, albeit with greater vagueness about the time frame (inclining to references to significant movement over the broader 10-15 year time frame they predicted for the more general transformation, rather than predicting more specific developments at points throughout it). Additionally the report has nothing to add about the area RethinkX has had least to say about, materials (still not covered by a report of its own). Nevertheless, if "The Clean Energy Transformation" document falls short of a completion and update of the broad RethinkX vision, it remains a useful summation of that vision, and well warrants attention from those who would like an accessible introduction to it, as well as those interested in the think tank's most recent word on the progress of renewable energy that is the document's principal concern.
Friday, September 16, 2022
What Are the Odds That Teaching Will Be Automated in the Very Near Term?
Recent months have brought a great wave of news stories about a shortage of teachers approaching crisis levels--and the possibility that even if such a shortage is not already underway (a difficult thing to establish one way or the other given the scarcity of really comprehensive educational statistics) it may be imminent, as exhausted instructors leave the profession much more quickly than anticipated, new entrants are deterred from joining in the expected numbers by the conditions of the job, or a combination of the two widens the gap between need and supply.
One question I have found myself wondering about, given the talk we have been hearing of automation, has been the expectations regarding the automation of teaching specifically. Not long ago I considered Ray Kurzweil's thoughts about the matter at the turn of the century--which, as with many of his predictions in the relevant areas, were premised on forecasts of advance in particular technological areas that have since appeared overoptimistic (notably the speed at which pattern-recognizing neural nets and all premised on them would develop) and a naiveté regarding the social dimensions of the subjects about which he wrote (in this case, the school's function as "babysitter").
However, not everyone has been so optimistic--even those who have, by any reasonable measure, been optimists about automation. Exemplary is the study Carl Benedikt Frey and Michael Osborne produced back in 2013, which played so important a part in the conversation about automation and employment in the '10s. That study included in its appendix a table listing over 700 occupations and the chances of their being "computerized"--"potentially automatable over some unspecified number of years, perhaps a decade or two."
The authors determined that the jobs of data entry keyers, telemarketers and new accounts clerks had a 99 percent chance of being "computerizable." Contrary to what might be expected by those who make much of "high-knowledge" occupations, Frey and Osborne even anticipated fairly high odds of a great deal of scientific work becoming automated (with atmospheric and space scientists having a 67 percent chance of having their jobs automated), with, in spite of what may be thought from the popularity of the sneer "Learn to code," a near-even chance of the same happening with computer programming (48 percent). But, teaching assistants apart, they put the odds of computerizing any teaching occupation at not much better than 1 in 4 (a 27 percent chance in the case of middle school technical teachers), while putting the odds of computerizing postsecondary (college) teaching at 3 percent, and those of computerizing preschool, elementary and secondary school teaching at under 1 percent.
In short, their analysis suggests that, far from being easy to automate, teaching will be exceptionally difficult to automate satisfactorily. The result is that even if a great wave of automation swept through the rest of the economy--for what it is worth, Frey and Osborne calculated that nearly half of U.S. jobs were, in the absence of significant political or economic obstacles (legal barriers, particularly poor investment conditions, etc.), at "high" (70 percent-plus) risk of such computerization by the early 2030s--automation would have little impact on a great many teaching jobs. One can easily picture a situation in which job-seekers would find themselves with fewer alternatives to teaching--meaning relatively more people pursuing such positions, not fewer (at a time in which an aging population structure would likely mean fewer students, and fewer job openings for that reason). In the nearer term, in the absence of any such pressure sending people toward the occupation, this seems additional reason to think automation unlikely to be a solution to the problem.
Revisiting Carl Benedikt Frey and Michael Osborne's The Future of Employment
Back in September 2013 Carl Benedikt Frey and Michael Osborne presented the working paper The Future of Employment. Subsequently republished as an article in the January 2017 edition of the journal Technological Forecasting and Social Change, the item played a significant part in galvanizing the debate about automation--and indeed produced panic in some circles. (It certainly says something that it got former Treasury Secretary Larry Summers, a consistent opponent of government action to redress downturn and joblessness--not least during the Great Recession, with highly controversial result--talking about how in the face of automation governments would "need to take a more explicit role in ensuring full employment than has been the practice in the U.S.," considering such possibilities as "targeted wage subsidies," "major investments in infrastructure" and even "direct public employment programmes.")
Where the Frey-Osborne study is specifically concerned I suspect most of those who talked about it paid attention mainly to the authors' conclusion, and indeed an oversimplified version of that conclusion that gives the impression that much of the awareness among those who should have had it firsthand was actually secondhand. (Specifically they turned the authors' declaration that "According to our estimate, 47 percent of total U.S. employment is" at 70 percent-plus risk of being "potentially automatable over some unspecified number of years, perhaps a decade or two"--potentially because economic conditions and the political response to the possibility were outside their study's purview--into "Your job is going to disappear very soon. Start panicking now, losers!")
This is, in part, because of how the media tends to work--not only favoring what will grab attention and ignoring the "boring" stuff, but because of how it treats those whom it regards as worth citing, on which point Carl Sagan is worth citing by way of background. As he observed, in science there are at best experts (people who have studied an issue more than others and whom it may be hoped know more than others), not authorities (people whose "Because I said so" is a final judgment that decides how the situation actually is for everyone else). However the mainstream media--not exactly accomplished at understanding the scientific method, let alone the culture of science shaped by that method and necessary for its application--does not even understand the distinction, let alone respect it. Accordingly it treats those persons it consults not as experts who can help explain the world to its readers, listeners and viewers so as to help them learn about it, think about it, understand it and form their own conclusions, but as authorities whose pronouncements are to be heeded unquestioningly, like latter-day oracles. And, of course, in a society gone insane with the Cult of the Good School, and regarding "Oxford" as the only school on Earth that can outdo "Harvard" in the snob stakes, dropping the name in connection with the pronouncement counts for a lot with people of crude and conventional mind. (People from Oxford said it, so it must be true!)
However, part of it is the character of the report itself. The main text is 48 pages long, and written in that jargon-heavy and parenthetical reference-crammed style that screams "Look how scientific I'm being!" It also contains some rather involved equations that, on top of including those Greek symbols that I suspect immediately scare most people off (the dreaded sigma makes an appearance), are not explained as accessibly as they might be, or even as fully as they might be. (The mathematical/machine learning jargon gets particularly thick here--"feature vector," "discriminant function," "Gaussian process classifier," "covariance matrix," "logit regression," etc.--while in explaining their formulas the authors do not work through a single example such as might show how they worked out the probability for a particular job, even as they left the reader with plenty of questions about just how they quantified all that O*NET data. Certainly I don't think anyone would find attempting to replicate the authors' results a straightforward thing on the basis of their explanations.) Accordingly it is not what even the highly literate and mathematically competent would call "light reading"--and unsurprisingly, few seem to have really tried to read it, or make sense of what they did read, or ask any questions. (This is even as, alas, what they did not understand made them more credulous rather than less so--because not only did people from Oxford say it, but they said it with equations!)
Still, the fact remains that one need not be a specialist in this field to get much more of what is essential than the press generally bothered with. Simply put, Frey and Osborne argued (verbally) that progress in pattern recognition and big data, in combination with improvements in the price and performance of sensors, and the mobility and "manual dexterity" of robots, were making it possible to move automation beyond routine tasks that can be reduced to explicit rules by computerizing non-routine cognitive and physical tasks--an example of which they made much being the ability of a self-driving car to navigate a cityscape (Google's progress at the time of their report's writing apparently a touchstone for them). Indeed, the authors go so far as to claim that "it is largely already technologically possible to automate almost any task, provided that sufficient amounts of data are gathered for pattern recognition," apart from situations where three particular sets of "inhibiting engineering bottlenecks" ("perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks") interfere, and workarounds prove inadequate to overcome the interference. (The possibility of taking a task and "designing the difficult bits out"--of, for example, replacing the non-routine with the routine, as by relying on prefabrication to simplify the work done at a construction site--is a significant theme of the paper.)
How did the authors determine just where those bottlenecks became significant, and how much so? Working with a group of machine learning specialists they took descriptions of 70 occupations from the U.S. Department of Labor Occupational Information Network (O*NET) online database and "subjectively hand-labelled" them as automatable or non-automatable. They then checked their subjective assessments against what they intended to be a more "objective" process to confirm that their assessments were "systematically and consistently related to the O*NET information." This consisted of:
1. Dividing the three broad bottlenecks into nine more discrete requirements for task performance (e.g. rather than "perception and manipulation," the ability to "work in a cramped space," or "manual dexterity").
2. On the basis of the O*NET information, working out just how important the trait was, and how high the level of competence in it, for the performance of the task (for instance, whether a very high level of manual dexterity was very important in a task, or a low level of such importance), and
3. Using an algorithm (basically, running these inputs through the formulas I mentioned earlier) to validate the subjective assessments--and, it would seem, using those assessments to validate the algorithm.
They then used the algorithm to establish the probability of the other 632 jobs under study, on the basis of their features, being similarly computerizable over the time frame with which they concerned themselves (unspecified, but inclining to the one-to-two decade range), with the threshold for "medium" risk set at 30 percent, that for "high" risk at 70 percent.
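The shape of the pipeline can be caricatured in a few lines of code. In the sketch below a plain logistic score stands in for the Gaussian process classifier the authors actually used, and all of the weights and occupation scores are invented for illustration; only the nine bottleneck variables and the 30/70 percent risk thresholds come from the paper:

```python
import math

# Toy caricature of the Frey-Osborne pipeline. The weights and occupation
# scores are invented; a logistic score substitutes for the authors'
# Gaussian process classifier.
BOTTLENECKS = ["finger dexterity", "manual dexterity", "cramped work space",
               "originality", "fine arts", "social perceptiveness",
               "negotiation", "persuasion", "assisting and caring for others"]

def p_computerizable(scores, weights, bias=4.0):
    """Map nine bottleneck scores (0 = irrelevant, 1 = critical) to a
    computerization probability: the heavier an occupation leans on
    bottleneck tasks, the lower the odds."""
    z = bias - sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))

def risk_band(p):
    """The paper's bands: below 30 percent low, 30-70 medium, above 70 high."""
    return "low" if p < 0.30 else ("medium" if p <= 0.70 else "high")

weights = [1.5] * len(BOTTLENECKS)   # hypothetical: all bottlenecks equal
telemarketer = [0.1] * 9             # few bottleneck demands
teacher = [0.2, 0.1, 0.0, 0.6, 0.1, 0.9, 0.6, 0.8, 0.9]  # heavy social demands

for name, scores in [("telemarketer", telemarketer), ("teacher", teacher)]:
    p = p_computerizable(scores, weights)
    print(f"{name}: {p:.2f} ({risk_band(p)})")
```

Even in this crude form the mechanism is visible: once the bottleneck weights are fixed, an occupation's risk category follows mechanically from how its O*NET-style scores are quantified--which is exactly why the quantification step deserves the scrutiny it rarely got.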
Seeing the reasoning laid out in this way one can argue that it proceeded from a set of assumptions that were very much open to question. Even before one gets into the nuances of the methodology they used, the assumption that pattern recognition plus big data had already laid the groundwork for a great transformation of the economy can seem overoptimistic, the more so as we consider the conclusions to which it led them. Given that the study was completed in 2013, a decade or two works out to (more or less) the 2023-2033 time frame--in which they thought there was an 89 percent chance of the job of the taxi driver and chauffeur being automatable, and a 79 percent chance of the same going for heavy truck drivers (very high odds indeed, and this, again, without any great breakthroughs). Alas, in 2022, with more perspective on such matters, not least the inadequacies of the neural nets controlling self-driving vehicles even after truly vast amounts of machine learning, there still seems considerable room for doubt about that. Meanwhile a good many of the authors' assessments can in themselves leave one wondering at the methods that produced the results. (For instance, while they generally conclude that teaching is particularly hard to automate--they put the odds of elementary and high school teaching being computerized at under 1 percent--they put the odds of middle school teaching being computerized at 17 percent. This is still near the bottom of the list from the standpoint of susceptibility, and well inside the low-risk category, but an order of magnitude higher than the possibility of computerizing teaching at those other levels. What about middle school makes so much difference? I have no clue.) The result is that while hindsight is always sharper than foresight, it seems that had more people actually tried to understand the premises of the paper we would have seen more skepticism toward its more spectacular claims.
The Poverty of Our Educational Statistics
Some years ago Business Insider called the U.S. Federal Reserve Bank of St. Louis' Federal Reserve Economic Data (FRED) database "The Most Amazing Economics Website in the World." Want to have your choice of measurements of inflation in June 1953? How about manufacturing employment in Michigan--or maybe just auto manufacturing in the Detroit-Warren-Dearborn metro area--in December 1999? Or how post-tax corporate profits in the fourth quarter of 2008 compared with those of the same quarter in the preceding five years? Offering 800,000+ time series, FRED may not quite answer every question a researcher may have (for whom simply having access to the statistics is likely to be only a starting point), but in putting so much a quick keyword search away it sure is handy.
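For the curious, FRED exposes all of this through a public web API as well as the website (a free API key is issued on registration; the key below is a placeholder). A minimal sketch of building a query for that June 1953 inflation figure, using CPIAUCSL, FRED's headline consumer price index series:

```python
from urllib.parse import urlencode

# Sketch of a FRED API query (real service; "YOUR_KEY" is a placeholder
# for the free API key FRED issues on registration).
FRED_OBS = "https://api.stlouisfed.org/fred/series/observations"

def fred_observations_url(series_id, api_key, start=None, end=None):
    """Build a query for a series' observations over an optional date range."""
    params = {"series_id": series_id, "api_key": api_key, "file_type": "json"}
    if start:
        params["observation_start"] = start   # YYYY-MM-DD
    if end:
        params["observation_end"] = end
    return FRED_OBS + "?" + urlencode(params)

# CPIAUCSL is FRED's headline CPI series; with a real key substituted this
# query would return the CPI observations for June 1953 as JSON.
url = fred_observations_url("CPIAUCSL", "YOUR_KEY",
                            start="1953-06-01", end="1953-06-30")
print(url)
```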
One might think that in this age of relentless data-hoovering, ever more abundant computing power and widespread statistical training one would, on examining any public issue, especially one as hotly contested as education (they even made the first season of House of Cards about it!), easily find a web site that, in at least some degree, does for American education what the Federal Reserve does with FRED.
If one thinks that, then one is wrong. Very, very wrong. Someone looking for something so readily countable as, for example, the number of unfilled openings in the country's schools is likely to have a hard time getting even the most elementary data (never mind a FRED-like wealth of time series)--as the recent arguments about teacher shortages show. (Simply put, people give us numbers about unfilled positions in this school system or that state--but no one seems to have anything to compare them to, to tell us if things are normal, getting worse, or even getting better.)
That this is so little talked about--that so few realize that this is the case--can seem to imply that not many people have gone looking for such numbers; that in fact those who have gone looking for what can seem very basic information for anyone trying to come to any conclusions about these matters are much fewer in number than those crowding the media and the Web with their "opinions."
Economic Opportunity and the Demographic Trend
In discussing Japan's low fertility rate and aging population the media (certainly in the U.S., though so far as I can tell, elsewhere as well) has inclined to the crude and the simple-minded, the hot-button and sensationalist--and, per usual, overlooked important aspects of the matter.
Thus we have stories about Japan as a country of "forty year old virgins," and tales of young people abandoning the prospect of human contact in preference for "virtual" love. But we hear little of what those who read the business and economics pages know very well--namely that Japan has been in a bad economic way for decades, with the most significant consequences for this matter.
After all, in a modern society where economic life is individualistic, setting up a household and having children is very expensive, and little help will be forthcoming from any source, the responsible thing is to attempt it only when one has, among much else, a reasonable expectation of a long-term income at least adequate to raise children "decently"--which is to say, at a middle-class standard. The prerequisite for that has been a "good job"--a position paying a middle-class wage with sufficient security of tenure that one can expect to go on receiving that wage for a good long time to come. And that is exactly what has become very elusive in recent decades, as the economic growth engine that looked so impressive in the 1950s and 1960s stalled out and Fordism's vague promises of generalized middle-classness faded. Japan has been a signal case: the fastest-growing of the large industrialized nations became the slowest-growing nearly overnight, and, to go by one calculation, per capita Gross Domestic Product fell by half during this past generation, all as the old notion of "lifetime employment" waned what can seem a lifetime ago.
Quite naturally many of those who would have been inclined to marry and start families (and of course, not everyone is so inclined) refrain, with any impression that this is the case reinforced by what we see overseas, not least in those two oldest of the major Western nations, Italy and Germany--instructive in their similarities and their differences alike. Italy comes closer than any other Group of Seven nation to Japan in its shift from brisk growth to stagnation (and, even by the more conventional numbers, economic contraction), in another spectacle of a modern country with modern attitudes to these things seeing its birth rate fall. (Indeed, in every single year from 2013 forward Italy's fertility rate has been lower than Japan's, averaging 1.3 against Japan's 1.4 for 2013-2020, with the 2020 rate 1.2 against Japan's 1.3.)
Of course, Germany may not seem to fit the profile so neatly given its image as an economic success story. However, it is worth noting that, even apart from the qualified nature of its success (Germany remains a manufacturing power, but is also a long way away from its "Wirtschaftswunder"-era dynamism), and the fact that its social model is moving in the same direction as everyone else's (with all that means for young people starting their lives), its figures vary significantly by region. In particular Germany's high average age obscures the cleavage between what tend to be the older (and less prosperous) eastern regions as against the more youthful (and more prosperous) western regions.
Alas, a media which makes a curse of the word "millennial," and sneers at the idea of working people wanting any security at all as "entitlement" on their part, has little interest in and less sympathy for such matters--while knowing full well that stories about them are less likely to do well in the "attention economy" than stories about "virtual girlfriends." This has boded poorly for our understanding of the matter in the recent past, and it bodes poorly for our ability to understand it in the future--in which the ability of young people to get along economically in the world may not be the only factor, but will nevertheless be a hugely important one, however much the opinion makers of today would like one to think otherwise.
Thursday, September 8, 2022
Has the Theory of Economic Long Waves Ceased to Be Relevant?
The economic theory of "long waves" holds that economic growth follows a 40-60 year cycle: the first half is an "upward" wave of 20-30 years--a period of strong growth with recessions few and mild--followed by a "downward" wave that is its opposite, with growth weak and downturns frequent and severe for two to three decades, until it is succeeded in its turn by the upward wave of the next cycle.
First suggested in the 1920s by the Soviet economist Nikolai Kondratiev (indeed, long waves are often called "Kondratiev waves" in his honor) the idea is controversial in outline and detail (e.g. just what causes them), but nevertheless has numerous, noteworthy adherents across the spectrum of economic theory and ideology who have made considerable use of it in their work, from figures like Joseph Schumpeter on the right to a Michael Roberts on the left. This is, in part, because the theory seemed to be borne out by the events of mid-century. In hindsight the period from the turn of the century to World War I looks like an upward wave, the period of the '20s and '30s and '40s a downward wave, but then the period that followed it, the big post-war boom of the '50s and '60s another upward wave--which was followed by yet another downward wave absolutely no later than the '70s.
So far, so good--but the years since have been another matter. Assuming a downward wave in the '70s we ought to expect another upward wave in the '90s and certainly the early twenty-first century. Indeed, we might expect to have already run through a whole such wave and, just now, find ourselves in, entering or at least approaching another downward wave.
As it happens the U.S. did have a boom in the late '90s. However, in contrast with the wide expectations that this boom was the beginning of something lasting and epochal (remember how Clinton was going to pay down the national debt with that exploding tax revenue?), that boom petered out fast--and so did the associated seeds of growth, like labor productivity growth, which pretty much fell into the toilet in the twenty-first century, and stayed there. Meanwhile the same years were less than booming for the rest of the world--with the Soviet bloc's output collapse bottoming out, with Europe Eurosclerotic and Japan in its lost decade amid an Asia hard-hit by financial crisis, and the Third World generally struggling with the Commodity Depression, the aftereffects of the Volcker shock/debt crisis, and the new frustrations the decade brought (with the "Asian" crisis tipping Brazil over into default).
Of course, as the American boom waned the rest of the world did somewhat better--indeed, depending on which figures one consults, the 2002-2008 period saw some really impressive growth at the global level. But again this was short-lived, cut off by the 2007-2008 financial crisis, from which the world never really recovered before it got kicked while it was down again by pandemic, recession, war. (The numbers, as measured in any manner, have been lousy, but if one uses the Consumer Price Index rather than chained-dollar-based deflators to adjust the "current" figures for inflation then it seems we saw economic collapse in a large part of the world, partially obscured by China's still doing fairly well--though the Chinese miracle was slowing down too.)
The result is that as of the early 2020s, almost a half century after the downturn (commonly dated to 1973), there simply has been no long boom to speak of. Of course, some analysts remain optimistic, with Swiss financial giant UBS recently suggesting that the latter part of the decade may mean better times ahead, helped by businesses' investment in the digital technologies that kept them operating during the pandemic (which may yet work out to lasting efficiency gains), public investments in infrastructure and R & D, and a green energy boom. Perhaps. Yet it has seemed to me that there has been more hype than substance in the talk of an automation boom (indeed, business investment seems to have mainly been about short-term crisis management--shoring up supply chains, stocking up on inventory, etc.--while business' success in "digitizing" remains arguable); government action remains a long way from really boom-starting levels (the Inflation Reduction Act, only part of which is devoted to green investment, devotes $400 billion or so to such matters over a decade, a comparative drop in the bucket); and while I remain optimistic about the potential of renewable energy there is room for doubt that the investment we get in it will be anywhere near enough to make for a long upward movement.
In short, far from finding myself bullish about the prospect of a new long wave, I find myself remembering that the theory was a conclusion drawn from a very small sample (these cycles are not generally traced further back than the late eighteenth century), which, especially after the experience of the last half century, can leave us the more doubtful that there was ever much to the theory to begin with. However, I also find myself considering another possibility: that for that period of history such a cycle may actually have been operative--and that the cycle has since broken, perhaps permanently, along with the predictive value it once seemed to possess.
Tuesday, September 6, 2022
The Vision of Japan as the Future: A Reflection
Back in the '80s it was common for Americans to think of Japan as "the future"--the country on the leading edge of technological development and business practice, the industrial world-beater that was emerging as pace-setter, model and maybe even hegemon.
A few years later, as Japan's economic boom was revealed as substantially a real estate-and-stock bubble that had been as much a function of American weakness as Japanese strength (as America's exploding Reagan-era trade deficit filled the country's bank vaults with dollars, and American devaluation translated to a massive strengthening of the yen); as Japan's supremacy in areas like chip-making proved fragile, and its prospects for leaping ahead of the rest of the world in others (as through breakthroughs in fifth-generation computing, or room-temperature superconductors) proved illusory; and the country went from being the fastest-growing to the most stagnant of the major industrial economies; all that faded--a fading reinforced by the illusions and delusions of America's late '90s tech boom, which shifted America's dialogue away from hand-wringing over declinism to "irrational exuberance" (at least, for a short but critical period after the debut of Windows 95).
Yet in hindsight it can seem that Japan never really did stop being the image of the future. It was just that observers of conventional mind failed to recognize it, because the future was not what people thought it was at the time. They pictured technological dynamism and economic boom--but the future, since that time, has really been technological and economic stagnation, with Japan's "lost decade" turned "lost decades" turned "lost generation" matched by the world's own "lost generation" these past many years. And the same goes for that stagnation's effects, like social withdrawal--Americans, certainly, seeming to notice the phenomenon of the "hikikomori" in Japan long before they noticed it at home.
Thus has it also gone with Japan's demography--the country's people less often marrying and having children, such that even by the standards of a world on the whole going through the "demographic transition" the country's situation has been extreme. According to the Central Intelligence Agency's World Factbook, tiny and ultra-rich Monaco apart, Japan is the oldest country on Earth, with a median age of almost 49 years and only 12 percent of its population under age 14. Still, others are not so far behind, with, according to the very same source, dozens of other countries, including every major advanced industrial country but the U.S., having a median age of over forty (and the U.S. not far behind, with a median age of 39), and similarly dwindling numbers of youth (the percentage aged 0-14 likewise 12 percent in South Korea, 13 percent in Italy, 14 percent in Germany, with 15 percent the Euro area average).
Considering the last it seems fitting that the trend was already evident at the very peak of that preoccupation with Japan as industrial paragon, 1989, the year of the "1.57 shock" (when the country recorded a Total Fertility Rate of 1.57--at the time regarded as a shockingly low number, though the government would probably be ecstatic if it were that high today). The result is that those interested in the difficulties of an aging society are looking at Japan, wondering how it will deal with those difficulties as they manifest there first--with what the country does here likely to inform others' thinking about how to cope with contemporary reality as much as it did back when business "experts" seemed transfixed by "Japan Inc." as the epitome of industrial competence.