Friday, September 16, 2022

What Are the Odds That Teaching Will Be Automated in the Very Near Term?

Recent months have brought a great wave of news stories about a shortage of teachers approaching crisis levels--and the possibility that even if such a shortage is not already underway (a difficult thing to establish one way or the other, given the scarcity of really comprehensive educational statistics) it may be imminent, whether because exhausted instructors leave the profession much more quickly than anticipated, because the conditions of the job deter new entrants from joining in the expected numbers, or because the two in combination widen the gap between need and supply.

One question I have found myself wondering about, given the talk we have been hearing of automation, has been the expectations regarding the automation of teaching specifically. Not long ago I considered Ray Kurzweil's thoughts about the matter at the turn of the century--which, as with many of his predictions in the relevant areas, rested on forecasts of advance in particular technological areas that have since appeared overoptimistic (notably the speed at which pattern-recognizing neural nets, and all technologies premised on them, would develop) and on a naiveté regarding the social dimensions of the subjects about which he wrote (in this case, the school's function as "babysitter").

However, not everyone has been so optimistic--even those who have, by any reasonable measure, been optimists about automation. Exemplary is the study Carl Benedikt Frey and Michael Osborne produced back in 2013, which played so important a part in the conversation about automation and employment in the '10s. That study included in its appendix a table listing over 700 occupations and the chances of their being "computerized"--"potentially automatable over some unspecified number of years, perhaps a decade or two."

The authors determined that the jobs of data entry keyers, telemarketers and new accounts clerks had a 99 percent chance of being "computerizable." Contrary to what might be expected by those who make much of "high-knowledge" occupations, Frey and Osborne even anticipated fairly high odds of a great deal of scientific work becoming automated (with atmospheric and space scientists having a 67 percent chance of having their jobs automated), with, in spite of what may be thought from the popularity of the sneer "Learn to code," a near-even chance of the same happening with computer programming (48 percent). But, teaching assistants apart, they put the odds of computerizing any teaching occupation at not much better than 1 in 4 (middle school technical teachers, at a 27 percent chance), while they put the odds of computerizing postsecondary (college) teaching at 3 percent, and the odds of computerizing preschool, elementary and secondary school teaching at under 1 percent.

In short, far from teaching being easy to automate, their analysis suggests it will be exceptionally difficult to automate satisfactorily. The result is that even if a great wave of automation swept through the rest of the economy--for what it is worth, Frey and Osborne calculated that nearly half of U.S. jobs were, in the absence of significant political or economic obstacles (legal barriers, particularly poor investment conditions, etc.), at "high" (70 percent-plus) risk of such computerization by the early 2030s--automation would have little impact on a great many teaching jobs. One can thus easily picture a situation in which job-seekers would find themselves with fewer alternatives to teaching--meaning relatively more people pursuing such positions, not fewer (at a time in which an aging population structure would likely mean fewer students, and fewer job openings for that reason). In the nearer term, in the absence of any such pressure sending people toward the occupation, this seems additional reason to think automation unlikely to be a solution to the problem.

Revisiting Carl Benedikt Frey and Michael Osborne's The Future of Employment

Back in September 2013 Carl Benedikt Frey and Michael Osborne presented the working paper The Future of Employment. Subsequently republished as an article in the January 2017 edition of the journal Technological Forecasting and Social Change, the paper played a significant part in galvanizing the debate about automation--and indeed produced panic in some circles. (It certainly says something that it got former Treasury Secretary Larry Summers, a consistent opponent of government action to redress downturn and joblessness--not least during the Great Recession, with highly controversial results--talking about how in the face of automation governments would "need to take a more explicit role in ensuring full employment than has been the practice in the U.S.," considering such possibilities as "targeted wage subsidies," "major investments in infrastructure" and even "direct public employment programmes.")

Where the Frey-Osborne study is specifically concerned I suspect most of those who talked about it paid attention mainly to the authors' conclusion--and indeed to an oversimplified version of that conclusion, which gives the impression that much of the supposedly firsthand awareness of the study was actually secondhand. (Specifically they turned the authors' declaration that "According to our estimate, 47 percent of total U.S. employment is" at 70 percent-plus risk of being "potentially automatable over some unspecified number of years, perhaps a decade or two"--potentially because economic conditions and the political response to the possibility were outside their study's purview--into "Your job is going to disappear very soon. Start panicking now, losers!")

This is, in part, because of how the media tends to work--not only favoring what will grab attention and ignoring the "boring" stuff, but treating those whom it regards as worth citing in a manner Carl Sagan described well. As he observed, in science there are at best experts (people who have studied an issue more than others and whom it may be hoped know more than others), not authorities (people whose "Because I said so" is a final judgment that decides how the situation actually is for everyone else). However the mainstream media--not exactly accomplished at understanding the scientific method, let alone the culture of science shaped by that method and necessary for its application--does not even understand the distinction, let alone respect it. Accordingly it treats those persons it consults not as experts who can help explain the world to its readers, listeners and viewers so as to help them learn about it, think about it, understand it and form their own conclusions, but as authorities whose pronouncements are to be heeded unquestioningly, like latterday oracles. And, of course, in a society gone insane with the Cult of the Good School, and regarding "Oxford" as the only school on Earth that can outdo "Harvard" in the snob stakes, dropping the name in connection with the pronouncement counts for a lot with people of crude and conventional mind. (People from Oxford said it, so it must be true!)

However, part of it is the character of the report itself. The main text is 48 pages long, and written in that jargon-heavy, parenthetical reference-crammed style that screams "Look how scientific I'm being!" It also contains some rather involved equations that, on top of including the Greek symbols that I suspect immediately scare most people off (the dreaded sigma makes an appearance), are not explained as accessibly or as fully as they might be. (The mathematical/machine learning jargon gets particularly thick here--"feature vector," "discriminant function," "Gaussian process classifier," "covariance matrix," "logit regression," etc.--while in explaining their formulas the authors do not work through a single example such as might show how they worked out the probability for a particular job, even as they leave the reader with plenty of questions about just how they quantified all that O*NET data. Certainly I do not think anyone would find replicating the authors' results a straightforward thing on the basis of their explanations.) Accordingly it is not what even the highly literate and mathematically competent would call "light reading"--and unsurprisingly, few seem to have really tried to read it, or make sense of what they did read, or ask any questions. (This is even as, alas, what they did not understand made them more credulous rather than less so--because not only did people from Oxford say it, but they said it with equations!)

Still, the fact remains that one need not be a specialist in this field to get much more of what is essential than the press generally bothered with. Simply put, Frey and Osborne argued (verbally) that progress in pattern recognition and big data, in combination with improvements in the price and performance of sensors, and the mobility and "manual dexterity" of robots, were making it possible to move automation beyond routine tasks that can be reduced to explicit rules by computerizing non-routine cognitive and physical tasks--an example of which they made much being the ability of a self-driving car to navigate a cityscape (Google's progress at the time of their report's writing apparently a touchstone for them). Indeed, the authors go so far as to claim that "it is largely already technologically possible to automate almost any task, provided that sufficient amounts of data are gathered for pattern recognition," apart from situations where three particular sets of "inhibiting engineering bottlenecks" ("perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks") interfere, and workarounds prove inadequate to overcome the interference. (The possibility of taking a task and "designing the difficult bits out"--of, for example, replacing the non-routine with the routine, as by relying on prefabrication to simplify the work done at a construction site--is a significant theme of the paper.)

How did the authors determine just where those bottlenecks became significant, and how much so? Working with a group of machine learning specialists they took descriptions of 70 occupations from the U.S. Department of Labor Occupational Information Network (O*NET) online database and "subjectively hand-labelled" them as automatable or non-automatable. They then checked their subjective assessments against what they intended to be a more "objective" process to confirm that their assessments were "systematically and consistently related to the O*NET information." This consisted of:

1. Dividing the three broad bottlenecks into nine more discrete requirements for task performance (e.g. rather than "perception and manipulation," the ability to "work in a cramped space," or "manual dexterity").

2. On the basis of the O*NET information, working out just how important the trait was, and how high the level of competence in it required, for the performance of the task (for instance, whether a very high level of manual dexterity was very important in a task, or of only low importance), and

3. Using an algorithm (basically, running these inputs through the formulas mentioned earlier) to validate the subjective assessments--and, it would seem, using those assessments to validate the algorithm.

They then used the algorithm to establish the probability of the other 632 jobs under study, on the basis of their features, being similarly computerizable over the time frame with which they concerned themselves (unspecified, but inclining to the one-to-two decade range), with the threshold for "medium" risk set at 30 percent, that for "high" risk at 70 percent.
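The procedure just described can be sketched in code. To be clear, this is an illustrative toy, not Frey and Osborne's actual code or data: the occupations, features and scores below are hypothetical, and where the paper fit a Gaussian process classifier, a plain logistic ("logit") regression--a simpler model the authors also report--stands in here, trained by gradient descent. What the sketch shows is the shape of the method: fit a classifier to hand-labelled occupations described by bottleneck-variable scores, then use it to assign automation probabilities to unlabelled occupations, binned by the paper's 30 and 70 percent thresholds.

```python
import math

# Hypothetical occupations, each scored 0-1 on three of the nine bottleneck
# variables (say, manual dexterity, creative intelligence, social intelligence),
# paired with the subjective hand-label: 1 = automatable, 0 = not.
labelled = [
    ([0.1, 0.1, 0.2], 1),  # e.g. data entry keyer
    ([0.2, 0.1, 0.9], 1),  # e.g. telemarketer
    ([0.3, 0.8, 0.9], 0),  # e.g. elementary school teacher
    ([0.2, 0.9, 0.8], 0),  # e.g. preschool teacher
    ([0.9, 0.7, 0.6], 0),  # e.g. surgeon
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(data, lr=0.5, epochs=5000):
    """Fit weights and bias by stochastic gradient descent on the log-loss."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def probability(w, b, x):
    """Estimated probability of computerization for a feature vector."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def risk_band(p):
    """The paper's bins: under 0.3 low, 0.3-0.7 medium, over 0.7 high risk."""
    return "low" if p < 0.3 else ("high" if p > 0.7 else "medium")

w, b = fit_logit(labelled)
# Score an unlabelled occupation from its (hypothetical) bottleneck scores,
# as the authors did for the remaining 632 occupations.
p = probability(w, b, [0.15, 0.2, 0.3])
print(round(p, 2), risk_band(p))
```

Even this toy makes the circularity the paper invites visible: the classifier is judged against the hand-labels, and the hand-labels against the classifier, with everything downstream depending on how the O*NET descriptions were quantified in the first place.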

Seeing the reasoning laid out in this way one can argue that it proceeded from a set of assumptions that were very much open to question. Even before one gets into the nuances of the methodology, the assumption that pattern recognition plus big data had already laid the groundwork for a great transformation of the economy can seem overoptimistic, the more so as we consider the conclusions to which it led them. Given that the study was completed in 2013, a decade or two works out to (more or less) the 2023-2033 time frame--in which they thought there was an 89 percent chance of the job of the taxi driver and chauffeur being automatable, and a 79 percent chance of the same going for heavy truck drivers (very high odds indeed, and this, again, without any great breakthroughs). Alas, in 2022, with more perspective on such matters, not least the inadequacies of the neural nets controlling self-driving vehicles even after truly vast amounts of machine learning, there still seems considerable room for doubt about that. Meanwhile a good many of the authors' assessments can in themselves leave one wondering at the methods that produced them. (For instance, while they generally conclude that teaching is particularly hard to automate--they put the odds of elementary and high school teaching being computerized at under 1 percent--they put the odds of middle school teaching being computerized at 17 percent. This is still near the bottom of the list from the standpoint of susceptibility, and well inside the low-risk category, but an order of magnitude higher than the possibility of computerizing teaching at those other levels. What about middle school makes so much difference? I have no clue.) The result is that while hindsight is always sharper than foresight, it seems that had more people actually tried to understand the premises of the paper we would have seen more skepticism toward its more spectacular claims.

The Poverty of Our Educational Statistics

Some years ago Business Insider called the U.S. Federal Reserve Bank of St. Louis' Federal Reserve Economic Data (FRED) database "The Most Amazing Economics Website in the World." Want to have your choice of measurements of inflation in June 1953? How about manufacturing employment in Michigan--or maybe just auto manufacturing in the Detroit-Warren-Dearborn metro area--in December 1999? Or how post-tax corporate profits in the fourth quarter of 2008 compared with those of the same quarter in the preceding five years? Offering 800,000+ time series FRED may not quite answer every question a researcher may have (for whom simply having access to statistics is likely to be only a starting point), but putting so much a quick keyword search away it sure is handy.

One might think that in this age of relentless data-hoovering, ever more abundant computing power and widespread statistical training one would, on examining any public issue, especially one as hotly contested as education (they even made the first season of House of Cards about it!), easily find a web site that, in at least some degree, does for American education what the Federal Reserve does with FRED.

If one thinks that then they are wrong. Very, very wrong. Someone looking for something so readily countable as, for example, the number of unfilled openings in the country's schools is likely to have a hard time getting even the most elementary data (never mind a FRED-like wealth of time series)--as the recent arguments about teacher shortages show. (Simply put, people give us numbers about unfilled positions in this school system or that state--but no one seems to have anything to compare them to, to tell us if things are normal, getting worse, even getting better.)

That this is so little talked about--that so few realize that this is the case--can seem to imply that not many people have gone looking for such numbers; that in fact those who have gone looking for what can seem very basic information for anyone trying to come to any conclusions about these matters are much fewer in number than those crowding the media and the Web with their "opinions."

Economic Opportunity and the Demographic Trend

In discussing Japan's low fertility rate and aging population the media (certainly in the U.S., though so far as I can tell, elsewhere as well) has inclined to the crude and the simple-minded, the hot-button and sensationalist--and per usual overlooked important aspects of the matter.

Thus we have stories about Japan as a country of "forty year old virgins," and tales of young people abandoning the prospect of human contact in preference to "virtual" love. But we hear little of what those who look at the business and economics pages know very well--namely that Japan has been in a bad way economically for decades, with the most significant consequences for this matter.

After all, in a modern society where economic life is individualistic, where setting up a household and having children is very expensive, and where little help will be forthcoming from any source, the responsible thing to do is to attempt it only when one has, among much else, a reasonable expectation of a long-term income at least adequate to raise children "decently"--which is to say, at a middle-class standard--the prerequisite for which has been a "good job" offering sufficient security of tenure at a middle-class wage that one can expect to go on receiving it for a good long time to come. And that is exactly what has become very elusive in recent decades, as the economic growth engine that looked so impressive in the 1950s and 1960s stalled out, and as Fordism's vague promises of generalized middle-classness faded. Japan has been a signal case: the fastest-growing of the large industrialized nations became the slowest-growing nearly overnight, and, to go by one calculation, per capita Gross Domestic Product fell by half during this past generation, all as the old notion of "lifetime employment" waned what can seem a lifetime ago.

Quite naturally those who would otherwise have been inclined to start families (and of course, not everyone is so inclined) refrain, with any impression that this is the case reinforced by what we see overseas, not least in those two oldest of the major Western nations, Italy and Germany--in their similarities and their differences alike. Italy comes closer than any other Group of Seven nation to Japan in its shift from brisk growth to stagnation (and, even when we use the more conventional numbers, economic contraction)--another spectacle of a modern country with modern attitudes to these things seeing its birth rate fall. (Indeed, in every single year from 2013 forward Italy's fertility rate has been lower than Japan's, averaging 1.3 against Japan's 1.4 for 2013-2020, with the 2020 rate 1.2 against Japan's 1.3.)

Of course, Germany may not seem to fit the profile so neatly given its image as an economic success story. However, it is worth noting that, even apart from the qualified nature of its success (Germany remains a manufacturing power, but is also a long way away from its "Wirtschaftswunder"-era dynamism), and the fact that its social model is moving in the same direction as everyone else's (with all that means for young people starting their lives), its figures vary significantly by region. In particular Germany's high average age obscures the cleavage between what tend to be the older (and less prosperous) eastern regions as against the more youthful (and more prosperous) western regions.

Alas, a media which makes a curse of the word "millennial," and sneers at the idea of working people wanting any security at all as "entitlement" on their part, has little interest in and less sympathy for such matters--while knowing full well that stories about them are less likely to do well in the "attention economy" than stories about "virtual girlfriends." This bodes poorly not only for our understanding of the matter in the recent past but for our ability to understand it in the future--in which the ability of young people to get along economically in the world may not be the only factor, but will nevertheless be a hugely important one, however much the opinion makers of today would like one to think otherwise.

Thursday, September 8, 2022

Has the Theory of Economic Long Waves Ceased to Be Relevant?

The economic theory of "long waves" holds that economic growth follows a 40-60 year cycle: the first half of the cycle is an "upward" wave of 20-30 years--a period of strong growth with recessions few and mild--followed by a "downward" wave that is the opposite, with growth weak and downturns frequent and severe for two to three decades, until it is followed in its turn by a new upward wave beginning the next cycle.

First suggested in the 1920s by the Soviet economist Nikolai Kondratiev (indeed, long waves are often called "Kondratiev waves" in his honor) the idea is controversial in outline and detail (e.g. just what causes them), but nevertheless has numerous, noteworthy adherents across the spectrum of economic theory and ideology who have made considerable use of it in their work, from figures like Joseph Schumpeter on the right to a Michael Roberts on the left. This is, in part, because the theory seemed to be borne out by the events of mid-century. In hindsight the period from the turn of the century to World War I looks like an upward wave, the period of the '20s and '30s and '40s a downward wave, but then the period that followed it, the big post-war boom of the '50s and '60s another upward wave--which was followed by yet another downward wave absolutely no later than the '70s.

So far, so good--but the years since have been another matter. Assuming a downward wave in the '70s we ought to expect another upward wave in the '90s and certainly the early twenty-first century. Indeed, we might expect to have already run through a whole such wave and, just now, find ourselves in, entering or at least approaching another downward wave.

As it happens the U.S. did have a boom in the late '90s. However, in contrast with the wide expectations that this boom was the beginning of something lasting and epochal (remember how Clinton was going to pay down the national debt with that exploding tax revenue?), that boom petered out fast--and so did the associated seeds of growth, like labor productivity growth, which pretty much fell into the toilet in the twenty-first century, and stayed there. Meanwhile the same years were less than booming for the rest of the world--with the Soviet bloc's output collapse bottoming out, with Europe Eurosclerotic and Japan in its lost decade amid an Asia hard-hit by financial crisis, and the Third World generally struggling with the Commodity Depression, the aftereffects of the Volcker shock/debt crisis, and the new frustrations the decade brought (with the "Asian" crisis tipping Brazil over into default).

Of course, as the American boom waned the rest of the world did somewhat better--indeed, depending on which figures one consults, the 2002-2008 period saw some really impressive growth at the global level. But again this was short-lived, cut off by the 2007-2008 financial crisis, from which the world never really recovered before it got kicked while it was down by pandemic, recession, war. (The numbers, as measured in any manner, have been lousy, but if one uses the Consumer Price Index rather than chained-dollar-based deflators to adjust the "current" figures for inflation then it seems we saw economic collapse in a large part of the world, partially obscured by China's still doing fairly well--though the Chinese miracle was slowing down too.)

The result is that as of the early 2020s, almost a half century after the downturn (commonly dated to 1973), there simply has been no long boom to speak of. Of course, some analysts remain optimistic, with Swiss financial giant UBS recently suggesting that the latter part of the decade--helped by business investment in the digital technologies that let firms keep operating during the pandemic (which may work out to long-term efficiency gains), public investments in infrastructure and R & D, and a green energy boom--may mean better times ahead. Perhaps. Yet it has seemed to me that there has been more hype than substance in the talk of an automation boom (indeed, business investment seems to have mainly been about short-term crisis management--shoring up supply chains, stocking up on inventory, etc.--while firms' success in "digitizing" remains arguable); government action remains a long way from really boom-starting levels (the Inflation Reduction Act, of which only part is devoted to green investment, devotes $400 billion or so to such matters over a decade, a comparative drop in the bucket); and while I remain optimistic about the potentials of renewable energy there is room for doubt that the investment we get in it will be anywhere near enough to make for a long upward movement.

In short, far from finding myself bullish about the prospect of a new long wave, I find myself remembering that the theory was a conclusion drawn from a very small sample (these cycles not generally being traced further back than the late eighteenth century), which especially after the experience of the last half century can leave us the more doubtful that there was ever much to the theory to begin with. However, I also find myself considering another possibility, namely that for that period of history such a cycle may actually have been operative--and that the cycle has since been broken, perhaps permanently, along with the predictive value it once seemed to possess.

Tuesday, September 6, 2022

The Vision of Japan as the Future: A Reflection

Back in the '80s it was common for Americans to think of Japan as "the future"--the country on the leading edge of technological development and business practice, the industrial world-beater that was emerging as pace-setter, model and maybe even hegemon.

A few years later all that faded: Japan's economic boom was revealed as substantially a real estate-and-stock bubble that had been as much a function of American weakness as Japanese strength (as America's exploding Reagan-era trade deficit filled the country's bank vaults with dollars, and American devaluation translated to a massive strengthening of the yen); Japan's supremacy in areas like chip-making proved fragile, and its prospects for leaping ahead of the rest of the world in others (as through breakthroughs in fifth-generation computing, or room-temperature superconductors) proved illusory; and the country went from being the fastest-growing to the most stagnant of the major industrial economies. The fading was reinforced by the illusions and delusions of America's late '90s tech boom, which shifted the tendency of America's dialogue away from hand-wringing over declinism to "irrational exuberance" (at least, for a short but critical period after the debut of Windows '95).

Yet in hindsight it can seem that Japan never really did stop being the image of the future. It was just that observers of conventional mind failed to recognize it, because the future was not what people thought it was at the time. They pictured technological dynamism and economic boom--but the future, since that time, has really been technological and economic stagnation, with Japan's "lost decade" turned "lost decades" turned "lost generation" matched by the world's own "lost generation" these past many years. And the same goes for that stagnation's effects, like social withdrawal--Americans, certainly, seeming to notice the phenomenon of the "hikikomori" in Japan long before they noticed it at home.

Thus has it also gone with Japan's demography--the country's people less often marrying and having children, such that even by the standards of a world on the whole going through the "demographic transition" the country's situation has been extreme. According to the Central Intelligence Agency's World Factbook, tiny and ultra-rich Monaco apart, Japan is the oldest country on Earth, with a median age of almost 49 years, and only 12 percent of its population under age 14. Still, others are not so far behind, with, according to the very same source, dozens of other countries, including every major advanced industrial country but the U.S., having a median age of over forty (and the U.S. not far behind, with a median age of 39), and similarly dwindling numbers of youth (the percentage who are 0-14 in age likewise 12 percent in South Korea, 13 percent in Italy, 14 percent in Germany, with 15 percent the Euro area average).

Considering the last it seems fitting that the trend was already evident at the very peak of that preoccupation with Japan as industrial paragon, 1989, the year of the "1.57 shock" (when the country recorded a Total Fertility Rate of 1.57--at the time regarded as a shockingly low number, though the government would probably be ecstatic if the rate were that high today). The result is that those interested in the difficulties of an aging society are looking to Japan, wondering how it will deal with these difficulties as they manifest there first--with what the country does here as likely to inform others' thinking about how to cope with contemporary reality as it did back when business "experts" seemed transfixed by "Japan Inc." as the epitome of industrial competence.

Thursday, June 30, 2022

A Generation On: Clifford Stoll's 1995 Essay on the Internet

Clifford Stoll's 1995 Newsweek piece "Why the Web Won't Be Nirvana" has been the butt of many a joke over the years, but not because of its title. Had Stoll limited himself to merely arguing what his title claims we might well remember him as having been clearer-eyed than his contemporaries. Had he somewhat more ambitiously argued that the advent of this technology would not in itself bring nirvana, or even utopia, we might have accorded him yet greater plaudits. And had he, in more nuanced fashion, argued that some of the much-hyped developments might not come for a long time, if ever, he would also have been right, as we are all too aware looking at exactly some of those things of which he was so dismissive, like telecommuting, or the substitution of online for in-person education, or some radical advance for democracy.

However, he was dismissive of the whole thing, not only in the very near term, but, it could seem, any time frame meaningful to people of the 1990s, and on very particular grounds that seem to me more telling than the prediction itself. While paying the limits of the technology as it stood at the time some heed (noting the sheer messiness of what was online, or the awkwardness of reading a CD-ROM while on a '90s-era desktop), he did not stress the limits of the technology as it was then, and likely to remain for some time, even though he could have very easily done so. (What we are in 2022 vaguely talking about as the "Metaverse" was, at the time, widely portrayed as imminent amid the then-insane hyping of Virtual Reality--while what we really had was pay-by-the-hour dial-up in a time in which Amazon had scarcely been founded, and Google, Facebook, Netflix were far from realization.) Nor did Stoll acknowledge the hard facts of economics and politics and power that would a generation on see even those bosses who have made the biggest fortunes in the history of the world out of technological hype broadcast to the whole world their extreme hostility to the very idea of telecommuting, or make the Internet a weapon in the arsenal of Authority against Dissent as much or more than the reverse. (That was not the kind of thing one was likely to get in Newsweek then any more than now.)

Rather, what Stoll based his argument on was the need for "human contact," which he was sure the Internet would fail to provide. The result was that where his predictions were correct he was far off the mark in regard to the reasons why (those matters of economics, politics, power), and totally wrong about other points, like his dismissal of online retail and the possibility that it might threaten the business of brick-and-mortar stores, or the viability of online publishing. The truth is that when it comes to mundane tasks like buying cornflakes and underwear, convenience and cheapness count for infinitely more than "human contact" with the hassled, time- and cash-strapped great majority of us--while where such tasks are concerned human contact is, to put it mildly, overrated. Indeed, it is often a thing many, not all of them introverts, would take some trouble to avoid. (Do you really love encountering pushy salespersons? Long checkout lines where you encounter more rude people? Sales clerks of highly variable competence and personability? For any and all of whom dealing with you may not exactly be the highlight of their own day, one might add?) Indeed, looking at a college classroom in recent years one sees two of his predictions belied, and is reminded that, while Stoll may indeed be right that "[a] network chat line is a limp substitute for meeting friends over coffee," the average college student much prefers that "limp substitute" to chatting with their neighbors, let alone attending to the instructor right there in the room with them, whom large numbers of them happily replace with an online equivalent whenever this becomes practical.

Thus does it go with other "entertainments." Stoll may well be right that "no interactive multimedia display comes close to the excitement of a live concert," but how often do most people get to go to those? In the meantime the multimedia display has something to commend it against the other substitutes (like the Walkman of Stoll's day). And this is even more the case with his remark that no one would "prefer cybersex to the real thing." After all, the "real thing" isn't so easy for many to come by (even when they aren't coping with pandemic and economic collapse), while even for those for whom it might be an option it seems that not merely cybersex but "love with the virtual" is proving competitive enough with the real kind to set many a social critic's tongue wagging (with what is treated as a Japanese phenomenon today, like the "hikikomori," likely, I suspect, to prove far from unique to that country in the years ahead).

Far more than Stoll, Edward Castronova and Jane McGonigal seem to have been on the right track when writing about how poorly our workaday world comes off next to the freedoms, stimulation and satisfaction of virtuality, especially when we consider what that reality is like not for the elite who generally get to make a living offering their opinions in public, but for the vast majority of the population on the lower rungs of the social hierarchy, facing a deeply unequal, sneering world which every day and in every possible way tells them "I don't care about your problems." Indeed, while a certain sort of person will smugly dismiss any remark about how the world is changing with a brazen a priori confidence that things are always pretty much the same, it seems far from implausible that things are getting worse that way (it's hard to argue with a thing like falling life expectancy!), while there is reason to think that the virtual is only getting more alluring, with people actually wanting it more, not less, as it becomes more familiar to them--a familiar friend rather than something they know only from the Luddite nightmares of so much bottom-feeding sci-fi. In fact, it does not seem too extreme to suspect that many already have as little to do with the real, offline world as they can, engaging with it only because of unavoidable physical necessities, and on terms that, far from making it any more attractive, only underline the superiority of the virtual in life as they have lived it.

Wednesday, June 29, 2022

The Pandemic and Automation: Where Are We Now?

In the years preceding the pandemic there was enormous hype about automation, particularly in the wake of the 2013 Frey-Osborne study The Future of Employment. Following the pandemic's outbreak, the automation effort was supposedly in overdrive.

However, a close reading of even the few items about the matter that serve up actual examples of such automation (like the installation of a voice recognition-equipped system for receiving customers' orders as they pass through some fast food outlet's drive-thru lane) reveals that they are clearer on intentions and possibilities than actual developments--like polls telling us that "50% of employers are expecting to accelerate the automation of some roles in their companies" (never mind how much employment those employers account for, or how serious their expectations are, or what "accelerate" and "some" actually mean here). Meanwhile, when we look at discussion of actualities rather than possibilities, what is rather more prominent is the discontents of the humans. We read of how workers in jobs that have them dealing with the general public face-to-face are burned-out and fed-up, not of how bosses are replacing those workers with new devices--a decade after robot waiters and the like were supposed to be on the verge of becoming commonplace. We read that industrial robot orders are up--but (as, perhaps, we note that the actual number of robots ordered is not really so staggering) we read far more of supposed "labor shortages" than we do of automation filling in the gaps.
We also know that, as seen from the standpoint of the whole economy, productive investment--without which no one is automating anything--remains depressed compared with what it was pre-crisis (and remember, the world never really got over the Great Recession), while it does not seem terribly likely to get much better very soon, with that media darling and icon of techno-hype Elon Musk, even as he promises a humanoid Teslabot by September, publicly raving about a recession just around the corner and preemptively slashing his workforce in anticipation--not in the expectation of employing fewer humans, just fewer humans with actual salaries (while those Teslabots do not seem to be part of the story, go figure).

Why do we see such a disparity between the expectations and the reality? A major reason, I think, is that those who vaguely anticipated some colossal rush to automate the economy imagined vast numbers of ready, or nearly ready, systems able to do the job--a natural result of the press tending to imagine that "innovations" at Technology Readiness Level 1 are actually at Level 9--with the truth coming out when push comes to shove, as it so clearly has amid the crisis: the requisite means are not nearly so far along as some would have had us believe. Those observers also underestimated, just as government and the media have generally done, just how disruptive the pandemic was to be--how long the pandemic and its direct disruptions would last, to say nothing of the indirect, and how much all this would matter. In line with the familiar prejudices of the media, lockdowns, strikes and the war in Ukraine get plenty of time and space as sources of economic troubles--but critics of central bank monetary policies get very, very little, with one result being that the upsurge in inflation took so many "experts" by surprise. And that inflation, and the tightening of credit that has inevitably followed it, however belated and gradual compared with the talk of a latterday Volcker shock it may be, are hardly the kind of thing that encourages investors. Nor are the unceasing supply chain problems. (If the country can't even keep itself in baby formula, how can it keep itself in the inputs required for a drastic expansion of revolutionary automation?) The result is that those of us watching this scene would do well to take those reports of some drastic increase in the rate of automation with a measure of skepticism.

Tuesday, June 28, 2022

Telecommuting: What We Imagined Once, and Where We Actually Are Now

In his 1980 classic of futurology The Third Wave Alvin Toffler remarked that an era of general "transportation crisis" was upon the world, in which "mass transit systems [were] strained to the breaking point, roads and highways clogged, parking spaces rare, pollution a serious problem, strikes and breakdowns almost routine, and costs skyrocketing." Toffler noted, too, the savings that a turn from in-person commuting to telecommuting might achieve, from the lower energy expenditure of a computer terminal as against a private car (or even mass transit), to the chance to scale down physical facilities as people worked from home, permitting reductions in real estate, utility, tax and other expenditures. Indeed, it seemed to him that sheer market forces would go a long way to doing the trick, while pressure from environmentalists to bring down the ecological footprint of business activity, and government's seeing in a shift to telecommuting potential benefits in ways from the oil import-beleaguered trade balance to the collapsing nuclear family, would easily nudge it further along.

Of course, four decades later it would seem that Toffler could not have been more wrong on this point--with his being so wrong the more striking given that the transportation crisis, the energy crisis, he wrote of did not get better, but only worse and worse; while the proliferation of computing and Internet connections, and the improvements in their performance and price, went far, far beyond anything Toffler anticipated in the near term. The reason is that, whatever those concerned for public issues like the environment, the trade balance or anything else thought about the matter--and certainly whatever employees thought--employers didn't want it, as anyone who understood economic history had to see they wouldn't, because of the simple matter of control. (The Industrial Revolution's shift from the "putting-out" system to the factory, Taylorist time-and-motion study, Fordism--through it all the key to higher productivity and profit has lain through ever-more intricate division and direction of labor conveniently gathered together under management's eye.)

Ultimately that opposition mattered more than anything else. In all the time since, it was only the shock of the recent pandemic that put the matter as high on the agenda as it has been these last two years, with the scrambling to implement it in the wake of lockdown only underlining how little serious effort government and business made in the direction of figuring out how we could actually make large-scale telecommuting work in all those decades. And now, with government and business discarding such measures as they took to constrain the pandemic, there is a drive on the part of employers to go back to the old ways, with, ironically, those very business leaders who are apt to be most celebrated as technological superheroes by the makers of the conventional wisdom frequently in the vanguard; all these determined to see that the experiment in telecommuting remains just that, with, of course, the full blessings of government and the media, as media outlets like The Atlantic publish articles with obnoxiously wheedling titles like "Admit It, You Miss Your Commute."

Of course, a good many workers have displayed less enthusiasm for this course than their bosses--fully two-thirds of remote workers declaring that they do not want to return to the office, with the elimination of the trip to and from the workplace that so many experience as the worst part of their day specifically cited among the biggest benefits of staying home. (No, contrary to what the folks at The Atlantic would have the impressionable believe, they did not miss their commute.) Meanwhile, if the strikes shocking elites the world over are anything to go by (even Britain, the homeland of that icon of union-crushing Margaret Thatcher, is seeing the kind of strikes that the Thomas Friedmans of the world complacently thought safely relegated to the historic past), workers, shocked, stressed, burned-out and all-around less complacent after the pandemic, are asserting themselves in a way they haven't in decades. The result is that while for over a generation employers have seemed to hold all the cards, and never let anyone forget it, one ought not to rush to the conclusion that they will get their way yet again.

Wednesday, June 22, 2022

On the Word "Startup"

When people speak of "startups" what they mean is that someone is "starting up" a business.

But in choosing to speak not of new businesses, or new companies, but instead startups, they make a far grander claim than the establishment of a new enterprise that will, hopefully, produce something of use to others and a living for its employees and a return for its investors.

Instead they promise that here is a technological and commercial revolution!

And I have to admit myself ever more cynical about that heavily used word and its users.

Because, alongside the success stories there are also the innumerable also-rans, many of which never had a reason to be anything but an also-ran. Because so often we have a startup without an actual product or business plan. Because, even if the technology isn't an overhyped fantasy (as it so often is), the reality is that we tend to end up with a single big winner gobbling up the market (a Microsoft, an Amazon, a Google, a Facebook), driving the also-rans out of business or swallowing them up for comparative pennies. The founders may fantasize that their little firm is the one that will be that winner--but all but the one who actually has the winner are wrong (while even the one who is right can hardly know that at the outset), making what the stupid may call "optimism" or "confidence" really just presumption on a staggering scale. And because so often in back of what is necessarily a misrepresentation are not just illusions and delusions and stupidity, but a good deal of bad faith, and outright criminality. (With Theranos Elizabeth Holmes hoped to be the next Steve Jobs. Instead, if Theranos is remembered it will be remembered alongside Enron--about which Holmes ought to know something, her father, whose influence she traded on all her ultra-privileged life, having been a Vice-President there.)

And as if all that were not enough I am annoyed, too, by the broader understanding of the world tied up with the word, the muddy thinking and outright lies. About, for example, the actual rate of technological change, which has been wildly exaggerated by so-called "experts" throughout my lifetime to a credulous public and credulous investors, who have so often bet big and lost big as a result. About the notion that technological change does not involve Big Science, established firms, government labs and government subsidies; does not involve vast, diffuse global efforts over very long periods of time; but is just nerd-magic done overnight by garage tinkerers aided by venture capitalists. And of course, there is what all this justifies and glorifies, and where it has left us in a 2022 far, far different, and far, far sadder, than the world we would have got if the Silicon Valley hucksters had come anywhere near delivering on the promises about what their "startups" would bring.
