Back in September 2013 Carl Benedikt Frey and Michael Osborne presented the
working paper The Future of Employment. Subsequently republished as an article in the
January 2017 edition of the journal Technological Forecasting and Social Change, the paper played a significant part in galvanizing the debate about automation--and indeed produced panic in some circles. (It certainly says something that it got former Treasury Secretary Larry Summers, a consistent opponent of government action to redress downturns and joblessness--not least during the Great Recession, with highly controversial results--
talking about how in the face of automation governments would "need to take a more explicit role in ensuring full employment than has been the practice in the U.S.," considering such possibilities as "targeted wage subsidies," "major investments in infrastructure" and even "direct public employment programmes.")
Where the Frey-Osborne study is specifically concerned, I suspect most of those who talked about it paid attention mainly to the authors' conclusion, and indeed to an oversimplified version of that conclusion, one that gives the impression that much of the awareness even among those who should have had it firsthand was actually secondhand. (Specifically, they turned the authors' declaration that "According to our estimate, 47 percent of total U.S. employment is" at 70 percent-plus risk of being "potentially automatable over some unspecified number of years, perhaps a decade or two"--"potentially" because economic conditions and the political response to the possibility were outside their study's purview--into "Your job is going to disappear very soon. Start panicking now, losers!")
This is, in part, because of how the media tends to work--not only favoring what will grab attention and ignoring the "boring" stuff, but also treating those whom it regards as worth citing in a particular way, with an observation of Carl Sagan's worth recalling by way of background. As he observed, in science there are at best experts (people who have studied an issue more than others and who, it may be hoped, know more than others),
not authorities (people whose "Because I said so" is a final judgment that decides how the situation actually is for everyone else). However, the mainstream media--not exactly accomplished at understanding the scientific method, let alone the culture of science shaped by that method and necessary for its application--does not even understand the distinction, let alone respect it. Accordingly it treats those persons it consults not as experts who can help explain the world to its readers, listeners and viewers so that they can learn about it, think about it, understand it and form their own conclusions, but as authorities whose pronouncements are to be heeded unquestioningly, like latter-day oracles. And, of course, in a society gone insane with the Cult of the Good School, and regarding "Oxford" as the only school on Earth that can outdo "Harvard" in the snob stakes, dropping the name in connection with the pronouncement counts for a lot with people of crude and conventional mind. (People from Oxford said it, so it must be true!)
However, part of it is the character of the report itself. The main text is 48 pages long, and written in that jargon-heavy and parenthetical reference-crammed style that screams "Look how scientific I'm being!" It also contains some rather involved equations that, on top of including those Greek symbols that I suspect immediately scare most people off (the
dreaded sigma makes an appearance), are not explained as accessibly as they might be, or even as fully as they might be. (The mathematical/machine learning jargon gets particularly thick here--"feature vector," "discriminant function," "Gaussian process classifier," "covariance matrix," "logit regression," etc.--and while presenting their formulas the authors do not work through a single example that might show how they worked out the probability for a particular job, even as they leave the reader with plenty of questions about just how they quantified all that O*NET data. Certainly I don't think anyone would find it straightforward to attempt to replicate the authors' results on the basis of their explanations.) Accordingly it is not what even the highly literate and mathematically competent would call "light reading"--and unsurprisingly, few seem to have really tried to read it, or make sense of what they did read, or ask any questions. (This is even as, alas, what they did not understand made them more credulous rather than less so--because not only did people from Oxford say it, but they said it with equations!)
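To give a sense of the kind of worked example the authors never provide, here is a deliberately toy sketch in Python. It is emphatically not their actual Gaussian process machinery, just a bare-bones logit-style calculation with invented weights and feature values, showing how a few O*NET-style "bottleneck" scores for an imaginary job might be turned into a probability of computerization.

```python
# Toy numbers only: three made-up O*NET-style scores for an imaginary job,
# scaled to the 0-1 range, and invented weights. This is NOT the paper's model.
import math

features = {
    "manual_dexterity":      0.6,   # how much the job depends on fine physical skill
    "originality":           0.3,   # how much it depends on creative problem-solving
    "social_perceptiveness": 0.4,   # how much it depends on reading other people
}
weights = {name: -2.5 for name in features}  # bottlenecks push the odds of automation down
bias = 3.0                                   # baseline log-odds, also invented

log_odds = bias + sum(weights[name] * value for name, value in features.items())
probability = 1.0 / (1.0 + math.exp(-log_odds))  # logistic function (inverse of the logit)
print(f"P(computerizable) = {probability:.2f}")
```

Nothing about these numbers is real; the point is simply that one concrete walk-through of this sort, from feature scores to a probability, would have gone a long way toward making the paper's formulas less forbidding.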
Still, the fact remains that one need not be a specialist in this field to get much more of what is essential than the press generally bothered with. Simply put, Frey and Osborne argued (verbally) that progress in pattern recognition and big data, in combination with improvements in the price and performance of sensors, and in the mobility and "manual dexterity" of robots, was making it possible to move automation beyond routine tasks that can be reduced to explicit rules and into the computerization of non-routine cognitive and physical tasks--an example of which they made much being the ability of a self-driving car to navigate a cityscape (Google's progress at the time of their report's writing apparently a touchstone for them). Indeed, the authors go so far as to claim that "it is largely already technologically possible to automate almost any task, provided that sufficient amounts of data are gathered for pattern recognition," apart from situations where three particular sets of "inhibiting engineering bottlenecks" ("perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks") interfere, and workarounds prove inadequate to overcome the interference. (The possibility of taking a task and "designing the difficult bits out"--of, for example, replacing the non-routine with the routine, as by relying on prefabrication to simplify the work done at a construction site--is a significant theme of the paper.)
How did the authors determine just where those bottlenecks became significant, and how much so? Working with a group of machine learning specialists, they took descriptions of 70 occupations from the U.S. Department of Labor's Occupational Information Network (O*NET) online database and "subjectively hand-labelled" them as automatable or non-automatable. They then checked their subjective assessments against what they intended to be a more "objective" process, to confirm that their assessments were "systematically and consistently related to the O*NET information." This consisted of:
1. Dividing the three broad bottlenecks into nine more discrete requirements for task performance (e.g., rather than "perception and manipulation" in general, the ability to "work in a cramped space" or "manual dexterity").
2. On the basis of the O*NET information, working out just how important each trait was, and how high a level of competence in it was required, for the performance of a given task (for instance, whether a task demanded a very high level of manual dexterity and treated it as very important, or demanded only a low level of it and treated it as of little importance), and
3. Using an algorithm (basically, running these inputs through the formulas I mentioned earlier) to validate the subjective assessments--and, it would seem, using those assessments to validate the algorithm.
They then used the algorithm to establish the probability, on the basis of their features, of the other 632 jobs under study being similarly computerizable over the time frame with which they concerned themselves (unspecified, but inclining to the one-to-two decade range), with the threshold for "medium" risk set at 30 percent and that for "high" risk at 70 percent.
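For those who find a procedure easier to follow as code than as prose, here is a minimal sketch of that three-step process in Python--again, not the authors' own code or data, just synthetic stand-in numbers run through an off-the-shelf Gaussian process classifier, with the kernel choice and the cross-validation step being my own assumptions about how such a validation might be done. Only the counts (70 hand-labelled occupations, 632 others, nine bottleneck variables) and the 30/70 percent thresholds come from the paper as described above.

```python
# Rough sketch of the procedure described above, using synthetic stand-in data.
# Feature values, labels, kernel and validation details are all assumptions.
from collections import Counter

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Steps 1-2: nine bottleneck variables per occupation (random stand-ins for the
# O*NET level/importance ratings), plus the subjective automatable/not labels.
X_labelled = rng.uniform(0.0, 1.0, size=(70, 9))
y_labelled = (X_labelled.mean(axis=1) < 0.5).astype(int)   # 1 = hand-labelled "automatable"

# Step 3: check that the hand-labels can in fact be predicted from the features
# (a stand-in for showing the labels are "systematically and consistently related"
# to the O*NET information), then fit on all 70 labelled occupations.
clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
agreement = cross_val_score(clf, X_labelled, y_labelled, cv=5).mean()
print(f"Cross-validated agreement with the hand-labels: {agreement:.2f}")
clf.fit(X_labelled, y_labelled)

# Score the remaining occupations and bucket them with the paper's thresholds.
X_rest = rng.uniform(0.0, 1.0, size=(632, 9))
probs = clf.predict_proba(X_rest)[:, 1]                    # P(computerizable)

def bucket(p):
    """Risk band per the paper's thresholds: below 0.3 low, 0.3-0.7 medium, 0.7 and up high."""
    return "high" if p >= 0.7 else "medium" if p >= 0.3 else "low"

print(Counter(bucket(p) for p in probs))
```

Even in a toy version the structure makes clear how much rides on those 70 subjective labels: everything downstream, the 47 percent headline figure included, is just a classifier generalizing from them.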
Seeing the reasoning laid out in this way, one can argue that it proceeded from a set of assumptions that were very much open to question. Even before one gets into the nuances of the methodology they used, the assumption that pattern recognition + big data had already laid the groundwork for a great transformation of the economy can seem overoptimistic, the more so as we consider the conclusions to which it led them. Given that the study was completed in 2013, a decade or two works out to (more or less) the 2023-2033 time frame--in which they thought there was an 89 percent chance of the job of the taxi driver and chauffeur being automatable, and a 79 percent chance of the same going for heavy truck drivers (very high odds indeed, and this, again, without assuming any great breakthroughs). Alas, in 2022, with more perspective on such matters, not least the inadequacies of the neural nets controlling self-driving vehicles even after truly vast amounts of machine learning, there
still seems considerable room for doubt about that. Meanwhile a good many of the authors' individual assessments can in themselves leave one wondering at the methods that produced the results. (For instance, while they generally conclude that teaching is particularly hard to automate--they put the odds of elementary and high school teaching being computerized at under 1 percent--they put the odds for middle school teaching at 17 percent. This is still near the bottom of the list from the standpoint of susceptibility, and well inside the low-risk category, but an order of magnitude higher than the figure for teaching at those other levels. What about middle school makes so much difference? I have no clue.) The result is that while hindsight is always sharper than foresight, it seems that had more people actually tried to understand the premises of the paper, we would have seen more skepticism toward its more spectacular claims.