Two years ago it seemed that the energy transition was accelerating, with the price of oil turning negative, coal in collapse, and solar and wind installations breaking records. Now it seems as if it is slowing down again, maybe even grinding to a halt, with the renewables-bashing and fossil fuel-boosting press cheerfully announcing the end of the era of renewable energy's falling prices as fossil fuels make their "comeback" and exact their "revenge."
What happened?
The answer would seem to be an unhappy constellation of events. Among the more consequential is how the abruptness of the economic shock, and of the subsequent "recovery," created an exaggerated sense of just how fast the coal industry's decline was proceeding--and then an equally exaggerated sense of rebound when, at a moment in which utilities needed more power right away, the spare power-generating capacity coal had (as a result of the long decline in its competitiveness) meant takers for its electricity. This was all the more the case given that natural gas, whose use has been expanding so greatly, was, given the tightness of its supply, more susceptible to inflationary shock than those little-utilized coal plants. That same inflationary shock--which hits hardest those sectors running full steam ahead rather than those providing something no one wants (like what, in normal times, is more expensive and more polluting coal-fired electricity)--in raising the price of everything else also raised the price of photovoltaic panels, wind turbines and batteries: a shock to the surprising number of people who do not realize these things are made of materials that pass through global supply chains, and not "magic."
As if all that were not enough, Britain had a summer that somehow managed to be both exceptionally windless and sunless, providing grist to the mill of renewables-bashers who fancy themselves engineers rather than "Git a hoss!"-yelling hecklers when they use words like "baseload" and "scalability." Meanwhile the Biden administration's "Build Back Better" bill joined the now four-decade-long list of Democratic Party initiatives of similar type that, running up against the prevailing political winds, fell far, far short of the promises initially made, or simply never went anywhere (Clinton's stimulus package and climate action plan, Obama's undersized and short-lived stimulus and quick shift from green energy to the fossil fuel-boosting "All of the Above" policy)--a source of additional disappointment for renewables-watchers.
Especially for those who pinned their hopes on the too-long-delayed transition from fossil fuels to alternatives, and amid an otherwise wretched 2020 hoped it would all be onward and upward from there, all of this is deeply dispiriting--the saga gone from A New Hope to The Empire Strikes Back. Yet these developments would seem less fundamental than those cheering for renewables' being defeated yet again would have us believe. Coal's comeback, certainly, seems likely to be short-lived. The commodity price shock affecting renewables affects everything else too, with the result that, especially given the long-term trend toward the cheapening of solar, wind and battery storage, the long-term fall in their price is likely to continue (the more obviously and quickly should governments and central banks, as they promise, get a handle on inflation), reinforcing their position as the winners on price as well as sustainability. Britain had a tough summer, and the renewables-bashers have made the most of it, but there is an argument to be made that Britain's problem was not having too much wind and solar in the mix, but too little. Meanwhile it is worth remembering that even if helpful government support is not forthcoming, renewables have already come a long way in a market where, contrary to the endless complaints of the business press about government favoring them with subsidies, they have been competing with fossil fuels that get government backing far exceeding any help they may have had (in Europe as well as the U.S.)--such that, it bears repeating, in spite of everything that has been done to stop it from happening, photovoltaic solar is the cheapest source of electricity provision around, with wind coming in right after it, while the prospects for further progress here seem brighter than for any of the old rivals.
The result is that despair is no more appropriate or helpful than complacency. Still, even if the long-term trend remains in favor of renewables, the speed of the transition matters greatly, given the economic and ecological stakes--and maximizing that speed would seem the proper object of concern for those wanting to see this change happen.
Wednesday, February 23, 2022
Coal's Supposed Comeback is a Blip. The Future is Renewable.
In 2020 we constantly heard about the collapse of the coal industry. By contrast today the media hails the renaissance of fossil fuels, the centerpiece of which is the "comeback" that coal is reportedly making, with all that would seem to imply for the greening of the energy base.
Fossil fuel boosters and renewables-bashers across the relevant industries and throughout the media are clearly delighted, but for those who had hoped to see fossil fuels generally on the way out, and were relieved to see coal, generally accounted the ecologically worst and most readily dispensable of the lot, finally in terminal decline, this is the kind of nightmare they had hoped was safely relegated to the past. What happened to bring it about?
It would seem the situation reflects the extremity of the shock to the world economy amid the pandemic. In 2020 energy consumption dropped. As a result the coal sector, already increasingly marginal, suffered severely, encouraging the view that it was on its way out. The economic recovery from that extreme shock--combined with the inflation into which many factors, not least expansionary monetary policy, have fed--has seen demand for energy from whatever source soar. Meanwhile coal-fired power stations had been running at lower and lower capacity for a long time (under 48 percent in 2019, 40 percent in 2020), so that even after the massive shutdowns of coal stations over the past decade there was considerable extra capacity available when demand went up, with the result that there was more call on coal-fired plants, which burned through more coal.
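To put rough numbers on the headroom this implies, consider a minimal sketch (in Python). The 200-gigawatt fleet size is a hypothetical round number chosen purely for illustration, not a sourced figure; only the 40 and 48 percent capacity factors echo the ones cited above.

```python
# A minimal sketch of the spare-capacity argument: a fleet running at a
# low capacity factor can raise its output substantially with no new
# construction at all. The fleet size below is a hypothetical round
# number for illustration; only the capacity factors come from the text.

HOURS_PER_YEAR = 8760

def annual_twh(capacity_gw: float, capacity_factor: float) -> float:
    """Electricity actually generated in a year at a given utilization."""
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1000

FLEET_GW = 200  # hypothetical nameplate capacity of the coal fleet

low = annual_twh(FLEET_GW, 0.40)   # 2020-style utilization
high = annual_twh(FLEET_GW, 0.48)  # back at the 2019 rate
print(f"{low:.0f} TWh at 40% CF vs. {high:.0f} TWh at 48% CF "
      f"(+{high / low - 1:.0%} output from the same plants)")
```

The point is simply that a fleet loafing along at two-fifths utilization can meet a sudden demand surge without anyone building anything--which is all the "comeback" amounts to.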
Of course the mobilization of spare coal-fired capacity is one thing. The economic incentive to, for example, build new coal-based electricity generation instead of natural gas-burning facilities, or the renewables-based generation that is cheapest of all, is quite another--a matter not of a short-term crunch that may make established coal capacity economical for a while, but of the longer-term prospect. Especially as there has been no revolution in the efficiency of coal mining and coal-based electricity production, and no really fundamental decline in the efficiency of production from other sources, there seems no basis for thinking that the coal sector has somehow regained its competitiveness with them. Quite the contrary, the longer-term trend seems to have coal's decline continuing. Yet the fact remains that this event is feeding the illusions of the many influential parties that desperately want to believe otherwise--their bias in favor of fossil fuels, the "short-termism" afflicting far too much decision-making in business, and the shallowness of the media's coverage and analysis of the phenomenon encouraging continued investment here, in spite of the warnings that such investment is likely to end up stranded, and at the cost of slowing that already too-oft-delayed shift to other technologies which are likely to prove financially sounder in the long run.
Tuesday, February 22, 2022
Reflections on Thomas Hobbes' Behemoth
I recall knowing about Thomas Hobbes' Leviathan at least as far back as the ninth grade--because I distinctly remember that Hobbes and his Leviathan were both mentioned in the World History textbook the class used, in a chapter which discussed the English Civil War and the beginnings of contemporary political theory. (Yes, contrary to the sniveling of certain political hacks, one is taught about such things as a matter of course in an American public school, even in a part of the United States less than renowned for the quality of its school system.) Some of Leviathan's more significant selections were required reading in my undergraduate years, certainly in my International Relations courses, while I read the book itself cover to cover in grad school.
Yet amid all that I never heard a word breathed about Hobbes' other book Behemoth, which I found my way to through the circuitous route that passed through Franz Neumann's classic sociological study of Nazi Germany, Behemoth. In the volume he discussed Hobbes' book and the significance of his title--that where the Leviathan represented state-imposed order (rational, presumably consent-based and law-abiding, with obligations resting on the sovereign--indeed, if swallowing society "not swallow[ing] all of it"), the Behemoth represented a "non-state . . . characterized by complete lawlessness," which was certainly Neumann's understanding of Nazism.
Given the parallels between "Behemoth" and "Leviathan" I expected to find in the book a complement or companion to Leviathan--but ended up with something quite different. Where Leviathan is a work of political philosophy, Behemoth is a work of political history, recounting the course of events from the outbreak of the English Civil War to the Restoration of the Stuarts in the form of a question-and-answer dialogue, which I suppose is one reason why it has fallen into such obscurity compared with the other book. Rather than a milestone in the development of modern philosophy it is a secondary history of the events, heavy on commentary that is less anything of really deep interest to political theorists than commonplace op-ed-type stuff--which, unexpectedly, proved to be its main source of interest for me. While most such stuff is merely a matter of political hacks repeating the same stale clichés over and over and over again, there was an interest in seeing those clichés, the very same ones, being uttered in relation to the events of almost four hundred years ago. In discussing the causes of the English revolution Hobbes (who was, of course, an opponent) chalked it up to how self-aggrandizing intellectuals and the pernicious influence of the universities corrupted the young "of the better sort" rather than "bringing up . . . young men to virtue"; how the young were susceptible to a dubious idealism, and even others not so young might be seduced by foreign ways; how intellectuals and their democratizing notions "corrupted" a swinish multitude "ignorant of their duty" and ever ready to follow any side pandering to their vanity and offering the best prospects with regard to "pay and plunder"; upsetting even the most established and happiest of orders--like some latter-day ranter against "Marxists."
Still, what made the accusations really interesting was that while they were familiar, the targets against which he directed them were different, profoundly different, from those of today--as with the foreign ways in question, the kinds of people he regarded as troublemakers, what he considered the ignorance of the commons to consist in, and what sort of persons he saw cozening them. Today no one pontificates more than the right (in the English-speaking countries, anyway) about liberalism being some wholly Anglo-Saxon invention, all but derived from the very DNA of the English-speaking peoples, who had always been and always would be natural liberals (if not without significant variations, with American centrists championing their own). But as a conservative three centuries back Hobbes was emphatic about liberalism being a pernicious foreign notion--governance such as the Dutch enjoyed (the notion that a "like change of government" such as the Dutch had had "would to them produce the like prosperity" in England) being in his view an invasive species of weed that no one had any business planting in the soil of Merrie England, with Hobbes the one who sounds like what a right-wing writer less than fastidious with his words would today call a Marxist in his attacks on the "entrepreneurial." (Thus did he denounce those "merchants, whose profession is their private gain" and whose "only glory [is] to grow excessively rich by the wisdom of buying and selling," and whose "setting the poorer sort of people on work," supposed to make their calling "the most beneficial to the commonwealth," he regarded in actuality as merely "making poor people sell their labour to them at their own prices . . . to the disgrace of our manufacture"--while, more important still, assailing them for the anti-tax mentality that has them throwing in their lot with rebellions, unleashing forces they cannot control.) The right exalts the Classical heritage, and maligns the left for, in its view, failing to respect it--but here Hobbes lamented the way the Classics had filled those better-sort young men's heads with republican ideas ("the books written by famous men of the ancient Grecian and Roman commonwealths . . . in which . . . the popular government was extolled by the glorious name of liberty, and monarchy disgraced by the name of tyranny"). And where the right today wishes more people would read the Bible, it was for Hobbes a great tragedy that so many did so (the translation of the book into English meaning that "every boy and wench, that could read English, thought they spoke with God Almighty, and understood what he said," again upsetting order in the kingdom), his usage imbuing the word "Presbyterian" with the same ring "Jacobin" or "Bolshevik" would have for a later era.
All that--just part of the bigger look the book offers at how the events were perceived at the time by an observer who happens to be one of the most foundational political theorists of modern times--meant it ended up being worth the read. I would say it deserves rather more attention than it has got in an era when even what pass for our experts prove themselves frighteningly lacking in either a grasp of the most basic ideas of political philosophy or the slightest degree of historical perspective.
Stranded Trillions? A Note on RethinkX's Report on "The Great Stranding"
In recent years the RethinkX think tank has been the source of some of the more interesting analysis of the major problems facing the world today (energy, climate, food production)--its treatment of those issues theoretically rigorous, data-based, bold in its conclusions, and, while not slighting the challenges the world faces, managing to offer intelligent arguments for something other than hopelessness in the face of exceedingly daunting difficulties and grave dangers. Where the climate-energy problem is concerned they see the solution lying in the falling price of renewable-supplied electricity, a pattern RethinkX's authors have compared to the falling price of computing power.
Recently the RethinkX team's Adam Dorr and Tony Seba produced a report ("The Great Stranding: How Inaccurate Mainstream LCOE Estimates are Creating a Trillion-Dollar Bubble in Conventional Energy Assets") on one of the less obvious consequences of the kind of energy transition they anticipate--namely that new investment in older, conventional electricity generation from coal, gas, nuclear and hydro power will prove "stranded," unviable investment. This will be due not to changes in laws forcing "greener" energy on utilities, or government subsidies helping to make it happen, but rather the sheer market forces that have already made renewables cheaper than the older options, such that photovoltaic Solar and onshore Wind, in combination with likewise cheapening Battery backup ("SWB"), are well on their way not only to being the cheapest source of electricity around by 2030, but to providing electricity at "near-zero marginal cost." The result is that selling electricity provided by the other sources at anything approaching break-even cost, never mind a profit, becomes an increasingly rare occurrence, driving down the capacity usage of fossil fuel, nuclear and hydro-based generation to financially unviable levels--coal's recent past the very near future of the rest. (America's dwindling, aging coal plant fleet, in spite of government solicitousness, and in spite of the improvement of the average by the shutting down of the least competitive plants, averaged a mere two-fifths utilization rate as of 2020, compared with two-thirds in 2010. As a result coal's share of American electricity production fell by half, from 40 to just 20 percent. As RethinkX acknowledges, cheap fracking-produced gas was the principal cause of coal's decline, but even cheaper SWB will factor more heavily in every case in the years to come.)
In spite of this recent history (to say nothing of the pressure on business from civil society to move toward zero-carbon operations) business is continuing to invest trillions in indifference not only to the ecological implications but to the dollars-and-cents facts as RethinkX reports them--implying enormous confidence in the enduring viability of those more conventional sources. One possible conclusion is that investors know something the rest of us do not. The other is that they are profoundly misinformed and making a very, very big mistake. It is the latter conclusion that RethinkX draws, holding that the investment is being driven by deeply flawed studies from such institutions as the Energy Information Administration (EIA) that greatly exaggerate the likely capacity usage of all those traditional sources, and thus greatly understate the "Levelized Cost Of Electricity" (LCOE)--the total of the associated expenses divided across the kilowatt-hours generated. To cite a glaring example, the EIA presumes an 85 percent capacity usage rate for coal all the way through 2060, a figure absolutely without basis in recent history--the average capacity usage was just 56 percent in 2010-2019, 52 percent in 2015-2019, and 47.5 percent in 2019, the trend fairly consistently one of decline even before the exceptional year of 2020 dragged the figure down so much further in the way noted above. Using more realistic figures, which have electricity providers recouping their costs from far lower capacity usage--far fewer kilowatt-hours sold--translates to a higher LCOE, with the real per-kilowatt-hour LCOE of a coal plant established today not 7.6 cents as the EIA says, but 32.4 cents, over four times as much. So it goes with the other sources (gas, nuclear, hydro), with the gap only growing over time (a coal-fired plant established in 2030 likely having to sell its electricity at 65 cents per kilowatt-hour).
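The mechanics of the capacity-factor critique are easy to see in a toy LCOE calculation. To be clear, the plant parameters below (capital cost, fixed O&M, fuel cost, discount rate, lifetime) are illustrative assumptions of mine, not the EIA's or RethinkX's actual inputs; only the capacity factors echo the figures just discussed.

```python
# Toy LCOE calculation showing its sensitivity to the assumed capacity
# factor. All plant parameters here are illustrative assumptions, NOT
# the EIA's or RethinkX's inputs; only the capacity factors mirror the text.

def capital_recovery_factor(rate: float, years: int) -> float:
    """Annualize an upfront cost over a plant's life at a discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe_cents_per_kwh(capex_per_kw: float, fixed_om_per_kw_yr: float,
                       variable_cents_per_kwh: float, capacity_factor: float,
                       years: int = 30, discount_rate: float = 0.07) -> float:
    """Annualized fixed costs spread over the kWh actually sold, plus
    per-kWh variable (fuel and operating) costs."""
    annual_fixed = (capex_per_kw * capital_recovery_factor(discount_rate, years)
                    + fixed_om_per_kw_yr)
    kwh_per_kw_year = capacity_factor * 8760  # hours in a year
    return annual_fixed / kwh_per_kw_year * 100 + variable_cents_per_kwh

# Hypothetical coal plant: $3,500/kW capital, $40/kW-yr fixed O&M,
# 2.5 cents/kWh fuel and variable costs -- at the EIA's assumed
# utilization versus 2019's actual rate.
for cf in (0.85, 0.475):
    print(f"CF = {cf:.1%}: {lcoe_cents_per_kwh(3500, 40, 2.5, cf):.1f} cents/kWh")
```

Because the fixed-cost share of LCOE scales as the inverse of the capacity factor, cutting assumed utilization roughly in half nearly doubles that share; RethinkX's far larger gap also reflects utilization continuing to fall over the plant's whole operating life, not just in its first year.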
The RethinkX authors are quite blunt about the ill-foundedness of the more commonly touted figures (a "dogma . . . promulgated by a small number of self-appointed authorities within the electric power sector . . . that confirms and amplifies a fixed set of thoughts, beliefs and biases" in favor of conventional energy sources and against renewables), the result of which, they hold, has been a "financial bubble" in coal, gas, nuclear and hydro that they compare to the subprime mortgage bubble. This is not least in regard to the risk that, after massive investments have been made that were not just ecologically terrible but quite stupid from a business standpoint, the public will be called on to rescue the parties that made them, at its own expense, in the manner seen time and again these past four decades. (Dorr and Seba, of course, argue that any such bailout should be opposed, and that in the meantime everyone should do what they can to minimize the danger of such an outcome--investors steering clear of such investment, and financial lenders and government overseers not enabling those who would make the investment anyway.)
It is another bold prediction from the RethinkX team--the more striking because so few are questioning, on financial (or for that matter, any other) grounds, the massive investment in these older electricity-generation technologies we are seeing. Indeed, it can seem as if the mainstream of the business world (and the media prone to faithfully promulgate its prejudices) on the one hand, and RethinkX on the other, are in two entirely different worlds. Which of them is living in the real one, and which in the fantasy? As it happens, the former have been relentlessly biased in favor of fossil fuels and nuclear power, and ferociously hostile to renewables, with all the fury of established businesses (and their allies) looking at a potentially disruptive force--and have consistently underestimated, and even strained to deny, the long-term improvement in the economic viability of renewables (something to remember when looking at today's headlines, with their clearly delighted proclamations of a "comeback" for fossil fuels and nuclear as, overlooking the way inflation is affecting the price of everything, they gloat about the supposed end of an era of falling renewables prices).
For its part RethinkX has been known to be overoptimistic about the rapidity of dramatic and potentially beneficent technological change, as its earlier report on Transportation-as-a-Service makes clear. Still, in the case of self-driving cars RethinkX was wrong about a technology that was as yet poorly understood by its own developers (the potential of today's machine learning, running on today's computers, to acquire an imperfectly specified level of competence at driving), which necessarily made any extrapolations that much more tenuous. By contrast, when it makes comparisons between solar and coal, for example, it is discussing developments that have for the most part already happened, not only in its view but in the view of other, less audacious, observers as well. (Lazard's report of last October had unsubsidized utility-scale solar running approximately $30-$40 and wind $25-$50 per megawatt-hour, versus $45-$75 for combined cycle gas, $65-$150 for coal, and $130-$200 for nuclear.)
Once again, solar and wind are cheaper than the established sources--sufficiently cheaper that even with current storage costs added in (which may add $20-$40 per megawatt-hour), they are, and are likely to remain, a good deal cheaper than coal or nuclear generally, and even competitive with gas in many cases. Accordingly the question becomes how much more progress one can expect in further lowering solar, wind and battery costs. As it happens the drop in the median price of solar has slowed, going from perhaps 13 percent per year in 2010-2015 to 7 percent per year in 2015-2020. The fall in the price of wind power was almost as brisk, some 6 percent a year over that same five-year period. The fall in the price of energy storage capacity has been more impressive still, an 18 percent a year drop in 2015-2020. Were the next five years to see only half those rates of progress across the board we would still see a one-sixth drop in the price of solar, a one-seventh drop in the price of wind, and a better than one-third drop in the price of batteries, meaning that in 2025 we would start to find ourselves looking at solar-and-wind-with-storage prices beginning under the $40 per megawatt-hour mark in today's terms--cheaper than the low end for combined cycle gas. Were the rate of improvement for 2025-2030 to average just half of the already reduced rate of the preceding five years, we would at the end of that period see solar and wind 20-25 percent cheaper than they are today, and battery storage 50 percent cheaper, giving us battery-equipped solar-and-wind power for as little as $30 per megawatt-hour, able even at the high end of the price range ($60) to compete with medium-priced gas at today's, and likely tomorrow's, rates. (And in the view of proponents like RethinkX there is plenty of reason to expect much, much better than that.)
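For anyone who wants to check the compounding behind those figures, here is the arithmetic as I read it, using only the decline rates given above (7 percent a year for solar, 6 for wind, 18 for storage, each halved for 2020-2025 and halved again for 2025-2030):

```python
# The compounding arithmetic behind the paragraph above, using only the
# decline rates it cites: 7%/yr (solar), 6%/yr (wind), 18%/yr (storage)
# for 2015-2020, halved for 2020-2025 and halved again for 2025-2030.

def remaining_fraction(annual_decline: float, years: int = 5) -> float:
    """Fraction of today's price left after compounding an annual decline."""
    return (1 - annual_decline) ** years

for name, rate in [("solar", 0.07), ("wind", 0.06), ("storage", 0.18)]:
    by_2025 = remaining_fraction(rate / 2)            # half the past rate
    by_2030 = by_2025 * remaining_fraction(rate / 4)  # half that again
    print(f"{name:7s}: {1 - by_2025:5.0%} cheaper by 2025, "
          f"{1 - by_2030:5.0%} cheaper by 2030")
```

Run, this gives roughly a 16 percent (one-sixth) drop for solar and 14 percent (one-seventh) for wind by 2025, a 38 percent (better than one-third) drop for storage, and by 2030 solar and wind about 20-25 percent cheaper and storage about 50 percent cheaper--matching the figures in the paragraph above.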
The result is that unless per-kilowatt-hour progress in renewables and batteries abruptly comes to a halt in the next few years, or technologists dealing with the already very mature technologies of coal, gas, Generation II atomic power and hydro pull off a technical miracle (or rather, several of them), it is very hard to picture RethinkX being far off the mark about not just solar but the whole SWB combination beating the competition on price by 2030, as they predict. Accordingly it seems to me RethinkX's prediction deserves far more attention, and far more respectful attention, than it has been accorded to date--and indeed, far more respect than most of the "experts" the media so relentlessly flatters and promotes even as they so consistently show themselves lacking in the competence that business and society alike so sorely need.
Thursday, February 10, 2022
The Backlash Against Cellular Agriculture Arrives--in All its Incoherence
I have recently had occasion to remark that technological hype (not just about single products, but about the rate of progress generally) tends to rise and fall in cyclical fashion, with boom followed by bust. Back in the mid-'10s we had a big bubble indeed, with huge expectations regarding artificial intelligence, virtual reality, and much else. This has, of course, long since fallen through, though the shift from treating these technologies as all but inevitable in the very near term to sneering at the thought that they would ever appear set in earlier with some elements of the package than others. Certainly self-driving vehicles were such a case, the dismissals already commonplace years ago. By contrast the sneers at cellular agriculture were slower in coming--but intense negativity has since got the upper hand, a significant element in this being Joe Fassler's article about it over at The Counter, which has seen some drawing comparisons between the cellular agriculture startups and Elizabeth Holmes' Theranos. (Ouch!)
Still, while all this would certainly seem to be part of the downward turn of that familiar cycle, the backlashes against particular technologies, just as they tend to have their own particular timing, tend to have their own particular features. Certainly this was the case with self-driving cars, which many influential persons and interests find objectionable for reasons having absolutely nothing to do with infeasibility or practical shortcomings.
So it goes with cellular agriculture. Certainly there is no arguing with the reality of overoptimism, evident in how, by 2022, people were supposed to have been buying the stuff off the market for years, whereas in 2022 the promises of the bullish still have it an indefinite way off, as those looking for substitutes for conventional meat are instead offered the non-meat of plant-based products that remain unconvincing to the discerning meat-lover (the more frustrating as, in spite of their lower resource inputs, they never seem to offer any relief to the wallets of hard-pressed consumers). Yet there is also the reality of a "culture war" between those who hope for technological solutions to the world's ecological and other problems and the Luddite-Malthusians disdainful of the same; and in particular the culture war within the culture war between meat-eaters hoping for "meat without guilt" and vegans treating veganism as an end in itself, such that even if we could have sustainable, eco-friendly, totally animal cruelty-free meat they would still refuse it, and carry on with the missionary work of encouraging others to do the same.
As is always the case when issues get sucked into kulturkampf, the result is a muddle--with perhaps the most obvious aspect of that muddle the reality that cellular agriculture is not solely about meat production. While meat, due to its greater inputs, higher price and other problems, has been the more obvious first target for producers, cellular agriculture can equally be used to produce "clean veggies" and "clean fruit," which ought to be of interest even to vegans. After all, contrary to what some seem to believe, the ecological problems of modern industrial agriculture do not begin and end with meat production. Meanwhile societies everywhere face the hard facts of growing populations, more uncertain weather and a rising risk from pests due to climate change, a looming nitrogen and phosphorus crisis, an ever-more volatile international financial scene ever-ready to wreak havoc with commodity prices, and the disruptions attendant upon that resurgent danger about which far too many people seem to me to be far, far too casual: war. (How much are we hearing about the risk of a major war over Ukraine hitting global food supplies? Not nearly enough.) In cellular agriculture we have a technology with the potential to delink the production of essential foodstuffs from open-field production in particular geographic locales--an object of a significance far beyond "meat-lovers vs. vegans"--and that so many seem to think of the issue in the latter terms is yet another depressing reminder of the lousy job the media does in its science and technology reporting, and of the equally depressing consistency with which cultural conflicts drag the discourse down, down, down, far below the lowest intellectual common denominator.
Friday, January 28, 2022
Our Automotive Dystopia and the Hope of the Self-Driving Car
About eighty years ago a classic science fiction short story painted a dystopian picture of a nightmare world where cars made possible vastly bigger cities--after which, as the ratio of cars to humans approached one to two, those cities "choked" on those cars, with "[s]eventy million steel juggernauts, operated by imperfect human beings at high speed" proving "more destructive than war" of property and human life, with obscene insurance premiums and, of course, the squandering of a finite oil supply into the bargain.
The solution the story envisioned was a colossal solar-powered public transport system launched as a public works program.
I can imagine that a good many readers are sneering at this story as some "socialist," "hippie" vision. But it was actually the furthest thing from that. The story is "The Roads Must Roll," by Robert Heinlein, published in the science fiction magazine Astounding when John Campbell was running the show--and in its depiction of a power grab by "Functionalists" a (very) thinly veiled right-wing attack on organized labor and Marxism.
The result is that its looking like a leftie vision is a matter of how times have changed, with our attitudes toward cars, oil, solar energy and public works sucked into the "culture wars" rather than treated as objects of rational appraisal--and it has to be acknowledged also, our having come to take what the story presented as a dystopia utterly for granted--so much so that it seems worth discussing the costs of that "way of life." Back in 2010 a study found that car crashes cost the U.S. economy some $1 trillion a year. (Putting it another way, if the cost of those crashes were a whole national economy by itself it would likely be in the top twenty globally--and one could guess that the figure has only gone up in the past decade.) That same year the country saw nearly 4 million injured in such accidents, 33,000 fatally. (More destructive than war, indeed!) Lest it need saying, all this has given us a $300 billion a year car insurance industry whose premiums weigh heavily on the budgets of the motorists that very few can escape becoming, while the problem cars pose from a natural resources standpoint should need no elaboration these days.
Yet anyone who questions that this is the best of all possible worlds is apt to get a hostile reaction, even when no one mentions anything anywhere near so radical as solar-powered moving roadways--as we see in the wildly exaggerated sneering at the prospect of self-driving cars, or the turn to Transportation-as-a-Service that this might make possible, or even the electrification of the automotive fleet (with, par for the course, the media generally treating the weakest arguments on these scores with great respect and their opponents with none). However, I for one am prepared to declare that a world where we move beyond tens--hundreds--of millions of gas-powered steel juggernauts driven at high speed by imperfect humans as our default way of getting about is likely to be a better one. Indeed, I look forward to a day when people look back at our era and shudder at the insanity of a society that actually relied on the alertness, reflexes and judgment of human beings who were so often sick, tired, distracted, irascible, angry or worse to control such juggernauts crowded together on superhighways as a default mode of transport.
Revisiting George Friedman's The Next Decade
After publishing his geopolitical forecast for the twenty-first century in The Next 100 Years (2009) George Friedman endeavored to provide a more detailed, explanatory and prescriptive discussion of his expectations for the decade just ahead in The Next Decade (2011).
This being 2022 we can look back and consider how he did. My thought is that while he made some good points in the book he did not do very well, particularly where his larger and more radical predictions were concerned. Consider the following:
* Friedman predicted that in the wake of the Great Recession the balance of power in economic life would shift back from the private sector to the public, with coming years looking something more like the post-war period. Of course, no such thing happened. The neoliberal model may stand in lower credit with the general public than ever, but it remains the conventional wisdom of business, government, academia, the media--and the few compromises required of policymakers hewing to the line have been slight indeed. (Britain, for example, may have left the EU--but the neoliberal standards of privatization, deregulation, profligacy with corporate welfare and stinginess with the general public remain the order of the day, as does the foundation of economies on globalization and financialization.)
* Friedman envisioned the U.S. facing a rapprochement between Russia and Germany producing a Paris-Berlin-Moscow axis. Of course, the reality has been quite different.
* Friedman, who has never let go of his once headline-grabbing and since much-derided vision of the U.S. and Japan as geopolitical rivals (The Coming War With Japan), or of his pessimistic appraisal of China's prospects, once again predicted China's proving an also-ran in the 2010s, and the U.S. becoming more concerned with checking Japan's power instead. Again, this is not exactly how things have gone.
As readers of this blog know I think it simple-minded to sneer at prediction, and even forecasting, as so many do (with outrageous smugness). After all, as Nicholas Rescher made clear, we have no choice but to constantly make choices based on expectations of future conditions and the future outcomes of our choices, and most of the time we are correct. (For example, when we go to work in the morning we expect that our workplace will be there and be operative, and we are usually right about that.) In fact we take our capacity for correct prediction so much for granted that we are only aware of making predictions when we face the relatively small number of matters where prediction proves trickier. Still, the difficulty does not in itself give us an "out": we still find ourselves forced to choose, and our only option is to make as accurate a prediction as we can, not least by learning from our past mistakes in such situations. For me that, and not the denigration of prediction, is the reason to revisit Friedman's work and consider where he went wrong--which, it seems to me, was in his reasoning from the wrong premises and indulging biases that had proved unhelpful in the past (his superficial grasp of political economy, his attitudes toward Germany and China, his endless attempts to rehabilitate his prediction about Japan). Alas, incorrect premises and problematic biases are not at all rare in the business--and unfortunately the media has been more inclined to simple-mindedly make such "experts" into authorities than to appraise what they have to offer.
Tuesday, January 25, 2022
Carbon Nanotube-Based Microchips: A (Very Short) Primer
As Moore's Law runs its course those looking forward to continued improvements in computing power necessarily turn their attention in other directions--not least toward replacing silicon with other materials as the basis of our integrated circuits. For decades one of the more promising possibilities has been the use of carbon nanotubes.
Carbon nanotubes are one of many materials formed from carbon atoms arranged in hexagonal (six-sided) rings, themselves arranged in a lattice, with each atom connected to and forming part of the adjacent rings. Left as a flat sheet, the resulting material is called graphene. In a nanotube the "sheet" is rolled up, its opposite edges joined, to form a hollow cylinder.
These nanotubes' ultrathin bodies and smooth walls enable charges to flow through them more rapidly, and at a lower supply voltage, than is the case with silicon, making them potentially faster and more energy-efficient. Their smoother, more energy-efficient structures may also make them a more suitable material than silicon for denser "3-D" chip designs--which would not just put more transistors on a single 2-D surface the way conventional chips do, but layer those 2-D surfaces on top of one another to cram more computing power into a given space. Altogether the combination of attributes has led to speculation about their permitting a thousandfold gain in computing speed over silicon, equivalent to about ten more doublings and an additional couple of decades of Moore's Law-like progress--which as a practical matter would convert today's personal computers into machines comparable to today's supercomputers.
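For those who want to check that equivalence, a quick back-of-the-envelope sketch in Python (the two-year doubling time is an assumption on my part--one of the commonly quoted cadences, not anything from the nanotube literature):

import math

# How many doublings make a thousandfold gain, and how many years of
# Moore's Law-like progress would that represent at a two-year cadence?
speedup = 1000
doublings = math.log2(speedup)          # ~9.97, i.e. about ten doublings
years_per_doubling = 2                  # assumed cadence
print(f"{doublings:.1f} doublings, ~{doublings * years_per_doubling:.0f} years of progress")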
Of course, in considering all this one has to note that there has been a long record of great expectations and great disappointments in regard to the mass-production of carbon nanotubes, not least because of the old "works well enough in the lab but not ready for real life" problem of consistently getting the required quality at a competitive price when producing at the relevant scale. (Back in 2014 IBM said it would have them in 2020. Well, 2020 has come and gone, with the arrival of the technology, like that of so many others, deferred indefinitely into the future.) Still, it is one thing to acknowledge that the technology has been slower to emerge than hoped, another to write it off. It may well be that the belated arrival of carbon nanotube-based chips, and the boost they would deliver to the continued progress of computing power, will be what opens the way to the next great round of advance in an artificial intelligence sector that, it may turn out, has been held back from realizing its promises only by the limits of the hardware.
Understanding Moore's Law
Those of us attentive to computing are generally familiar with the existence of something called "Moore's Law." However, a really satisfying explanation of what Moore's Law actually is would seem a rarer thing--one result of which is a great deal of confusion about what it means.
Simply put, Moore's Law has to do with "integrated circuits," or, in more everyday usage, microchips--small wafers ("chips") of semiconducting material, usually silicon, containing an electronic circuit. Within these chips transistors amplify, regulate and switch the electric signals passing through them, enabling them to store and move electronic data. Placing more transistors inside a chip means that more such activity can go on inside it at once, giving the chip, and the device incorporating it, more "parallelism"--the ability to do more at once, and therefore to work faster. All other things being equal one can only put more transistors on the same-sized chip if the transistors are themselves smaller--which means that the electrons passing through them travel shorter distances, which increases the speed at which the system executes its operations yet again.
Since the integrated circuit's invention in the late 1950s manufacturers have steadily increased the number of transistors in their chips by shrinking transistor size--a process that also caused the cost of each transistor to fall. In 1965 electronics engineer Gordon Moore published a short paper titled "Cramming More Components Into Integrated Circuits" in which he noted that the "density at minimum cost per transistor" doubled every year. He extrapolated from that trend that in the next five years they would have chips with twenty times as many transistors on them, each costing just a tenth of their 1965 price, and that this pattern would continue for "at least ten years."
Moore's prediction (which, it is worth recalling, he never called a "law") was inexactly borne out during those years. He proved somewhat overoptimistic, transistor density not quite doubling annually, and today different versions of this "law" get quoted with varying claims about doubling times. (Some say one year, some say eighteen months, some say two years, while claims about the implications for processing power and price also vary.) However, the swift doubling in the number of transistors per chip, and the fall in the price of computing power that went with it, continued for a lot longer than the ten years he suggested, going on for a half century past that point. The result is that where an efficiently made chip had fifty transistors on it in 1965, chips now contain billions of transistors--all as the low price of these densely transistorized chips means that hundreds of billions of them are manufactured annually, permitting them to be stuffed into just about everything we use.
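A quick sketch of what that compounding means in practice--assuming a two-year doubling time (again, one of the commonly quoted figures) and taking the fifty-transistor 1965 chip mentioned above as the starting point:

# Compound the 1965 figure of 50 transistors at a doubling every two years.
transistors = 50
for year in range(1967, 2021, 2):   # one doubling per two-year step
    transistors *= 2
print(f"Implied transistor count by 2020: {transistors:,}")
# ~6.7 billion -- the right order of magnitude for a modern high-end chip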
Nonetheless, Moore's Law has certain in-built limitations. The most significant of these is the physical limit to transistor miniaturization. One cannot make a silicon transistor smaller than about a nanometer (a billionth of a meter, a distance spanning just a few atoms), while even before one gets to that point shrinkage makes transistors so small that the electrons whose movements they are supposed to control actually pass (or "tunnel") through their walls.
Of course, when Moore presented his "Law" the prospects of atom-scale transistors, or even tunneling, seemed remote in the extreme. Transistors in 1971 were drawn on a ten micrometer (millionth of a meter) scale--ten thousand nanometers in the terms more commonly discussed today. However, by 2017 the transistors in commercially made chips were just a thousandth their earlier size, a mere ten nanometers across. The following year the mass-production of mere seven nanometer transistors got underway at the major chipmakers, leaving very little room for further size reductions.
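Put in numbers (a sketch using the round figures quoted above; the density calculation ignores everything about chip layout except the linear shrink):

# A transistor's linear scale fell from ~10 micrometers (1971) to ~10 nm (2017).
feature_1971_nm = 10_000      # ten micrometers, expressed in nanometers
feature_2017_nm = 10

linear_shrink = feature_1971_nm / feature_2017_nm   # 1,000x
density_gain = linear_shrink ** 2                   # area scales with the square
print(f"Linear shrink: {linear_shrink:,.0f}x")
print(f"Implied transistors per unit area: ~{density_gain:,.0f}x")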
This has led a good many observers to declare that "Moore's Law is dead," or will be before too much longer. The claim is controversial--perhaps more than it ought to be. After all, no one disputes that chip speeds cannot continue to increase on the basis of reducing the size of the transistors on silicon wafers--and that is exactly what Moore's Law was concerned with, not the possibility or impossibility of continued progress in computing power. The result is that those who are convinced that the tendency to the exponential increase of computing power is virtually bound to continue as before might do better to set aside claims for Moore's Law continuing, and instead speak of Ray Kurzweil's "Law of Accelerating Returns."
How Powerful Would a Genuinely Thinking Computer Have to Be?
Discussing the prospect of a computer matching or exceeding human intelligence we find ourselves forced to consider just how it is that we measure human intelligence. That in itself is an old and difficult problem, reflecting the reality that there remains considerable disagreement about just what precisely human intelligence even is. However, one approach that has been suggested is to consider the human brain as a piece of computer hardware, and attempt to measure its apparent capacity by the yardsticks we commonly apply to computers. Based on that we then identify the minimum hardware performance a computer would have to have in order to display human-like performance.
How do we go about this as a practical matter? By and large it has been standard to measure computing power in terms of the number of calculations a computer can perform per second. Of course, there are a variety of kinds of calculation, but in recent years it has been common to think specifically in terms of "floating-point operations," in contrast with simpler "fixed point" operations. (Adding 1.0 to 2.0 to get 3.0 is a fixed point operation--the decimal in the same place in all three numbers. The addition of 1.2570 to 25.4620 to get 26.7190 is a floating point operation, in that the decimal point sits in a different place in each number.) Indeed, anyone delving very deeply into the literature on high-end computers quickly encounters the acronym "FLOPS" (short for FLOating-point operations Per Second) and derivatives thereof, such as the "teraflop" (a trillion flops), the "petaflop" (a quadrillion flops--a thousand teraflops) and the "exaflop" (a quintillion flops--a thousand petaflops, or a million teraflops).
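The ladder of units, set out explicitly (nothing here beyond the powers of ten just named):

# The FLOPS units named above, as powers of ten.
units = {
    "teraflop": 10**12,   # a trillion operations per second
    "petaflop": 10**15,   # a thousand teraflops
    "exaflop":  10**18,   # a thousand petaflops, a million teraflops
}
for name, flops in units.items():
    print(f"1 {name} = {flops:.0e} floating-point operations per second")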
With computers' performance measured in terms of floating-point operations per second, those speculating about artificial intelligence attempt to equate the human brain's performance with a given number of flops. Among others, Ray Kurzweil published an estimate in his 1999 book The Age of Spiritual Machines, since revised in his 2005 The Singularity is Near. His method was to take a part of the nervous system, estimate its performance in FLOPS, and then extrapolate from that to the human brain as a whole. Working from the estimate that an individual synapse is equivalent in performance to a two hundred flop computer, and that the human brain contains some hundred trillion synapses, he conservatively arrived at a figure of some twenty quadrillion (thousand trillion) floating-point operations per second--twenty petaflops--then suggested that the brain may actually run at about half that speed, ten petaflops sufficing.
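The arithmetic behind that figure is simple enough to set out (a sketch of Kurzweil's published numbers, not a claim about the brain itself):

# Kurzweil's estimate reconstructed: ~200 flops per synapse times
# ~100 trillion synapses.
flops_per_synapse = 200
synapses = 100e12                        # a hundred trillion

brain_flops = flops_per_synapse * synapses
print(f"{brain_flops:.0e} flops = {brain_flops / 1e15:.0f} petaflops")
# 2e16 flops = 20 petaflops, with Kurzweil's "half that speed" giving 10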
In considering this one should note that other analysts have used quite different approaches, from which they produced vastly higher estimates of the brain's performance. This is especially the case when they assume the brain does not produce consciousness at the level of nerves, but rather at the level of quantum phenomena inside the nerves. (Jack Tuszynski and his colleagues suggested that not tens of quadrillions, but tens of trillions of quadrillions, of operations per second would be required.) Of course such "quantum mind" theories (the best known exponent of which is probably The Emperor's New Mind author Roger Penrose) are extremely controversial--as yet remaining broadly philosophical rather than scientific in the strict sense, with no empirical evidence in their favor, and indeed, critics regarding such notions as mystical in a way all too common when people delve into quantum mechanics. Still, the idea that Kurzweil's estimate of just how much computing power a human brain possesses may be too low by a couple of orders of magnitude is fairly widespread, popular science articles commonly citing the figure as an exaflop (a thousand petaflops).
Still, it can be said that the most powerful supercomputers have repeatedly attained and increasingly surpassed the level suggested by Kurzweil over the past decade. The Fujitsu "K" supercomputer achieved ten petaflops (ten quadrillion floating-point operations per second) back in November 2011. It also had a 1.4 petabyte memory, about ten times Kurzweil's estimate of the human brain's memory. Moreover, the K has been exceeded in its turn--by dozens of other supercomputers according to the latest (November 2021) edition of the TOP500 list of the world's fastest systems, in some cases by orders of magnitude. At the time of this writing the fastest appears to be another Fujitsu machine, the Fugaku, with a performance of 442 petaflops--some forty times Kurzweil's estimate of human brain performance. And of course, computer scientists have already set their sights higher than that. Among the current efforts is a joint project by the Department of Energy, Intel and Cray to build the Aurora, which is intended to be an exaflop-level machine--as a matter of course, running a hundred times as many calculations per second as Kurzweil's estimate of the human brain's performance--while even that seems modest next to a report, out this very day, that the I4DI consortium is shooting for a 64 exaflop machine by the end of this year (equivalent to sixty times those higher estimates of the brain's performance, and six thousand times Kurzweil's estimate).
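To make those comparisons easy to eyeball (machine figures as reported above; the two brain estimates are Kurzweil's ten petaflops and the popularly cited exaflop):

# Comparing the machines named above against the two brain estimates discussed.
kurzweil_pf = 10          # Kurzweil's lower figure, in petaflops
exaflop_pf = 1000         # the popularly cited exaflop figure, in petaflops

machines_pf = {"Fujitsu K (2011)": 10, "Fugaku (2021)": 442, "Aurora (planned)": 1000}
for name, pf in machines_pf.items():
    print(f"{name}: {pf / kurzweil_pf:.0f}x Kurzweil, {pf / exaflop_pf:.2f}x the exaflop estimate")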
Reading this one may wonder why Kurzweil's hypothesis about such a computer matching or exceeding the brain's capacity has not already been tested, with results pointing one way or the other. The reality is that in practice supercomputers like these, which are as few as they are because they are so hugely expensive to build (the Fugaku is a billion dollar machine) and to run (their voracious energy consumption is a constant theme of discussion of such equipment), are normally used by only the biggest-budgeted researchers for the most computationally intense tasks, like simulations of complex physical phenomena such as the Earth's climate or the cosmos--or code-breaking by intelligence services. They have only rarely been available to artificial intelligence researchers. However, amid the recent enthusiasm for artificial intelligence the needs of AI researchers have reportedly been a factor in the development of the next round of supercomputers (not least because of the utility of AI in facilitating their work).
Especially with this being the case it seems far from impossible that such access will yield new insights into the subject--just as it was the availability of faster computers that permitted the striking advances in areas like machine learning that we saw this past decade. Indeed, even as the recent excitement over artificial intelligence turns into disappointment with the realization that the most-hyped applications (like Level 5 self-driving) are more remote than certain loud-mouthed hucksters promised, the continued expansion of computing power offers considerable grounds for not writing those prospects off just yet.