As I have remarked time and again, a very large part of contemporary environmentalism--and certainly its more mainstream portion--has been founded on the darkest, most pessimistic Counter-Enlightenment thinking. Premised on postmodernism and Malthusianism; regarding the broad public as a "swinish multitude"; and denying any hope of positive social or political change, it screams about the danger--while being incapable of offering anything in the way of hope.
The result is that defeatism is all that is left to it, one expression of which is its haste to tell everyone that they are about to be dehoused. Live on a coastline? Sorry, you will just have to move. (They seem to especially love telling this to people in places like Miami and New Orleans--their "party town" image, perhaps, which proper Malthusians cannot possibly approve.) Live in an arid region? That's not going to be viable anymore. (The coastal folks have too much water, but they are going to have too little.) The tropics? Better find somewhere else to live. (Go north--to whatever isn't too coastal or too arid.) And this or that realization is followed up by images of a latter-day Völkerwanderung as millions, billions, relocate. However, anyone of even slight intelligence should be able to see that all of this quickly adds up to there being nowhere left to go--especially as unchecked global warming will mean that the situation will keep worsening, that the sea level will, for example, keep rising, so that areas that appeared safe at one point cease to be so not long after. And it cannot be any other way given that human reliance on freshwater supplies, productive farmland and water transport means a continued collective dependence on areas particularly vulnerable to any worsening of the situation--while the losses of those areas will mean disruption going far beyond even the colossal human relocations. One would have more people living on far less of the Earth's surface, and getting along on far less of its resources, than they do now.
Meanwhile, how the world has dealt with its current refugee crisis--the worst since World War II, yet nothing next to the movements such speculation anticipates--does not inspire great confidence in the readiness of societies to accommodate the displaced on even a much smaller scale than the one imagined.
All of this reveals this idea of vast relocations--especially in a world of hundreds of nation-states with all their borders--as the utmost silliness, though in fairness, I strongly suspect that were the troubles to run unchecked the frail international system, already bedeviled by what may be the greatest war danger in human history, would likely have long since escalated to the point of a devastating conflict before it came to anything like forced mass relocation. (After all, problems like climate change, resource and economic stress, and war do not exist in separate compartments but are all complexly interlinked.)
All of this reminds us that if there is a solution to the problem it is exactly the one that misanthropic, technology- and progress-hating Malthusian-Luddite postmodernists completely reject, namely organization and technology to meet the crisis, with this not a matter of austerity-battered working people displaying great "convenient social virtue" in cheerfully deciding to individually live on less, but large-scale action to accelerate the "energy transition" and decarbonize transport and industry, hack the climate (they can whine all they like about cost and risk--the environmental movement's failures have left little choice but to bet on this route in some form), and minimize whatever damage is actually unavoidable (from slowing the melting of ice sheets to adapting coastal cities to higher sea levels, rather than some individual flight into an ever-shrinking and ever-poorer interior). Some of the technology we need to do all this is available; some exists in only the most nascent forms, and will have to be developed to a point of practical usefulness. Climate "inactivists" will look at acknowledgment of the latter fact and sneer at it as "unrealistic." (Sneering and calling things "unrealistic" are pretty much all that inactivists have in their intellectual arsenal of anti-democracy.) But in contrast with the fantasies of uprooting a planet's people and their life, the idea that we can and should support the research and development of practical palliatives is the most pragmatic course--even when this means such exotica as cellular agriculture, or mega-engineering to slow the melting of glaciers.
Sunday, June 25, 2023
Deindustrialization in France
In his writing on France's contemporary troubles (and those of the advanced industrial world more broadly) Emmanuel Todd has had a good deal to say about deindustrialization.
Taking an interest in the issue I looked over the United Nations' time series on manufacturing "value added" (i.e. net output).
If when considering the matter we see Germany and Japan as at one end of the spectrum (less obviously dynamic than they used to be, but still colossal producers), and Britain and Canada at the other (with their manufacturing output sharply contracted in recent decades), it seems that France is very much in the latter situation--with Todd rightly understanding this as an important factor in the country's well-known discontents.
David Graeber's "The Bully's Pulpit": Some Reflections
Some years ago David Graeber published a remarkable essay in The Baffler, "The Bully's Pulpit," in which, as an anthropologist, he examined the matter of bullying. In doing so Graeber himself admits that "[t]his is difficult stuff," and tells us that he does not "claim to understand it completely." Still, one of the essay's virtues is a fairly clear conception of what bullying involves--what separates it from other sorts of conflict or aggression, which might be reduced to three interrelated aspects:
1. A significant disparity in power between the bully and their victim. (People who are, in the ways that matter, equals, and know it, cannot be said to bully each other.)
2. The complicity of Authority in the bully's behavior--whether by "looking the other way," or tacitly approving their conduct.
3. The inability of the victim to respond to the bully through means which are both societally approved and effective--and the victim indeed condemned no matter what they do. The victim is unable to flee; and cannot respond to the bully in kind because of the disparity in power; and so is reduced to either ignoring the bullying, resisting ineffectively, or "fighting unfairly." If they ignore the bully (apt to be a painful and humiliating course) the bully escalates their abuse to the point at which they cannot ignore it; if they resist ineffectively they demonstrate that they are weak, and are held in contempt for being weak; and if they resort to something unconventional they are held in contempt for that, too, and likely to be punished as everyone rallies around the bully.
Sanctimoniousness is thus a hallmark of such situations.
In Graeber's analysis the third aspect, the victim's reaction, and the sanctimoniousness toward it, is the point, "[b]ullying creat[ing] a moral drama in which the manner of the victim’s reaction to an act of aggression can be used as retrospective justification for the original act of aggression itself." Putting it another way, central to bullying is propagandizing for the view that the oppressed deserve to be so. (Thus do they abuse someone past the limits of their endurance, and then when they lash out, say "Evil, evil, evil! That's why we have to keep their kind down.")
It is absolutely vile, and I might add, vile in a particular way. While Graeber remarked, some time after this piece was published, that he had not read Veblen, it seems to me that such ritual is yet another reminder of the endurance of what Thorstein Veblen identified as barbarism into our times--with the pervasiveness and severity of such ritual, and the tolerance and justification of it, very telling of how much such barbarism lingers in a particular society.
Fifth Generation Computing: A Reappraisal
Back in April 2022 I published here a brief item about Japan's generally unremembered Fifth Generation Computer Systems Initiative from the standpoint of that initiative's fortieth anniversary (which fell in that very month).
Much hyped at the time, it was supposed to deliver the kind of artificial intelligence toward which, four decades on, we still generally felt ourselves to be straining.
Writing that item my principal thought was for the overblown expectations people had of the program. However, in the wake of more recent work on Large Language Models, like OpenAI's GPT-4, it seems that something of what the fifth-generation computing program's proponents anticipated is at the least starting to become a reality.
It also seems notable that even if fourth-generation computing has not been replaced by fundamentally new hardware, or even seen the material substrate of the same fourth-generation design shift from silicon to another material (like the long hoped-for carbon nanotube), we have seen a different chip concept--employed in a specialty capacity rather than as a replacement for fourth-generation computing--play a key role in progress in this field: "AI" (Artificial Intelligence) chips. Indeed, just as anticipated by those who had watched the fifth-generation computing program's development, parallel processing has been critical to the design of these chips for "pattern recognition," and the acceleration of the programs' training.
In the wake of all that, rather than regarding fifth-generation computing as a historical curiosity one may see grounds for it simply having been ahead of its time--and deserving of more respect than it has had to date. Indeed, it may well be that somewhere in the generally overlooked body of research produced in the course of its development there are insights that could power our continued progress in this field.
Of Left and Right Transhumanism, Again
About a decade ago I had occasion to remark that while transhumanist thought may be said to have been more evident on the left than the right a century ago, the existence of left transhumanism scarcely seems a memory where the mainstream is concerned today. This is arguably because of how completely the left has been cut out of the conversation--the bounds of "civil" discourse set by the centrism predominant at mid-century locking it out, and nothing changing that ever since (the left, indeed, coming to be more thoroughly excluded as the "center," deeply conservative from the start, only shifted rightward). However, there also seems little sign of interest in the matter these days on the left.
Why is that?
It may be that in a period like the 1920s, when many on the left felt confident that the future belonged to them, there appeared room for such visions--such that Leon Trotsky, for example, could be seen waxing transhumanist in a seemingly unlikely place for them, the closing pages of his book Literature and Revolution. By contrast, after decades of catastrophic, bitter political defeat and disappointment (without which the aforementioned marginalization of the left to the degree seen today would not have been possible), the feeling can only be very different--with other issues far more pressing, leaving little time or energy for such concerns. It may even be that, so long accused of being "visionary," with anti-Communist cliché endlessly sneering at the left's claims to rationality and mocking it as a quasi-religion, those on the left hesitate to give their enemies any more ammunition by taking up such concerns as transhumanism.
Of course, were the issue of transhumanism to become more pressing this would change. Some claim that it already has in the wake of recent developments in the field of artificial intelligence, with one team of Microsoft researchers, who strike this author as having been rigorous in their assessment and cautious in their analysis of the new GPT-4 (certainly by the standard of a discourse which seems to think "What sorcery is this?" an intelligent response), suggesting that Artificial General Intelligence (AGI), in however "early" and "incomplete" a form, may already be a reality. Considering their assessment I am inclined to emphasize the "early" and "incomplete" in their judgment--and to suggest that the developments and additions needed to actually turn what may be early and incomplete AGI into an undeniably complete and mature system (like the system's being equipped with elaborate planning-oriented and self-critical "slow thinking" as well as the "fast thinking" faculties that have so impressed observers, and its being modified so as to learn continually and retain what it learns via a long-term memory facility) may be a long way off. All the same, the extent to which people of ideological tendencies which have not felt the need to address these matters find themselves having to do so may be telling with regard to just how much these new developments really matter.
Will AI Puncture the College Degree Bubble?
Some time ago I wrote about the possibility that college degrees had become the object of an asset bubble-like pursuit, with the price of the "asset" increasingly detached from the value of the good amid increasingly irrational buying--facilitated, as bubbles often are, by burgeoning credit (the more in as student loan generators securitize the debt, providing other money-making opportunities).
The bubble has "inflated" to far greater proportions than onlookers had imagined possible--but it seems that amid harder times and much disappointment, this may be coming to an end as young people become more hesitant about the time and money college requires, encouraged to some extent by the changing tone of the press. Ever inclined to the promotion of aspirationalism rather than egalitarianism, its cry of "STEM! STEM! STEM!" has given way to its suggesting young people consider becoming electricians.
Just as much as was the case with the earlier counsel of "Go to college, young man" the "Go to trade school, young man" advice can reflect any number of agendas (like cover for financial austerity that would make college less accessible), but part of it may also be the increasing acknowledgment of the simple-mindedness of the "Send everyone to college" mentality--the more in as so many now fear that when "AI" comes for people's jobs it will be the knowledge workers who lose theirs first, as, far from truckers having to learn to code, out-of-work coders start thinking about getting trucker's licenses.
"Are We Seeing the Beginning of the End of College as We Know It?": A Follow-Up
Some time ago I speculated about the "end of college as we know it."
What I had in mind was the way that pursuit of a degree--and especially adherence to the "Cult of the Good School"--had become crushing relative to the return to be hoped for from it (to the point that the use of the word "bubble" has some justification). This seems all the more the case given the evidence that young people have become more skeptical about the economic value of a college degree--in part because of the disappointing experience of their elders in an economy ever more remote from the post-war boom and its hazy notions of generalized "middle classness," premised on the piece of paper you get on graduation being a ticket exchangeable for the life of an "organization man" (and on the encouragement of "everyone" to go to college by centrist "liberals" more comfortable preaching the individualistic aspiration to be something other than working class than doing things that actually help the working class--as seen in the matter of who pays the bill).
In recent months "the conversation" has seen two significant, interrelated alterations. One is that, while we have not seen progress in the performance of artificial intelligence in tasks involving "perception and manipulation" such as would suggest an imminent revolution in the automation of manual tasks, progress in "Large Language Models" has called into question the livelihoods of the "knowledge workers" that college trains people to become. (Indeed, one thoughtful study of GPT-4 by a team of Microsoft scientists contended that "artificial general intelligence," in however primitive and incomplete a form, may have already arrived.) The other is a bit of cooling off of the chatter about there not being enough STEM graduates as instead commentators wring their hands over a shortage of personnel in the "skilled trades"--as the chatbots put "the coders" out of work, but still leave us needing electricians.
One can only wonder if those assumptions will prove any more enduring than those which preceded them--if the sense of imminent massive job displacement among knowledge workers will not prove to be hype, if the claims of a skilled trade worker shortage will not prove as flimsy and self-serving as the eternal claims of a STEM worker shortage. (Employers always say there is a shortage of workers--which is to say they always think wages are too high--and the media, being what it is, faithfully and uncritically reports their views, and no others, while much of the commentary has a nasty streak of intergenerational warfare about it, with grouchy old farts snarling that young people are too lazy or afraid of getting their hands dirty for "real work.")
Nevertheless, if there is any truth to all this--and even if there is not, but people act as if there is--there will be consequences for how we educate, credential, hire and work. However, we should remember that, contrary to the rush of the advocates of the powerful to insist that "There Is No Alternative" as they inflict pain on the powerless, facing those consequences there will be social choices--necessitating a critical attitude from the public toward those claqueurs who instead simply applaud.
Thursday, April 27, 2023
Bill Gates Predicts AI Will Be Teaching Literacy in Eighteen Months. What Are We To Make of That?
Last year, when talk of a teacher shortage was topical, I took up the question of teaching's possibly being automated in the near term. Considering the matter it seemed to me notable that Carl Benedikt Frey and Michael Osborne's Future of Work study, which I thought overly bullish on the prospect of automation as a whole, rated teaching as one of those jobs least likely to be automated within the coming decades. Indeed, given their evaluation of the automatability of various tasks, far from seeing computers replace teachers, I pictured a scenario in which the disappearance of a great many other "knowledge worker" jobs worked by the college-educated had more people turning to teaching to make a living.
Of course, the months since I wrote that piece have been eventful from the standpoint of developments in artificial intelligence research. The excitement over progress in chatbots specifically surged with the releases of the latest iterations of OpenAI's GPT--experiments with which, in fact, convinced the authors of one notable study that "artificial general intelligence" is no longer an object of speculation, but, if only in primitive and incomplete form, a reality.
Now the cofounder and former CEO, president and chairman of the very company whose scientists produced that study tells us that within eighteen months AI will be teaching children literacy, on its way to becoming as competent as any human tutor.
Reading that statement I wondered whether it was worth remarking.
As a commentator on public affairs I have consistently found "Bill" Gates to be fairly banal--his views pretty much the standard "Davos Man" line whether the matter is poverty, intellectual property, or, as in this case, education--with Gates, one might add, far from being the most articulate, rigorous or interesting champion for his ideas. However, even if one is not impressed with his claims, or his arguments for them, the fact remains that in this culture where billionaires are so often treated as All-Knowing, All-Seeing Oracles by the courtiers of the media, and by those who heed such oracles unquestioningly, even Gates’ most unconsidered statements are accorded extreme respect by many, while Gates’ very conventionality means that what he says is apt to be what a great many others are already thinking--in this case, that the technology will be doing this before today's toddlers are in kindergarten. Moreover, even if they are wrong about that (Gates has been extremely bullish on the technology for some time now, rather more convinced than I am of its epoch-making nature), what he is saying and those others are thinking is apt to be what a great many will be acting on, or trying to act on, especially given the matter at hand. For many a person who sees human educators as an expensive annoyance, even the pretext of an AI capable of fractionally replacing them could be a powerful weapon--such that, as the fights over math, reading, history and all the rest rage across the country's school districts, I increasingly expect to see the issue of automation enter the fight.
Wednesday, April 26, 2023
Is Climate Denial Spreading? If So, Why?
A recent poll regarding the public’s attitude toward climate change funded by the Energy Policy Institute at the University of Chicago (EPIC) and conducted by the university's National Opinion Research Center (NORC) and the Associated Press (full results here) is getting some mention in the more general press.
According to the poll the percentage of the public that believes climate change is a reality does not seem to have budged much--at 74 percent, within the familiar range of the past several years. What seems more significant is that the percentage who think climate change is primarily human-caused has dropped--from 60 percent in 2018 to 49 percent in 2023. This seems mostly a matter of the growth of those who think natural and environmental factors contribute as much as human factors to the phenomenon, a share which has jumped from 28 to 37 percent over the same time frame, rather than any drastic growth in the number of those who think it is mostly or entirely a natural phenomenon, which has risen comparatively slightly.
Still, if the change does not seem very extreme (one still has 86 percent at least believing that human contributions matter) it is not the direction in which those concerned about anthropogenic climate change would have hoped to see things moving--which, of course, is to see the number of those who recognize the reality of climate change as an essentially anthropogenic phenomenon growing, widening the support for action on the problem. Indeed, from the perspective of those concerned with the issue, and given the extremely successful opposition to any meaningful action on it, any erosion is troubling. The shift from believing climate change is primarily human-caused to believing it is equally a matter of natural causes is especially so because of what it may portend--a transition from the view of climate change as human-caused toward the view that human activity has nothing much to do with it at all.
Thus far I have not seen much interest taken by commentators in why this change may have occurred, important as that is to understanding its implications. However, I can think of at least three factors that may be of some significance here:
1. Less Mainstream Press Attention.
I have had the impression--unscientific, but all the same, strong and consistent--that amid pandemic, inflation and war climate change has got less press than before in the mainstream media, leaving people somewhat less conscious of the issue than before, and of the scientific consensus that climate change is an anthropogenic phenomenon. At the same time I have noticed no evidence that those pushing the opposite view have slackened in their efforts to persuade the public that climate change is nonexistent, or at least not caused by human activity. The result may be that there is less contestation of the climate denialist view than before, and that this is having its effect on public opinion. It is easier to picture this being the case because
2. The Country's Politics Are Shifting.
It is a commonplace these days that the country is becoming more "polarized" between right and left. I am not so sure this is a really useful way to think about the situation--in part because if there is indeed a left turn on the part of any significant portion of the population (a claim open to question given the ambivalence of the evidence) it is far from making itself felt in the country’s political life as an actual force. By contrast those who have moved further right have done exactly that. (Consider, for instance, how much better Donald Trump fared in his presidential primary than Bernie Sanders did in his, or the weight the Freedom Caucus has within the Republican Party as against that of the Democratic Socialists of America within the Democratic Party.) Attitudes toward the environment have been no exception here--and it is easy enough to picture those who have shifted rightwards as less willing to acknowledge anthropogenic climate change than before.
3. What "Human-Caused" Climate Change Means May Be Less Clear Than You Think.
It is a truism that polling reflects not just popular feeling on an issue, but the way in which the public was asked about it--which can be tailored to elicit the answer the poller desires, or, should the poller be insensitive to the nuances of their own words, produce a misleading result they did not desire. Where this is concerned consider what it means for humans to be causing climate change. Specifically consider how many of those shaping the discourse on the subject have gone to great lengths to make people think of the human impact on the climate as a matter of individual "lifestyle" choices by everyday people--their diet, their choice of appliances, etc.--rather than collective behavior as manifest in large organizations ultimately directed by a powerful few--for instance, the investments of energy and utility companies, or the decisions of major governments. (Indeed, the EPIC poll itself is saturated with such thinking, particularly noticeable in its barraging the surveyed with questions about their personal consumption habits.)
Dumping the responsibility for the climate crisis on hard-pressed individuals who make their consumption choices from a range of options very limited by their means (a practice many have long called out as unwise and unjust--an extreme inversion of Uncle Ben's teaching, putting all of the responsibility on those who have none of the power) plausibly elicits a refusal of that responsibility from many. No, they say, I am not the cause of a crisis--which inclines them that much more toward the view that there is no crisis of humanity's making generally, or even any crisis at all. Which, of course, is exactly the intended result of this "individualization" of the problem in the view of those critical of "climate inactivists" (who note, for example, that the individualistic vision of personal carbon footprint management came not from Greenpeace but from BP).
If one accepts this reading of the situation at all then there seem to be three obvious "takeaways," none new to anyone who has been paying much attention, but worth repeating because they simply do not seem to sink in with a great many persons who really need to understand them:
1. The mainstream media so often held up as "our saviors" in a world of "fake news" and other such threats has often been anything but. (After all, it is the mainstream media that consecrated climate denialism as an intellectually respectable position in the first place--and left deeply flawed understandings of the possibility for response as the sole alternative--because of the political biases shaping its framing of the issues.)
2. The environment cannot be treated as conveniently disconnected from other issues the way some prefer to think. Quite the contrary, as people who pride themselves on alertness to the functioning of ecosystems should be aware, everything is connected, and how they think about other things will affect how they think about this thing.
3. Where those connections are concerned one especially cannot ignore the issue of wealth, power and justice when addressing problems like climate change, and the environment generally, a lesson too many environmentalists have forgotten too many times in the past.
Tuesday, April 25, 2023
Just What is an AI Chip Anyway?
These days the discussion of advances in artificial intelligence seems to emphasize the neural networks that, through training on vast amounts of web data, learn to recognize patterns and act on them--as with the "word prediction"-based GPT-4 that has been making headlines everywhere this past month (such that news outlets which do not ordinarily give such matters much heed are writing about them profusely). By contrast we hear less of the hardware on which the neural networks are run--but all the same you have probably heard the term "AI chip" being bandied about. If you looked it up you probably also found it hard to get a straightforward explanation as to how AI chips are different from regular--"general-purpose"--computing chips in their functioning, and why this matters.
There is some reason for that. The material is undeniably technical, with many an important concept of little apparent meaning without reference to another concept. (It is a lot easier to appreciate "parallel processing" if one knows about "sequential processing," for example.) Still, for all that, getting a grasp of the basics is not so hard as one may think.
Basically general-purpose chips are intended to be usable for pretty much anything and everything computers do. However, AI chips are designed to perform as many of the specific calculations needed by AI systems--which is to say, the calculations used in the training of neural nets on data, and the application of that training (the term for which is "inference")--as possible, even at the expense of their ability to perform the wider variety of tasks to which general-purpose computers are put.
Putting it crudely this comes to sacrificing "quality" to "quantity" where calculations are concerned--the chip doing many, many more "imprecise" calculations in a given amount of time, because qualitatively those less precise calculations are "good enough" for an object like pattern recognition, and the premium on getting as many calculations done as quickly as possible is high. (Pattern recognition is very calculation-intensive, so that it can be better to have more rough calculations than fewer precise ones.) Admittedly this still sounds a bit abstract, but it has a clear, concrete basis in aspects of AI chip design presented below, namely:
1. Optimization for Low Precision Calculations. (Think Lower-Bit Execution Units on a Logic Chip--But More of Them.)
It is fairly basic computer science knowledge that computers perform their calculations using strings of "bits"--the 0s and 1s of binary code--with increasingly advanced computers using longer and longer strings enabling more precise calculation. For instance, we may speak of 8-bit calculations involving strings of 8 1s and 0s (allowing for, at 2 to the power of 8, a mere 256 values) as against 16-bit calculations using strings of 16 such 1s and 0s (which means at 2 to the power of 16, 256 times 256, or 65,536, values).
However, it may be the case that even when we could have a 16-bit calculation, for particular purposes the 8-bit calculations are adequate, especially if we go about making those calculations the right way (e.g. do a good job of rounding the numbers). It just so happens that neural net training and inference is one area where this works, where the values may be known to fall in a limited range, the task coming back as it does to pattern recognition. After all, the pattern the algorithm is supposed to look for is either there or not--as with some image it is supposed to recognize.
Why does this matter? The answer is that you could, on a given "logic" chip (the kind we use for processing, not memory storage), get a lot more 8-bit calculations done than 16-bit calculations. An 8-bit execution unit, for example, uses just one-sixth the chip space--and energy--that a 16-bit execution unit does. The result is that opting for the 8-bit unit when given a choice between the two means many more execution units can be put on a given chip, and thus that many more 8-bit calculations can be done at once (as against one 16-bit unit doing 16-bit calculations). Given that pattern-recognition can be a very calculation-intensive task, the trade-off of precision of calculations against quantity of calculations can be well worth the while.
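To make the trade-off a little more concrete, here is a minimal sketch in Python with NumPy--my own illustration, not anything drawn from the chip literature--of what trading precision for quantity looks like at the level of a single operation: the same dot product (the basic calculation inside a neural network layer) computed in 32-bit floating point, and again with the inputs and weights crudely rounded down to 8-bit integers.

```python
import numpy as np

# A toy "neuron": a dot product of inputs and weights, the operation
# performed over and over in neural net training and inference.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1024).astype(np.float32)   # inputs
w = rng.uniform(-1.0, 1.0, size=1024).astype(np.float32)   # weights

# Full-precision (32-bit floating point) result.
full = float(np.dot(x, w))

# Crude 8-bit quantization: map the range [-1, 1] onto the 256 values
# an 8-bit integer can hold, do the arithmetic on integers, then scale
# the result back. Real quantization schemes are more careful, but the
# idea is the same.
scale = 127.0
xq = np.round(x * scale).astype(np.int8)
wq = np.round(w * scale).astype(np.int8)
# Accumulate in a wider integer type so the running sum cannot overflow.
quant = int(np.dot(xq.astype(np.int32), wq.astype(np.int32))) / (scale * scale)

print(f"32-bit result: {full:+.4f}")
print(f" 8-bit result: {quant:+.4f}")
print(f"   difference: {abs(full - quant):.4f}")
```

The two answers differ only slightly relative to the size of the result--close enough for a "pattern there or not" judgment--which is the sense in which many small, imprecise execution units can be worth more than a few precise ones.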
2. "Model-Level Parallelism." (Chop Up the Task So Those Lower-Bit But More Numerous Execution Units Can Work Simultaneously--in Parallel--to Get it Done Faster.)
In general-purpose computers logic chips are designed for sequential processing--the execution unit does one calculation by itself all the way through. However, computers can alternatively utilize parallel processing, which splits a task into "batches" which can be performed all at once by different execution units on a chip, or different chips within a bigger system--the calculation split up among the units, which do their parts of the calculations, with the results being added up. This permits a given piece of processing to be done more quickly.
That being the case you might wonder why we do not use parallel processing for all computing tasks. The reason is that parallel processing means more complexity and higher costs all around--more processors, and more of everything required to keep them running properly (energy, etc.). Additionally, not every problem lends itself well to this kind of task division. Parallelism works best when you can chop up one big task into a lot of small, highly repetitive tasks performed over and over again--in computer jargon, when the task is "fine-grained" with numerous "iterations"--until some condition is met, like performing that task a pre-set number of times, or triggering some response. It works less well when the task is less divisible or repetitive. (Indeed, the time taken to split up and distribute the batches of the task among the various processors may end up making such a process slower than if it were done sequentially on one processor.)
As it happens, the kind of neural network operations with which AI research is concerned are exactly the kind of situation where parallel processing pays off because the operations they involve tend to be "identical and independent of the results of other computations." Consider, for example, how this can be done when a neural network is asked to recognize an image--different execution units responsible for examining different regions, or parts, of an image all at once--until the overall neural network, "adding up" the results of the calculations in those individual units, recognizes the image as what it is or is not supposed to look for.
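As a rough software-level illustration of that divide-score-combine shape--again my own sketch, not a description of how any actual AI chip schedules its work--the snippet below splits an "image" into independent strips, has a pool of worker processes check each strip for a bright patch at the same time, and then combines the per-strip results into a single verdict.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def strip_score(strip: np.ndarray) -> float:
    """Stand-in for the per-region work one execution unit would do:
    here, simply the strongest 'bright spot' response in the strip."""
    return float(strip.max())

def detect(image: np.ndarray, n_workers: int = 4, threshold: float = 0.5) -> bool:
    # Chop the image into independent horizontal strips ("batches").
    strips = np.array_split(image, n_workers, axis=0)
    # Score every strip at once -- the parallel step.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        scores = list(pool.map(strip_score, strips))
    # "Add up" the partial results into one overall decision.
    return max(scores) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image = rng.uniform(0.0, 0.3, size=(256, 256))  # dim background
    image[190:210, 40:60] = 1.0                     # plant a bright patch
    print(detect(image))                            # True: one strip finds it
```

On a toy array like this the overhead of farming the strips out to separate processes swamps any gain--the "splitting the task costs time" caveat above--and the payoff only appears when the per-strip work is heavy, repetitive and independent, which is exactly the character of neural net training and inference.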
3. Memory Optimization. (Given All the Space Savings, and the Predictability of the Task, You Might Even Be Able to Put the Memory on the Same Chip Doing the Processing, Speeding Up the Work Yet Again.)
As previously noted, in general-purpose computing there is a separation between logic chips and memory chips, which means the logic chips have to access memory "off-chip" as they process data because, given the premium on the chip's flexibility, it is not clear in advance just what data the processor will have to access to perform its task.
As it happens the mechanics of accessing data off-chip constitute a significant drag on a processor's performance. It takes more time, and energy, to access off-chip data like this than to actually process that data, with all that means performance-wise--the more in as processing speed has improved more rapidly than the speed of memory access.
However, if one knows in advance what data a particular process will need, the memory storage can be located closer to the processor, shortening the distance and saving energy. In fact, especially given the processing and space savings such as those lower-bit execution units afford, the prospect exists of getting around the processing-memory "bottleneck" by putting the processing and the memory it needs to use together on the very same chip. Moreover, while chips can be designed for particular operations from the outset (a type known as "Application-Specific Integrated Circuits," or ASICs), chips can also be designed so that even after fabrication suitable programming can reconfigure their circuitry to arrange them in the way that would let them most efficiently run some operations developed afterward (a type called Field Programmable Gate Arrays, or FPGAs). The result is, again, an improvement in speed and efficiency that is heavily used in AI chips to help maximize the capacity for low-precision calculation at the heart of their usage.
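The on-chip-versus-off-chip distinction itself cannot be reproduced from ordinary software, but a faint echo of it can--namely, how much slower it is to fetch data from "far away" (here, from unpredictable locations that defeat the processor's fast nearby caches) than from wherever the hardware has already staged it. The sketch below, again my own rough illustration rather than anything from the AI chip literature, sums the same array twice, once visiting its elements in the order they sit in memory and once in a shuffled order; the exact ratio will vary from machine to machine, but the direction of the difference is the point.

```python
import time
import numpy as np

n = 10_000_000
data = np.arange(n, dtype=np.float64)

in_order = np.arange(n)                              # visit elements in memory order
shuffled = np.random.default_rng(2).permutation(n)   # visit them in a scattered order

def timed_sum(indices: np.ndarray) -> tuple[float, float]:
    """Gather the array in the given order, sum it, and return (total, seconds)."""
    start = time.perf_counter()
    total = float(data[indices].sum())
    return total, time.perf_counter() - start

ordered_total, ordered_time = timed_sum(in_order)
scattered_total, scattered_time = timed_sum(shuffled)

print(f"in-order access:  {ordered_time:.3f}s")
print(f"scattered access: {scattered_time:.3f}s")
print(f"same total either way: {ordered_total == scattered_total}")
```

An AI chip attacks the real version of this problem in hardware, by placing the data its known, predictable workload will need right next to the units doing the arithmetic.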
To sum up: the value of AI chips lies in their use of more but lower-bit execution units organized for parallel processing on chips physically arranged to reduce or eliminate the time and energy costs of memory access so as to maximize their efficiency at low-precision calculations in a way that by no means works for everything, but works well for neural net training and use.
Of course, knowing all that may leave us wondering just how much difference it has all actually made in real-life computing. As it happens, for all the hype about how many hundreds and hundreds of billions of dollars the market for AI chips will expand to in 2030 or somesuch date, in the real-life year of 2021 they were an $11 billion market. That sounds like a lot--until one remembers that the overall chip market is over $550 billion, making the AI chip market just 2 percent of the total. Yes, just 2 percent--which is a reminder that, even if it can look from perusing the "tech" news as if these chips are everywhere, where everyday life is concerned we are still relying on fourth-generation computing--while, again, the AI chips we have, being inferior for general computing use, largely used for research and probably not about to displace the general-purpose kind in your general-purpose gadgets anytime soon.
Still, as one study of the subject of AI chips from Georgetown University's Center for Security and Emerging Technology reports, in the training and inference of neural networks such chips afford a gain of one to three orders of magnitude in the speed and efficiency as against general-purpose chips. Putting it another way, being able to use AI chips for this work, rather than just using the general-purpose kind, by letting computer scientists train neural nets tens, hundreds or even thousands of times faster than we otherwise would, may have advanced the state-of-the-art in this field by decades, bringing us to the present moment when even experts look at our creations and wonder if "artificial general intelligence" has not already arrived.
There is some reason for that. The material is undeniably technical, with many an important concept having little apparent meaning without reference to another. (It is a lot easier to appreciate "parallel processing" if one knows about "sequential processing," for example.) Still, for all that, getting some grasp of the basics is not as hard as one may think.
Basically general-purpose chips are intended to be usable for pretty much anything and everything computers do. However, AI chips are designed to perform as many of the specific calculations needed by AI systems--which is to say, the calculations used in the training of neural nets on data, and the application of that training (the term for which is "inference")--as possible, even at the expense of their ability to perform the wider variety of tasks to which general-purpose computers are put.
Putting it crudely, this comes to sacrificing "quality" for "quantity" where calculations are concerned--the chip doing many, many more "imprecise" calculations in a given amount of time, because those less precise calculations are qualitatively "good enough" for a task like pattern recognition, and the premium on getting as many calculations done as quickly as possible is high. (Pattern recognition is very calculation-intensive, so that it can be better to have more rough calculations than fewer precise ones.) Admittedly this still sounds a bit abstract, but it has a clear, concrete basis in the aspects of AI chip design presented below, namely:
1. Optimization for Low Precision Calculations. (Think Lower-Bit Execution Units on a Logic Chip--But More of Them.)
It is fairly basic computer science knowledge that computers perform their calculations using strings of "bits"--the 0s and 1s of binary code--with increasingly advanced computers using longer and longer strings enabling more precise calculation. For instance, we may speak of 8-bit calculations involving strings of 8 1s and 0s (allowing for, at 2 to the power of 8, a mere 256 values) as against 16-bit calculations using strings of 16 such 1s and 0s (which means at 2 to the power of 16, 256 times 256, or 65,536, values).
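For anyone who prefers to see that arithmetic spelled out rather than take it on faith, a trivial Python sketch (offered purely as illustration) makes the point:

# How many distinct values a string of n bits can represent.
for n_bits in (8, 16):
    print(n_bits, "bits:", 2 ** n_bits, "distinct values")
# Prints: 8 bits: 256 distinct values
#         16 bits: 65536 distinct values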
However, it may be that even where a 16-bit calculation is available, for particular purposes 8-bit calculations are adequate, especially if we go about making those calculations the right way (e.g. do a good job of rounding the numbers). It just so happens that neural net training and inference are one area where this works, because the values can be known to fall within a limited range, the task coming back as it does to pattern recognition. After all, the pattern the algorithm is supposed to look for is either there or not--as with some image it is supposed to recognize.
Why does this matter? The answer is that on a given "logic" chip (the kind used for processing, not memory storage), you can get a lot more 8-bit calculations done than 16-bit ones. An 8-bit execution unit, for example, uses just one-sixth the chip space--and energy--that a 16-bit execution unit does. The result is that opting for 8-bit units means many more execution units can be put on a given chip, and that many more 8-bit calculations can be done at once (as against one 16-bit unit doing 16-bit calculations). Given that pattern recognition can be a very calculation-intensive task, trading precision of calculations for quantity of calculations can be well worth the while.
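To make that trade-off a bit more concrete, consider the following minimal Python sketch (using NumPy, with a made-up set of "weights"--this illustrates the general idea of 8-bit quantization, not any particular chip's or library's method). Values known to lie between -1 and 1 are rounded onto the 256-level grid an 8-bit integer offers, and the worst-case rounding error turns out to be tiny relative to the values themselves--"good enough" for pattern recognition purposes:

import numpy as np

# Hypothetical neural-net weights, known in advance to fall in a limited range.
rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=1000).astype(np.float32)

# Map the known range [-1, 1] onto the levels an 8-bit integer offers.
scale = 127.0  # -1.0 maps to -127, +1.0 maps to +127
quantized = np.clip(np.round(weights * scale), -127, 127).astype(np.int8)

# "Dequantize" to see how much precision was actually given up.
recovered = quantized.astype(np.float32) / scale
print("worst-case rounding error:", float(np.max(np.abs(weights - recovered))))
# Roughly 0.004--a small price for calculations that take far less space and energy.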
2. "Model-Level Parallelism." (Chop Up the Task So Those Lower-Bit But More Numerous Execution Units Can Work Simultaneously--in Parallel--to Get it Done Faster.)
In general-purpose computers, logic chips are designed for sequential processing--an execution unit does one calculation by itself all the way through. However, computers can alternatively utilize parallel processing, which splits a task into "batches" that can be performed all at once by different execution units on a chip, or by different chips within a bigger system--the calculation divided up among the units, each of which does its part, with the results then added up. This permits a given piece of processing to be done more quickly.
That being the case you might wonder why we do not use parallel processing for all computing tasks. The reason is that parallel processing means more complexity and higher costs all around--more processors, and more of everything required to keep them running properly (energy, etc.). Additionally, not every problem lends itself well to this kind of task division. Parallelism works best when you can chop up one big task into a lot of small, highly repetitive tasks performed over and over again--in computer jargon, when the task is "fine-grained" with numerous "iterations"--until some condition is met, like performing that task a pre-set number of times, or triggering some response. It works less well when the task is less divisible or repetitive. (Indeed, the time taken to split up and distribute the batches of the task among the various processors may end up making such a process slower than if it were done sequentially on one processor.)
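By way of a rough software analogy (a sketch only--real parallelism happens among execution units on a chip, not Python processes), the following splits one big, repetitive task, summing a large array, into batches handled independently, with the partial results added up at the end. The array size and worker count here are arbitrary; shrink the array enough and the overhead of distributing the batches outweighs the benefit, exactly as just described:

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # One worker's batch: a small, highly repetitive calculation.
    return float(np.sum(chunk))

if __name__ == "__main__":
    data = np.arange(10_000_000, dtype=np.float64)

    # Sequential: one unit works through the whole task by itself.
    sequential = float(np.sum(data))

    # Parallel: split into batches, process independently, add up the results.
    chunks = np.array_split(data, 8)
    with ProcessPoolExecutor(max_workers=8) as pool:
        parallel = sum(pool.map(partial_sum, chunks))

    print(sequential == parallel)  # True--same answer, potentially much faster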
As it happens, the kind of neural network operations with which AI research is concerned are exactly the kind of situation where parallel processing pays off, because the operations they involve tend to be "identical and independent of the results of other computations." Consider, for example, what happens when a neural network is asked to recognize an image--different execution units examine different regions, or parts, of the image all at once--until the overall neural network, "adding up" the results of the calculations in those individual units, decides whether the image is or is not what it is supposed to be looking for.
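A toy sketch of that division of labor (again, Python and NumPy standing in for hardware, with a crude placeholder in place of a real neural network) might look like this--a stand-in "image" is split into regions that could each be scored by a different execution unit at the same time, after which the per-region results are combined into one overall decision:

import numpy as np

def region_score(region):
    # Placeholder for the work one execution unit would do on its region--
    # here just an average "brightness," not a real neural-net calculation.
    return float(region.mean())

def recognize(image, threshold=0.5, grid=4):
    # Split the image into grid x grid regions that can be examined
    # independently--and therefore in parallel.
    rows = np.array_split(image, grid, axis=0)
    regions = [r for row in rows for r in np.array_split(row, grid, axis=1)]
    # "Add up" the independent results into a single overall answer.
    overall = sum(region_score(r) for r in regions) / len(regions)
    return overall > threshold

image = np.random.default_rng(1).random((64, 64))  # stand-in for a real image
print("pattern detected:", recognize(image))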
3. Memory Optimization. (Given All the Space Savings, and the Predictability of the Task, You Might Even Be Able to Put the Memory on the Same Chip Doing the Processing, Speeding Up the Work Yet Again.)
As previously noted, in general-purpose computing there is a separation between logic chips and memory chips, which means the logic chips have to access memory "off-chip" as they process data--because, given the premium on the chip's flexibility, it is not clear in advance just what data the processor will have to access to perform its task.
As it happens, the mechanics of accessing data off-chip constitute a significant drag on a processor's performance. It takes more time, and energy, to access off-chip data this way than to actually process that data, with all that means performance-wise--the more so as processing speed has improved more rapidly than the speed of memory access.
However, if one knows in advance what data a particular process will need, the memory storage can be located closer to the processor, shortening the distance and saving time and energy. In fact, especially given the space savings that lower-bit execution units afford, the prospect exists of getting around the processing-memory "bottleneck" by putting the processing and the memory it needs to use together on the very same chip. Moreover, while chips can be designed for particular operations from the outset (a type known as "Application-Specific Integrated Circuits," or ASICs), chips can also be designed so that, even after fabrication, suitable programming can reconfigure their circuitry to most efficiently run operations developed afterward (a type called "Field Programmable Gate Arrays," or FPGAs). The result is, again, an improvement in speed and efficiency that is heavily exploited in AI chips to help maximize the capacity for low-precision calculation at the heart of their usage.
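Chip-level memory placement is not something one can really demonstrate in software, but the underlying principle--knowing in advance which data an operation will need and keeping it close to the arithmetic that uses it--can at least be sketched. In the following (illustrative-only) Python example, a matrix multiplication is done in small tiles, so that each block of data, once fetched, is reused for many calculations rather than fetched piecemeal over and over, which is loosely the access pattern that on-chip memory is meant to exploit:

import numpy as np

def tiled_matmul(a, b, tile=64):
    # Multiply a @ b by working on small tiles that could fit in fast,
    # "on-chip" memory, so each fetched block is reused many times.
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    out = np.zeros((n, m), dtype=np.float64)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # Each small block is brought "close" once, then reused for a
                # whole tile's worth of multiply-accumulate operations.
                out[i:i+tile, j:j+tile] += a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
    return out

rng = np.random.default_rng(2)
a, b = rng.random((256, 256)), rng.random((256, 256))
print(np.allclose(tiled_matmul(a, b), a @ b))  # True--same result, better locality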
To sum up: the value of AI chips lies in their use of more but lower-bit execution units organized for parallel processing on chips physically arranged to reduce or eliminate the time and energy costs of memory access so as to maximize their efficiency at low-precision calculations in a way that by no means works for everything, but works well for neural net training and use.
Of course, knowing all that may leave us wondering just how much difference it has all actually made in real-life computing. As it happens, for all the hype about the hundreds and hundreds of billions of dollars the market for AI chips will supposedly expand to by 2030 or some such date, in the real-life year of 2021 they were an $11 billion market. That sounds like a lot--until one remembers that the overall chip market is over $550 billion, making the AI chip market just 2 percent of the total. Yes, just 2 percent--which is a reminder that, even if it can look from perusing the "tech" news as if these chips are everywhere, where everyday life is concerned we are still relying on fourth-generation computing--while, again, the AI chips we have, being inferior for general computing use, are largely used for research and probably not about to displace the general-purpose kind in your general-purpose gadgets anytime soon.
Still, as one study of AI chips from Georgetown University's Center for Security and Emerging Technology reports, in the training and inference of neural networks such chips afford a gain of one to three orders of magnitude in speed and efficiency as against general-purpose chips. Putting it another way, being able to use AI chips for this work rather than just the general-purpose kind--letting computer scientists train neural nets tens, hundreds or even thousands of times faster than they otherwise could--may have advanced the state of the art in this field by decades, bringing us to the present moment, when even experts look at our creations and wonder if "artificial general intelligence" has not already arrived.