Thursday, April 27, 2023

Last year, when talk of a teacher shortage was topical, I took up the question of whether teaching might be automated in the near term. Considering the matter, it seemed to me notable that Carl Benedikt Frey and Michael Osborne's Future of Work study, which I thought overly bullish on the prospects of automation as a whole, rated teaching as one of the jobs least likely to be automated within the coming decades. Indeed, given their evaluation of the automatability of various tasks, far from seeing computers replace teachers, I pictured a scenario in which the disappearance of a great many other "knowledge worker" jobs held by the college-educated had more people turning to teaching to make a living.
Of course, the months since I wrote that piece have been eventful from the standpoint of developments in artificial intelligence research. The excitement over progress in chatbots specifically surged with the releases of the latest iterations of OpenAI's GPT--experiments with which, in fact, convinced the authors of one notable study that "artificial general intelligence" is no longer an object of speculation, but, if only in primitive and incomplete form, a reality.
Now the cofounder and former CEO, president and chairman of the very company whose scientists produced that study tells us that within eighteen months AI will be on its way to becoming as competent as any human tutor.
Reading that statement I wondered whether it was worth remarking on.
As a commentator on public affairs I have consistently found Bill Gates to be fairly banal--his views pretty much the standard "Davos Man" line, whether the matter is poverty, intellectual property, or, as in this case, education--with Gates, one might add, far from being the most articulate, rigorous or interesting champion of his ideas. However, even if one is not impressed with his claims, or his arguments for them, the fact remains that in a culture where billionaires are so often treated as All-Knowing, All-Seeing Oracles by the courtiers of the media, and by those who heed them unquestioningly, even Gates' most unconsidered statements are accorded extreme respect by many, while Gates' very conventionality means that what he says is apt to be what a great many others are already thinking--in this case, that the technology will be doing this before today's toddlers are in kindergarten. Moreover, even if they are wrong about that (Gates has been extremely bullish on the technology for some time now, rather more convinced than I am of its epoch-making nature), what he is saying and those others are thinking is apt to be what a great many will be acting on, or trying to, especially given the matter at hand. There are many for whom even the pretext of AI capable of even fractionally replacing the human educators they see as an expensive annoyance could be a powerful weapon--such that as the fights over math, reading, history and all the rest rage across the country's school districts, I increasingly expect to see the issue of automation enter the fray.
Wednesday, April 26, 2023
Is Climate Denial Spreading? If So, Why?
A recent poll on the public's attitude toward climate change, funded by the Energy Policy Institute at the University of Chicago (EPIC) and conducted by the university's National Opinion Research Center (NORC) and the Associated Press (full results here), is getting some mention in the more general press.
According to the poll the percentage of the public that believes climate change is a reality does not seem to have budged much--at 74 percent, it is within the familiar range of the past several years. What seems more significant is that the percentage who think climate change is primarily human-caused has dropped--from 60 percent in 2018 to 49 percent in 2023. This seems mostly a matter of the growth of those who think natural/environmental factors contribute equally with human factors to the phenomenon, a share which has jumped from 28 to 37 percent over the same time frame, rather than any drastic growth in the number of those who think it is mostly or entirely a natural phenomenon, which has risen comparatively slightly.
Still, if the change does not seem very extreme (86 percent still believe that human contributions matter at least in part), it is not the direction in which those concerned about anthropogenic climate change would have hoped to see things moving--the hope, of course, being to see the number of those who recognize climate change as an essentially anthropogenic phenomenon grow, widening the support for action on the problem. Indeed, from the perspective of those concerned with the issue, and mindful of the extremely successful resistance of the opposition to any meaningful action on it, any erosion is troubling. The shift from believing climate change is primarily human-caused to believing it is equally of natural causes is especially troubling because of what it may portend--a transition from the view of climate change as human-caused toward the view that human activity has nothing much to do with it at all.
Thus far I have not seen much interest taken by commentators in why this change may have occurred, important as that is to understanding its implications. However, I can think of at least three factors of some significance here:
1. Less Mainstream Press Attention.
I have had the impression--unscientific, but all the same strong and consistent--that amid pandemic, inflation and war, climate change has got less press in the mainstream media than before, leaving people somewhat less conscious of the issue, and of the scientific consensus that climate change is an anthropogenic phenomenon. At the same time I have noticed no evidence that those pushing the opposite view have slackened in their efforts to persuade the public that climate change is nonexistent, or at least not caused by human activity. The result may be that there is less contestation of the climate-denialist view than before, and that this is having its effect on public opinion. It is easier to picture this being the case because
2. The Country's Politics Are Shifting.
It is a commonplace these days that the country is becoming more "polarized" between right and left. I am not so sure this is a really useful way to think about the situation--in part because if there is indeed a left turn on the part of any significant portion of the population (a claim open to question given the ambivalence of the evidence), it is far from making itself felt in the country's political life as an actual force. By contrast, those who have moved further right have done exactly that. (Consider, for instance, how much better Donald Trump fared in his presidential primary than Bernie Sanders, or the weight the Freedom Caucus has within the Republican Party as against that of the Democratic Socialists of America within the Democratic Party.) Attitudes toward the environment have been no exception here--and it is easy enough to picture those who have shifted rightwards as less willing to acknowledge anthropogenic climate change than before.
3. What "Human-Caused" Climate Change Means May Be Less Clear Than You Think.
It is a truism that polling reflects not just popular feeling on an issue, but the way in which the question was asked--which can be tailored to elicit the answer the pollster desires, or, should the pollster be insensitive to the nuances of their own words, produce a misleading result they did not desire. Where this is concerned, consider what it means for humans to be causing climate change. Specifically, consider how many of those shaping the discourse on the subject have gone to great lengths to make people think of the human impact on the climate as a matter of individual "lifestyle" choices by everyday people--their diet, their choice of appliances, etc.--rather than collective behavior as manifest in large organizations ultimately directed by a powerful few--for instance, the investments of energy and utility companies, or the decisions of major governments. (Indeed, the EPIC poll itself is saturated with such thinking, particularly noticeable in its barraging the surveyed with questions about their personal consumption habits.)
Dumping the responsibility for the climate crisis on hard-pressed individuals who make their consumption choices from a range of options sharply limited by their means (which many have long called out as unwise and unjust, an extreme inversion of Uncle Ben's teaching, putting all of the responsibility on those who have none of the power) plausibly elicits a refusal of that responsibility from many. No, they say, I am not the cause of a crisis--which inclines them that much more toward the view that there is no crisis of humanity's making generally, or even any crisis at all. Which, of course, is exactly the intended result of this "individualization" of the problem in the view of critics of the "climate inactivists" (who note, for example, that the individualistic vision of personal carbon footprint management came not from Greenpeace but from BP).
If one accepts this reading of the situation at all then there seem to be three obvious "takeaways," none new to anyone who has been paying much attention, but worth repeating because they simply do not seem to sink in with a great many persons who really need to understand them:
1. The mainstream media so often held up as "our saviors" in a world of "fake news" and other such threats has often been anything but. (After all, it is the mainstream media that consecrated climate denialism as an intellectually respectable position in the first place--and left deeply flawed understandings of the possibility for response as the sole alternative--because of the political biases shaping its framing of the issues.)
2. The environment cannot be treated as conveniently disconnected from other issues the way some prefer to think. Quite the contrary, as people who pride themselves on alertness to the functioning of ecosystems should be aware, everything is connected, and how they think about other things will affect how they think about this thing.
3. Where those connections are concerned, one especially cannot ignore the issues of wealth, power and justice when addressing problems like climate change, and the environment generally--a lesson too many environmentalists have forgotten too many times in the past.
Tuesday, April 25, 2023
Just What is an AI Chip Anyway?
These days the discussion of advances in artificial intelligence seems to emphasize the neural networks that, through training on vast amounts of web data, learn to recognize patterns and act on them--as with the "word prediction"-based GPT-4 that has been making headlines everywhere this past month (such that news outlets which do not ordinarily give such matters much heed are writing about them profusely). By contrast we hear less of the hardware on which the neural networks are run--but all the same you have probably heard the term "AI chip" being bandied about. If you looked it up you probably also found it hard to get a straightforward explanation as to how AI chips are different from regular--"general-purpose"--computing chips in their functioning, and why this matters.
There is some reason for that. The material is undeniably technical, with many an important concept having little apparent meaning without reference to another concept. (It is a lot easier to appreciate "parallel processing" if one knows about "sequential processing," for example.) Still, getting some grasp of the basics is not so hard as one may think, for all that.
Basically, general-purpose chips are intended to be usable for pretty much anything and everything computers do. AI chips, by contrast, are designed to perform as many of the specific calculations needed by AI systems--which is to say, the calculations used in the training of neural nets on data, and in the application of that training (the term for which is "inference")--as possible, even at the expense of their ability to perform the wider variety of tasks to which general-purpose computers are put.
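(To make the distinction a little less abstract, consider the toy sketch below--my own illustration in Python, with NumPy standing in for the hardware, and the tiny model and its numbers invented for the purpose rather than drawn from any actual AI system. "Training" here means fitting the weights to data; "inference" means applying the fitted weights to new input; and both reduce to great batches of multiply-add operations, which is exactly the work AI chips are built to do in bulk.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((100, 3))              # 100 "examples," 3 features each
true_w = np.array([2.0, -1.0, 0.5])   # the pattern hidden in the data
y = x @ true_w

# Training: repeatedly nudge the weights toward a better fit. The work is
# dominated by matrix products, i.e. masses of multiply-add operations.
w = np.zeros(3)
for _ in range(500):
    grad = x.T @ (x @ w - y) / len(x)  # gradient of the squared error
    w -= 0.5 * grad                    # a small corrective step

print(w.round(3))                      # ~ [ 2.  -1.   0.5]

# Inference: apply the trained weights to a new input--more multiply-adds.
new_input = np.array([1.0, 1.0, 1.0])
print(float(new_input @ w))            # ~ 1.5
```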
Putting it crudely, this comes to sacrificing "quality" for "quantity" where calculations are concerned--the chip doing many, many more "imprecise" calculations in a given amount of time, because those less precise calculations are "good enough" for an object like pattern recognition, and the premium on getting as many calculations done as quickly as possible is high. (Pattern recognition is very calculation-intensive, so it can be better to have many rough calculations than fewer precise ones.) Admittedly this still sounds a bit abstract, but it has a clear, concrete basis in the aspects of AI chip design presented below, namely:
1. Optimization for Low Precision Calculations. (Think Lower-Bit Execution Units on a Logic Chip--But More of Them.)
It is fairly basic computer science knowledge that computers perform their calculations using strings of "bits"--the 0s and 1s of binary code--with increasingly advanced computers using longer and longer strings enabling more precise calculation. For instance, we may speak of 8-bit calculations involving strings of eight 1s and 0s (allowing for, at 2 to the power of 8, a mere 256 values) as against 16-bit calculations using strings of sixteen such 1s and 0s (which means, at 2 to the power of 16, 256 times 256, or 65,536, values).
However, it may be that even when we could have a 16-bit calculation, for particular purposes 8-bit calculations are adequate, especially if we go about making those calculations the right way (e.g., do a good job of rounding the numbers). It just so happens that neural net training and inference is one area where this works, the values often being known to fall within a limited range, the task coming back as it does to pattern recognition. After all, the pattern the algorithm is supposed to look for is either there or not--as with some image it is supposed to recognize.
Why does this matter? The answer is that on a given "logic" chip (the kind we use for processing, not memory storage) you can get a lot more 8-bit calculations done than 16-bit calculations. An 8-bit execution unit, for example, uses just one-sixth the chip space--and energy--that a 16-bit execution unit does. The result is that opting for 8-bit units means many more execution units can be put on a given chip, and that many more 8-bit calculations can be done at once (as against one 16-bit unit doing 16-bit calculations). Given that pattern recognition can be a very calculation-intensive task, the trade of precision for quantity of calculations can be well worthwhile.
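(For those who want to see the precision trade-off in the concrete, here is a minimal sketch--again my own Python illustration, with software integers standing in for what an AI chip does in silicon, and randomly generated "weights" as the data--that quantizes 32-bit values down to 8 bits and then measures what was given up.)

```python
import numpy as np

print(2 ** 8, 2 ** 16)   # 256 vs. 65,536 representable values

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=1000).astype(np.float32)

# Symmetric linear quantization: map the range [-max, max] onto the
# signed 8-bit integers [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Undo the mapping and see how much precision the rounding cost us--
# typically a tiny error, "good enough" for pattern recognition.
restored = q.astype(np.float32) * scale
print("largest rounding error:", float(np.abs(weights - restored).max()))
```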
2. "Model-Level Parallelism." (Chop Up the Task So Those Lower-Bit But More Numerous Execution Units Can Work Simultaneously--in Parallel--to Get it Done Faster.)
In general-purpose computers, logic chips are designed for sequential processing--an execution unit does one calculation by itself all the way through. However, computers can alternatively utilize parallel processing, which splits a task into "batches" that can be performed all at once by different execution units on a chip, or by different chips within a bigger system--the calculation split up among the units, which do their parts of it, with the results then combined. This permits a given piece of processing to be done more quickly.
That being the case you might wonder why we do not use parallel processing for all computing tasks. The reason is that parallel processing means more complexity and higher costs all around--more processors, and more of everything required to keep them running properly (energy, etc.). Additionally, not every problem lends itself well to this kind of task division. Parallelism works best when you can chop up one big task into a lot of small, highly repetitive tasks performed over and over again--in computer jargon, when the task is "fine-grained" with numerous "iterations"--until some condition is met, like performing that task a pre-set number of times, or triggering some response. It works less well when the task is less divisible or repetitive. (Indeed, the time taken to split up and distribute the batches of the task among the various processors may end up making such a process slower than if it were done sequentially on one processor.)
As it happens, the kind of neural network operations with which AI research is concerned are exactly the kind of situation where parallel processing pays off because the operations they involve tend to be "identical and independent of the results of other computations." Consider, for example, how this can be done when a neural network is asked to recognize an image--different execution units responsible for examining different regions, or parts, of an image all at once--until the overall neural network, "adding up" the results of the calculations in those individual units, recognizes the image as what it is or is not supposed to look for.
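(Here, too, a toy sketch may help--my own Python example of the "split, compute in parallel, combine" pattern, not a depiction of how any actual AI chip schedules its work. Each worker scans one strip of an "image" for bright pixels, and the partial counts are then added up.)

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((1024, 1024))   # stand-in for an image to be scanned

def count_bright(strip):
    """The small, repetitive, independent task each worker performs."""
    return int((strip > 0.99).sum())

strips = np.array_split(image, 8)  # chop the one big task into 8 batches
with ThreadPoolExecutor(max_workers=8) as pool:
    partial_counts = list(pool.map(count_bright, strips))

print(sum(partial_counts))         # "add up" the partial results
```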
3. Memory Optimization. (Given All the Space Savings, and the Predictability of the Task, You Might Even Be Able to Put the Memory on the Same Chip Doing the Processing, Speeding Up the Work Yet Again.)
As previously noted, in general-purpose computing there is a separation between logic chips and memory chips, which means the logic chips must access memory "off-chip" as they process data--because, given the premium on the chip's flexibility, it is not clear in advance just what data the processor will have to access to perform its task.
As it happens, the mechanics of accessing data off-chip constitute a significant drag on a processor's performance. It can take more time, and energy, to access off-chip data than to actually process that data, with all that means performance-wise--all the more so as processing speed has improved more rapidly than the speed of memory access.
However, if one knows in advance what data a particular process will need, the memory storage can be located closer to the processor, shortening the distance and saving time and energy. In fact, especially when there are processing and space savings such as those lower-bit execution units afford, the prospect exists of getting around the processing-memory "bottleneck" by putting the processing and the memory it needs together on the very same chip. Moreover, while chips can be designed for particular operations from the outset (a type known as "Application-Specific Integrated Circuits," or ASICs), chips can also be designed so that even after fabrication suitable programming can reconfigure their circuitry to let them most efficiently run operations developed afterward (these are called "Field Programmable Gate Arrays," or FPGAs). The result is, again, an improvement in speed and efficiency that is heavily exploited in AI chips to help maximize the capacity for low-precision calculation at the heart of their usage.
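(The cost of "far away" memory can be glimpsed even in ordinary software, as in the little Python experiment below--only an analogy, since the off-chip/on-chip distinction operates at the level of the hardware itself, and the exact figures will vary from machine to machine. Both loops copy the same amount of data, but one reads a contiguous stretch of memory while the other reads widely separated locations.)

```python
import time
import numpy as np

a = np.zeros((4096, 4096), dtype=np.float32)

t0 = time.perf_counter()
for _ in range(1000):
    a[0, :].copy()   # one row: 4,096 values sitting side by side
t1 = time.perf_counter()
for _ in range(1000):
    a[:, 0].copy()   # one column: 4,096 values, each 16 KB apart
t2 = time.perf_counter()

print(f"contiguous: {t1 - t0:.4f}s  scattered: {t2 - t1:.4f}s")
```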
To sum up: the value of AI chips lies in their use of more numerous but lower-bit execution units, organized for parallel processing, on chips physically arranged to reduce or eliminate the time and energy costs of memory access--all of which maximizes their efficiency at low-precision calculations in a way that by no means works for everything, but works well for neural net training and use.
Of course, knowing all that may leave us wondering just how much difference it has all actually made in real-life computing. As it happens, for all the hype about the hundreds of billions of dollars the market for AI chips will supposedly reach by 2030 or some such date, in the real-life year of 2021 they were an $11 billion market. That sounds like a lot--until one remembers that the overall chip market is over $550 billion, making the AI chip market just 2 percent of the total. Yes, just 2 percent--a reminder that, even if it can look from perusing the "tech" news as if these chips are everywhere, where everyday life is concerned we are still relying on fourth-generation computing--while, again, the AI chips we have, being inferior for general computing use, are largely used for research, and probably not about to displace the general-purpose kind in your general-purpose gadgets anytime soon.
Still, as one study of AI chips from Georgetown University's Center for Security and Emerging Technology reports, in the training and inference of neural networks such chips afford a gain of one to three orders of magnitude in speed and efficiency as against general-purpose chips. Putting it another way, being able to use AI chips for this work, rather than just the general-purpose kind, by letting computer scientists train neural nets tens, hundreds or even thousands of times faster than they otherwise could, may have advanced the state-of-the-art in this field by decades--bringing us to the present moment, when even experts look at our creations and wonder if "artificial general intelligence" has not already arrived.
Monday, April 24, 2023
Remember When France Looked Like the Future?
While natives of the English-speaking world may be surprised to hear it, there was a time when France looked like "the future," Paris a glimpse of "tomorrowland."
This may seem strange given that we associate that status with countries that became industrial superpowers--like Britain or the United States, or more tentatively, Germany or Japan--and France never quite had that stature, for various reasons. (There was maritime Britain's advantage over continental France in chasing after colonial markets, and in escaping the effects of continental land wars--like the defeat by Prussia in 1870-1871 that cost France critical natural and industrial resources. There was the extent to which a big and powerful financial sector that preferred speculation and capital export to investment in actually making stuff at home called the shots politically at critical times. And so forth.)
However, if France’s industry was less impressive than others with regard to scale, it consistently impressed qualitatively, the country tending to "punch above its weight class" in high-tech.
Thus while Britain became an industrial superpower on the basis of mechanized textile production, France produced the Jacquard loom--no great hit at the time, but a milestone in the development of computing. The redevelopment associated with Baron Haussmann, gas-lighting on a scale befitting a world capital, and the construction of the tallest building in the world out of iron (the Eiffel Tower) made Paris seem futuristically modern to Victorian eyes, while pre-Ford France was the world leader in auto production--and in aviation too, because of the excellence of French engine-building.
So it went in the post-World War II period. While people in other countries talked about nuclear power, France actually went and made it the basis of its grid (today getting 70 percent of its electricity from this source, while being a significant electricity exporter as well), and pioneered the "breeder" reactors that, it seemed, would be hugely important to any wider usage. France, not Britain or wartime-era rocket pioneer Germany, was the West European leader in space, becoming the third country to orbit a satellite in 1965, after only the Soviet Union and the U.S. The country also built high-speed trains, supersonic airliners, and the proto-Internet remembered as the "Minitel"--which gave the French public access to online amenities such as few Americans had at the time.
Why has all this not been more widely appreciated? I suppose there is the old "Anglo-Saxon" prejudice which held the continent--with the French the continentals par excellence--to be backward, poor and shabby in comparison with themselves. In the post-war era there was disdain for France's more statist and welfarist economic model--a disdain which only grew in the 1980s as France took an apparent left turn under a Socialist Party government while America and Britain went right, epochally. Information-age euphoria also had its effect--convincing a great many Americans in particular that the Web was about to make the world a utopia, that all the credit for that could go to Bay Area garage tinkering and absolutely nothing else, and that anyone doing anything in any other way was doomed to be left out forever, with the Europeans in particular an image of perpetual stagnation and inevitable decay.
The misapprehensions of this compound of nationalism and techno-libertarianism have been very, very slow to pass (Paul Krugman in 2011 quipping that "the US elite picture of Europe is stuck in a sort of time warp, in which it's always 1997, and we have the Internet and they don't"), and as yet have done so imperfectly. The result is that where, for example, Germany has managed to win some respect for its undeniable achievements as a manufacturer (such that even longtime Europe-bashers offer a word or two in praise for the German economy) it remains common to see France solely through the lens of the shopworn clichés discussed here, such that one can only wonder what we may be missing about France, and the world, as a result.
A "Greater Germany" in Military as Well as Economic Terms?
Some time ago I read Emmanuel Todd's discussion with Olivier Berruyer of the German economy's weight within the European Union, and was struck by the pointed difference between his assessment and my previous impressions. I had equated Germany with the nation-state that is the Federal Republic of Germany, with its population of some 80 million and GDP of $4 trillion. Todd, however, described an increasingly coherent economic space encompassing Western Europe's other German-speaking territories (Switzerland, Austria); the territories associated with them for centuries through the Prussian and Austrian Empires, especially about the Baltic and Adriatic coasts (Poland, Lithuania, Latvia, Estonia, Slovenia, Croatia--while the landlocked but also formerly Austrian territory of Czechia is also in there); and the Benelux countries (Belgium, the Netherlands and Luxembourg) and Sweden for good measure; within which, by virtue of trade, investment and the influence they bring, German interests are dominant.
The resulting entity, recalling the Pan-German dreams of another era, and which might be spoken of as an economic "Greater Germany," comes to 200 million people with a GDP of $8 trillion that can seem more than the sum of its parts (bringing together as it does unique strengths from the Dutch monopoly of cutting-edge photolithography to the financial weight of Switzerland and Luxembourg). A much more satisfactorily superpower-like entity in itself than the Federal Republic, with the help of partners like France, it is the dominant force in an undeniably superpower-like European Union of 450 million people with a $16 trillion economy.
Still, this Greater Germany has distinct limits, not least in the military sphere--limits only underlined since by the discussions of Germany's elevation of its defense spending, the proposals of February 2022, however heavily they weigh on Germany's citizenry, being a far cry from what it would take to make Germany stand as tall in Europe and the world militarily as it does economically, never mind realize any superpower aspirations.
It did occur to me that Germany could endeavor to integrate the armed forces of the "Greater Germany" into its own in some fashion--a process that had already begun in a small way with the Netherlands' integration of its ground forces into the German army. As it happens, this process has long been ongoing, entailing the integration of each of the three brigades of the Royal Netherlands Army into one of Germany's three army divisions. Taking place over many years, the process, according to the English-language statement on the web site of the Netherlands' Ministry of Defense, was completed this past March, with the last Dutch unit, a light armored brigade, now integrated into a German armored division.
What is one to make of all this? Certainly in examining Germany's ground forces, like those of other major West European states, one is struck by their relatively understrength quality--the Bundeswehr's "armored divisions" being on paper something a lot less numerous and heavily armed than the label implies. (On paper Germany has two "armored divisions." In reality it lacks enough main battle tanks for even one division on the old standard.) One may add that Germany has long strained to make good such shortfalls, in part because of difficulty recruiting enough volunteers simply to keep the gaps from being too disruptive. Filling out "understrength" formations with whole units from a neighbor and longtime NATO ally--one with which Germany has long had a special bilateral association in the German/Dutch Corps founded in the early 1990s, and which just so happens to use much of the same equipment German forces do (Leopard 2 tanks, Boxer armored fighting vehicles, Panzerhaubitze 2000 howitzers)--seems one way of compensating for the inadequacies in short order.
The Dutch Ministry of Defense emphasizes in the aforementioned statement that "the Netherlands remains in control of the decision whether and where to deploy its military forces," but one may presume that there is an expectation of cooperation in any really important eventuality requiring the dispatch of an entire division. That said, the Netherlands is but one European state, and represents a relatively limited part of the potential "pool" of Greater Germany's military power--but that this happened at all can seem suggestive indeed of the direction in which Germany would like to move, and in which at least some of its neighbors and partners may be prepared to follow.
Saturday, April 22, 2023
Will Democracy Survive in France? Reflections on Emmanuel Todd's Comments
From the standpoint of 2023 the history of the twenty-first century can seem one long succession of deeply unpleasant shocks, which by the '10s were increasingly manifesting themselves in the most fundamental aspects of the political life of Western democracies in ways that no one could overlook. France, far from being an exception to the pattern, has instead been a particularly conspicuous case. The country saw the collapse of its established party system, the emergence of a mass protest movement in the Gilets Jaunes (Yellow Vests), the ascent of the far right, and even the country's military establishment's openly threatening a coup--all as the government, headed by a "strong presidency" that, in Simon Kuper's words, is "the closest thing in the developed world to an elected dictator," displayed an increasing taste for rule by emergency measures to the point of routinizing them amid conditions of pandemic, war, economic stagnation and growing labor strife. The result is that even so mainstream and conservative a publication as The Financial Times recently ran an editorial (by the aforementioned Kuper) calling on France to move on from its Fifth Republic to a sixth.
Still, in the English-speaking world we hear about France mainly from other English-speaking writers, often not just outsiders to the country about which they write, but questionably informed about their subject, while bringing to the discussion a heavy freight of view-distorting prejudices (as the "experts" presented by a media historically lousy at identifying genuine expertise so often turn out to be--remember the drivel they used to write about Japan, and still write about that country today?). Meanwhile the commentary in France itself has been increasingly and unhelpfully skewed. The result is that when the social scientist Emmanuel Todd (author of The Final Fall, After the Empire and The Third World War Has Begun) gave Marianne's Etienne Campion a long interview regarding the protests against President Emmanuel Macron's use of Article 49.3 to force a raising of the retirement age down the throat of the French public, after proving unable to get his law passed through the routine legislative process, the item seemed to me well worth some attention.
As one might guess from his prior work, Todd is a staunch critic of Macron's act. The raising of the retirement age itself is in his view unjust, useless and incoherent ("injuste, inutile et incohérente"), and its being forced through in this manner unconstitutional. Moreover, there is a context here that makes the act much more than an abuse of the presidency's emergency powers to circumvent democratic institutions--a context to which neoliberalism is critical. Todd regards that ideology not as some revival of the classical liberalism of John Locke and Adam Smith, but as an "economic nihilism" summed up in such "idiocies" ("idioties") as Margaret Thatcher's famous remark that "there is no such thing as society."* Indeed, so far as Todd is concerned, making short-term individual economic rationality ("la rationalité individualiste," "la rationalité économique à court terme") the determining principle of all of social life--with its destruction of the economic base (as through the deindustrialization of the U.S., Britain and France) and of the social supports that give the population an indispensable minimum of security (sufficient for people to start families, for example)--is "destructive of the capacity of populations to reproduce and societies to survive," literally, as he shows on the basis of falling life expectancy in the U.S. and sub-replacement birth rates across the advanced world (with France's high natality bespeaking its refusal of neoliberalism--and, in an inversion of the neoliberal view of France, that refusal actually the country's greatness).**
Increasingly exposed for the nihilism it is, Todd holds neoliberalism to be "dying" (alluding even to a return to "state entrepreneurship" in that great champion of neoliberal policy, the U.S.)--in spite of which Macron, as this pension reform shows, is still pushing the neoliberal project, some glib, vague and unconvincing rhetoric about "reindustrialization" notwithstanding. Of course, if Macron is a neoliberal in a world becoming less so this requires some explanation, and Todd offers it, speaking of political inertia--and of the personal defects of Macron himself, not least a "cognitive deficit" (Todd uses the term at least three times over the interview's course) and serious problems of personality. These include a hatred for "ordinary people," and even a child-like "testing the limits" of what is allowed him as he deliberately provokes the public with his style as well as his substance.
Why is Macron managing to get away with such behavior? Todd argues that this is a matter of the combination of France's electoral system, and specifically its not offering voters proportional representation (as seen in the two-round presidential elections), and the divisions between left and right in the country as represented by the left alliance ("Nupes") anchored by the "Unsubmissive France" party of Jean-Luc Melenchon and by Marine Le Pen's "National Rally" ("RN"). Both are opposed to the neoliberal course, but they are divided by what Americans today would call the "Culture War," in which educational levels factor (people who are not well-off but still hold college degrees favoring Nupes, while their non-college-educated counterparts favor Le Pen). Still, while acknowledging the barriers to civility between them, Todd--who admits to worries that on its present course the country is headed for a collision between the neoliberal "state-finance aristocracy" and the far right--ends the interview with a call to all French persons, whatever their education, wealth, party or anything else, to be the "adults" and "stop the child," and for Nupes and the RN to make some temporary pact under which they join forces to reform the voting system and introduce the "proportional" ballot France does not have--which he now thinks just about the only thing that can save French democracy.
Considering Todd's remarks I found his appraisal of neoliberalism of particular interest--his characterization of it as nihilistic in the cultural and economic spheres fair enough (and Todd, I think, quite right to point out that the view that there is "no such thing as society" is not in the tradition of Adam Smith, however much Thatcher had her economic thinking done for her by the folks at the Adam Smith Institute). This extends to his more specific remarks about family formation, birth rates, and the rest. Also noteworthy, and quite correct, is Todd's recognition of the essential endurance of neoliberalism underneath Macron's gestures toward protectionism (a subject on which I have found myself writing quite a bit these last couple of years as people speak of neoliberalism's supposed demise).
However, it seems to me that France is less unique here than some French observers seem to think (Todd perhaps included). Bruno Amable and Stefano Palombarini call Macron "the last neoliberal" (and I got the impression that Todd is thinking along the same lines), but I remember all too well that reports of the death of neoliberalism have been "greatly exaggerated" over and over again these past four decades (amid the neomercantilist fad of the '90s, after the 2007 crash, etc., etc.). And it seems to me that pretty much wherever one looks across the advanced world we see neoliberals confusing onlookers with protectionist gestures (increasingly numerous and disruptive as these have been) as they otherwise "stay the course"--that neoliberalism, for all its faults and failures, and for all the opposition it has aroused, is not being given up, even in a gradualist way, but at most adapted to meet the present emergency. (We see it in fiscal and monetary policy, we see it in social policy, we see it above all in the continued centrality of creditism-fueled and speculation-minded financialization in the economic model, among much, much else. I might add that Todd's reference to "the return of the entrepreneurial state" in the U.S. must be highly qualified--a very limited matter inside as well as outside the national security arena, and one that seems likely only to dwindle in the months and even years ahead, given the combination of the shifting focus of a Joseph Biden administration that was never much of a candidate for a break with the past, the current makeup of Congress, and the response likely to follow in the wake of the current cryptocurrency and bank failures from Silvergate on.)
This exaggerated sense of the demise of neoliberalism extends to the domestic scene in France where, while popular sentiment is clearly and forcefully against it (across educational levels, a point Todd discusses in a more nuanced way than we tend to see in the U.S.), I am not sure how deep the opposition to neoliberalism of Melenchon and Le Pen goes--neither having said or done anything to make me think they, or the other leaders of the tendencies they head, are anything but another couple of politicians promising change on the campaign trail but in office likely to continue and even intensify the policies seen to date. (Britain affords a striking example. There Keir Starmer posed as a socialist with a social democratic platform when it was convenient, then discarded that platform in the most brazen manner imaginable--such that the country's choice in the next General Election, barring some unforeseen change, is between the Tory neoliberalism of Rishi Sunak and the New Labour, Blairite neoliberalism of Starmer.)
Being somewhere between "faint" and "facade," the two parties' supposed common opposition to neoliberalism is thus no foundation for overcoming their perhaps irreconcilable differences. Meanwhile, it may be that proportional representation is the last thing that Le Pen and the RN want, precisely because of how, in 2002 and more significantly in 2017 and 2022, the lack of such representation lifted this party--which even now holds a mere 15 percent of the seats in the National Assembly--into the second round of voting in the presidential election, where in 2022 it got over 40 percent of the vote. Putting it bluntly, the RN's best hope for winning in 2027 may be that disgust with Macron, and the blocking of any alternatives to Macron but themselves, push them over the top and put Le Pen in the Élysée Palace. (Indeed, in making his call Todd can seem to be calling on Le Pen and the RN to link hands with their center-left rivals to help save France from . . . Le Pen and the RN.) The result is that some Nupes/RN alliance coming to the rescue of French democracy in the manner Todd describes seems to me very, very unlikely indeed.
All that said (interesting as Todd's remarks on the subject were, the more so in that we are so used here in America to the centrist-neoliberal press hailing the Macrons of the world as the "adults in the room" and their critics as unruly children), I am less sure of what to make of Macron the individual--and of the relevance of his personality to the situation at hand. After all, the French government's line in policy does not emanate from the outmoded thinking of a single individual stuck in the past, but rather from the preferences of the country's elites, which are entirely in line with those of their counterparts the world over (as consistently shown by the decisions of the Constitutional Council, down to its supporting Macron in his use of 49.3 to push through the reform). Likewise the opposition to neoliberalism, and indeed the recent escalation of that opposition, is not unique to the French people--the French opposition in this case, if more focused and dramatic than elsewhere, still clearly part of a growing international trend (as France's neighbors Britain and Germany see historic strike action). The same goes for Macron's extreme display of disrespect for the population and authoritarian personal style in response to the protests--disdain for the inevitable widespread dissent being virtually a requirement for the job of imposing such hugely unpopular policies. (Indeed, considering Macron, the backlash against him, and his answer to it, can one really say he is very different from Margaret Thatcher--especially the Thatcher of the coal strike and the poll tax riots? Or Blair? Or Sunak and Starmer today? Or any number of other aspirants to the status of being their own country's Thatcher, such as Macron's predecessor Nicolas Sarkozy was?) Still, it is not too great a stretch to believe that he is enjoying playing the part--and there seems nothing to be said in praise of that. One also cannot rule out that his doing so in these different circumstances may have different, very dangerous, consequences.
* No matter how her loyalists try to spin it, "There is no such thing as society" is exactly what Thatcher meant when she spoke those words, as you can see for yourself by looking at the full 1987 interview with Woman's Own posted at the Margaret Thatcher Foundation's web site.
** For Todd it is significant that the great neoliberal "success" story, South Korea, is at the extreme opposite end of the spectrum, with a Total Fertility Rate of 0.8, as against France's near-replacement level of 1.8.
Friday, April 7, 2023
Microsoft's "Sparks of Artificial General Intelligence" Study: Some Reflections
Since the release of GPT-4 a scarce month ago we have seen an abundance of comment on the capabilities of the system--overwhelmingly superficial comment, with little critical thought in evidence, such that a discerning observer may read a great deal of it without having much basis for judging its actual significance. (We are told over and over and over again that GPT-4 did well on a Bar Exam. That sounds impressive. But what does that really mean?)
However, a new study from a team of scientists at Microsoft, "Sparks of Artificial General Intelligence: Early Experiments with GPT-4," offers something more substantial. Working from a definition of intelligence laid out in a noted 1994 editorial signed by a large (52-member!) group of psychologists ("a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience"--a capability implicitly encompassing a broad range of cognitive skills and abilities); a standard for what constitutes "Artificial General Intelligence" (an artificial intelligence able to perform in the aforementioned ways at a level at least comparable to a human's); and an understanding of how GPT-4 functions (as a neural network-based "Large Language Model" (LLM) trained on a vast body of web-based text "using at its core a self-supervised objective of predicting the next word in a partial sentence"); they devised a series of challenges which would be "novel and difficult" for such a program. In doing so they endeavored to confront GPT-4 not only with a variety of demanding problems (coping with imagery, producing microcode, solving mathematical problems, etc.), but also with highly idiosyncratic challenges that would require the chatbot to synthesize knowledge and skills from different areas to cope with problems for which its training was very unlikely to prepare it--demonstrating that more than memorization was involved, and that it instead possessed a "deep and flexible understanding of concepts, skills and domains." An excellent example is one test requiring it to present the proof of the infinitude of primes (aka "Euclid's theorem") in the form of Shakespearean poetry, which forced the system to combine "mathematical reasoning, poetic expression and natural language generation." Still another required the system to solve a riddle whose answer turns on the geographical location of the event described, working purely from clues that, it might be said, could only be handled on the basis of the "common sense" that has been such a challenge for AI developers. (Specifically it was expected to guess the color of a bear a hunter shot in a location where walking one mile south, one mile east and one mile north brought him back to where he started--which can only be the North Pole, the one spot from which that south-east-north walk returns one to the starting point, making the bear a polar bear, and its color white.)
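For readers curious about the mathematics GPT-4 was asked to dress in Shakespearean verse, a conventional statement of Euclid's proof--the standard textbook argument, given here for reference rather than as the study's or the chatbot's wording--runs as follows:

    \textbf{Theorem (Euclid).} There are infinitely many primes.

    \textit{Proof.} Suppose there were only finitely many primes $p_1, p_2, \ldots, p_n$, and let
    \[
      N = p_1 p_2 \cdots p_n + 1 .
    \]
    Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$. But every integer greater than $1$ has at least one prime divisor, so $N$ has a prime divisor lying outside the supposed complete list--a contradiction. Hence the primes are infinite in number. $\blacksquare$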
All of this established, the Microsoft study then proceeds to detail the experiments and their results, and then to present more general conclusions drawn from them. Ultimately the authors judged that "in all of these tasks, GPT-4's performance is strikingly close to human-level performance"--it even met the common sense challenge--such that they "believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system" (emphasis added).
Because italicizing the relevant text doesn't seem to do it justice I am going to repeat it: "reasonably . . . viewed . . . as an early . . . version of an artificial general intelligence."
In other words, it is reasonable to say that not only is artificial intelligence here, but AGI is here. Now. As an actuality that millions of people are tinkering with.
Still, in looking at that statement one should not forget the qualifications. It may be reasonable to say GPT-4 is an AGI . . . but an "early" and "still incomplete" version of one, and the team devotes as much time to explaining the limits of the AGI at hand as to wowing us with its genuinely impressive capabilities. Key to their analysis is the distinction the psychologist Daniel Kahneman drew between "fast thinking" and "slow thinking." The former is automatic and intuitive, the latter controlled and rational--the kind where we have to consciously reason our way to the solution of a problem, a process which is, as the terminology implies, slower and more "effortful," but also likely to be more accurate and reliable.
As one might guess from the description of its functioning, the word-predicting chatbot is a fast thinker, which "essentially" may be said to "come up with an answer in . . . a single pass of the feedforward architecture." This works well with some problems, but not others, particularly those tasks which are "discontinuous" and so "require planning ahead . . . or a 'Eureka idea'"--and when faced with such tasks GPT-4's performance suffers, the authors offering as one example of this another problem related to prime numbers. While GPT-4 did surprisingly well at explaining Euclid's theorem by way of Shakespearean poetry (inventively writing that explanation out as a dialogue between Romeo and Juliet), it did not do well at what would seem the much humbler task of simply giving us a list of the prime numbers between 150 and 250--the authors arguing that this is the kind of "slow thinking" task where most of us would "get out a scratchpad" and work it out, whereas GPT-4 has no such function, or even the basis for one. As this implies, GPT-4 is simply not equipped to assess the quality of its own information or thought process, and is not very good with context--deficiencies reflective of other lacks, like the absence of long-term memory, and of faculties for making use of such memory through "continual learning." Together these leave it very sensitive to the form as well as the content of inputs, "the framing or wording of prompts and their sequencing in a session" easily throwing it even where tasks it can perform well are concerned. The authors of the study also stress that GPT-4 lacks the "confidence calibration" that would let it distinguish between when it is "guessing" and when it actually "knows," and that, as many have remarked, it is prone to "hallucinating" false information. The result is that for its designers to improve this "early" and "still incomplete" AGI into something more complete they would have to add a capacity for "slow thinking" as well as "fast thinking," with the former overseeing the latter; long-term memory; and a continual-learning capacity to take full advantage of that faculty--along with, the authors add, a shift beyond single-word prediction toward "higher-level parts of [a] text such as sentences, paragraphs or ideas" (which they recognize may or may not emerge from a "next-word-prediction paradigm," even with these improvements).
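To make concrete just how mechanical the "scratchpad" version of that task is, here is a minimal Python sketch of it (my own illustration; nothing of the kind appears in the study):

    # List the primes between 150 and 250 by trial division--the kind of
    # stepwise, checkable "slow thinking" the authors contrast with the
    # model's single forward pass.
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    primes = [n for n in range(150, 251) if is_prime(n)]
    print(primes)
    # [151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199,
    #  211, 223, 227, 229, 233, 239, 241]

Eighteen numbers, each obtained by nothing cleverer than trial division--precisely the sort of intermediate, verifiable work a "scratchpad" records and a single pass of a feedforward architecture does not.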
Even as someone long inclined toward skepticism regarding the significance of the performance of systems like GPT-4 (I have tended to focus on how AI copes with the physical world, an area where the record has been rather disappointing), I have to admit that the case made in the study impressed me--rather more than anything I have heard or read about OpenAI's products since these started grabbing headlines last year. Indeed, I find myself thinking that here, after a great many false starts, AGI may finally be shifting from science fiction to science fact, with some of the capabilities that were supposedly "just around the corner" for decades finally arriving. (Indeed, in this month marking the forty-first anniversary of Japan's Fifth Generation Computer Systems Initiative, it seems that we may finally be getting what was promised then, with important economic and cultural implications perhaps not too far off.)
However, I am also impressed by the constructive criticism offered herein, which seems to me well worth thinking about, particularly in regard to the necessity of a slow thinking function overseeing the more "autocomplete"-like function being demonstrated (the more so in that, I suspect, the shift from single-word prediction to dealing with higher-level parts of text would likely depend heavily on it). If we accept GPT-4 as increasingly human-like when it comes to fast thinking, just where does AI research today stand with regard to the slow kind? And for that matter, where does it stand in regard to the integration of the two in a useful manner? Granting the case made here, this would seem the next issue to think about--with the state and rate of progress determining whether the advance of AI research that has seemed so blistering in recent months accelerates, slows or even comes to another screeching halt of the kind that has renewed cynicism toward the field time and time again.
The Russian Armed Forces' Robotization Aspirations
Given both the escalation of interstate conflict in our time, in which tension between Russia and the West, and the Russo-Ukrainian conflict in particular, have been central; and the upsurge of interest in artificial intelligence research and its applications in the military as in other spheres; the March 2023 report by the RAND Corporation's Krystyna Marcinek and Eugeniu Han regarding the Russian government's aspirations to the robotization of its military forces (Russia's Asymmetric Response to 21st Century Strategic Competition: Robotization of the Armed Forces) would seem especially timely.
The document, which runs to 132 pages all told, details the concept of robotization not only as understood generally but as it appears to be understood specifically among Russian military thinkers and policymakers; the Russian government's stated plans in this direction; the actual work done to achieve robotization of the Russian armed forces; and Russia's technological-industrial potential to achieve its goals.
As might be expected the mass of organizational, technical and even linguistic detail is considerable (the last seeing the authors decipher particular usages in the Russian literature for the benefit of Western readers who, even if among the very few possessing Russian-language fluency, might not have a good grasp of the relevant terminology). Still, the authors manage to handle the material intelligibly--and, in this moment when emotions are running high and the old analytical baggage surrounding assessments of Russian industrial, technological and military performance can seem to weigh even more heavily than usual, with some fair-mindedness as well.
The result is that they come to some interesting conclusions about the differences in thinking on military robotization as between Russia and the West. In particular, they argue that the Russian government is, if anything, more ambitious than the West with regard to robotization--one can see this in its greater hope of replacing rather than augmenting human personnel with robots (hence, "robotization"); the signs it gives of greater expectations of soon being able to deploy significant numbers of combat robots on the ground, like unmanned tanks; and its apparently greater willingness to countenance those systems using deadly force on an autonomous, no-human-in-the-loop basis. The authors also acknowledge the distance between those ambitions and the actual state of the art in Russian, and everyone else's, robotics. And, as suggested by the title of the study itself (foregrounding "asymmetric response" while relegating mention of robotics to the subtitle), they stress the extent to which the government's ambitions reflect the difficulties of its situation--not least its demographic limitations relative to its rivals, and what its government may see as a geopolitical balance shifting disadvantageously in the near term. (Russia, after all, is a country of some 145 million--as against a NATO alliance of some 900 million and growing, or a China of 1.4 billion--while even a Russian military devoted solely to homeland defense of the Russian Federation's territory would have that less than 2 percent of the world's population trying to hold more than a tenth of the world's landmass.)
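As a back-of-the-envelope check on those proportions (the round figures below are commonly cited early-2020s estimates of my own choosing, not numbers taken from the RAND report):

    # Rough shares behind the demographic point--illustrative inputs only.
    world_population = 8.0e9      # ~8 billion people (2023 estimate)
    russia_population = 146e6     # ~146 million people
    world_land_area = 148.9e6     # km^2 of land surface
    russia_land_area = 17.1e6     # km^2

    print(f"Population share: {russia_population / world_population:.1%}")  # ~1.8%
    print(f"Land share:       {russia_land_area / world_land_area:.1%}")    # ~11.5%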
The result is that the Russian government's high aspirations in the field of robotics can seem to come down to the hope of "wonder weapons" helping to salvage a worsening security situation--to an implausible degree. After all, overoptimism about the possible rate of progress aside, the fact remains that while Russia is not without some strengths here (among them a good STEM educational system, a relatively well-digitized economy and a genuinely serious government commitment to the goal), it is far from clear how it would vault ahead of the rest of the world in this area. Russia is not only well behind other nations even in relatively easy areas like the less-demanding aerial drones, to the point of relying heavily on foreign inputs for their production, but numerous other factors handicap it in any "race to AI superweaponry"--be it its having a much smaller industrial base than many other major states (its industrial disadvantage actually far exceeding its demographic disadvantage); its pointed weakness in many of the specific technological areas relevant to producing such weaponry (like the production of semiconductors, and their own required inputs); the difficulties posed for its activity by a sanctions regime increasingly incorporating the most important producers in those fields where it relies on imports (directed against not just Russia, but its important partner China); and the many enduring obstacles to its development broadly (like slight industry-academic collaboration, and the prospect of a "brain drain" as underemployed talent leaves the country for work abroad). Still, no one should be under any illusions about such problems being uniquely Russian--by all evidence, every major power is currently and significantly a captive of technological hype as it copes with a world situation that is not all that any of them would like it to be.
Reporting the G-7 Economies
Americans are commonly stereotyped as less interested in the outside world than other peoples--a function, some suppose, of the country's position in world affairs (its dominant status, which has others paying more attention to it than vice-versa) and of more distinct features of its geography and culture (its continental scale, its being part of an already long predominant Anglosphere, its tendency to receive rather than send out immigrants, etc.).
Whether one sees the stereotype as containing any truth or not, the country's media seem to behave as if it were the case, adding a lessened attention to events in much of the world to its other biases--its preference for politics over policy, personality over material facts, narrative over nuance, and a centrist perspective squeamish about or hostile to contextualization and big-picture thinking, inclined to neoliberalism and neoconservatism, and deeply deferential to Establishment "expertise."
All this means that what coverage other countries get is of a very particular kind, as where the economic life of the Group of Seven advanced industrialized countries is concerned. It by and large approves of Britain's role as neoliberal trailblazer and "financial superpower" (and is much less interested in Britain's deindustrialization). It is less warm in its attitude toward Germany and Japan, but those two countries' economies are so big, powerful and dominant in their regions that the media are compelled to give them some heed--and if it hewed to narratives of Eurosclerosis and Japanese "lost decades" for a long time, it has come to acknowledge manufacturing successes, in the case of Germany especially, but also sometimes Japan's. (It also helps that playing up those countries' economic weight has been prominent in the American media's tendency to encourage those countries' "rearmament"--while given the campaign against China's IT sector there is no getting away from Japan's colossal profile in areas like chip-making inputs and robotics.)
By contrast there seems to me much less coverage of the other three G-7 members--France, Italy and Canada. Where the first two are concerned, and France particularly, the coverage revels in neoliberal clichés about an "old Europe" of bloated governments, overgenerous welfare states, uppity, strike-happy workers, ever-low and eroding productivity and "competitiveness," and an oppressed entrepreneur class looking longingly across the English Channel, and better still across the Atlantic, toward FREEDOM! (Indeed, such cliché, which is to be found in France's own press as well as that of foreign countries, has reared its ugly head in American coverage of the French protests against the raising of the country's retirement age--cliché consistently acknowledged even by those who would offer a more sympathetic take.) Italy is hazily perceived in a similar manner (the tendency to treat a whole continent as a nearly homogenous blob may not be as bad in Europe's case as in that of "Africa" or "Asia," but it is not as much better as one might think), but is mostly ignored. And Canada is ignored even more completely--such that one has the irony of America neighboring one of the world's largest and most important economies while its press pays almost no attention to the fact at all, so that I suspect even Americans attentive to the relevant areas of the news know less about the economy of their own northern neighbor and North American Free Trade Agreement partner than they do about any of the other G-7 countries.
Such is the "quality" of the media that centrists endlessly sing as our salvation from fake news-purveying hordes.
Sunday, April 2, 2023
On Italy's ChatGPT "Ban"
The Italian Data Protection Authority (Garante Per La Protezione Dei Dati Personali, or GPDP) has imposed an (I quote the official English translation of its statement on its web site) "immediate temporary limitation" on ChatGPT--strictly speaking, on OpenAI's processing of Italian users' data through the service.
According to the Authority's own statement (which you can see in the original, and in English translation, at its own web site, here) the action has nothing to do with the fears of AI-out-of-control of which so much is now being made. Rather the GPDP cites the more commonplace Internet regulation concerns of the protection of user privacy ("no information is provided to users and data subjects whose data are collected by Open AI; more importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to 'train' the algorithms on which the platform relies"), and the fear that children will be subjected to inappropriate content ("the lack of . . . age verification mechanism expos[ing] children to receiving responses that are absolutely inappropriate to their age and awareness").
One may speculate that these stated concerns do not exhaust the GPDP's worries about the technology--and even that other concerns may actually be of higher priority than the ones stated. Still, that these are the ones presented is a reminder that, in what can seem a silly rush to see the release of GPT-4 as a bad sci-fi "rebellion of the robots" scenario, we may be overlooking humbler but quite important concerns--the more so in that so much is being made of some of these exact concerns in relation to other technologies.