Friday, August 21, 2020

From Bubble to Bust--and Perhaps Boom Again? Notes on Technological Hype

At this point I am old enough to have lived through a number of cycles of boom and bust in technological hype. And I think I have noticed a possible pattern, certainly with regard to the "bust" end of the broader cycle. In particular it seems that this tends to combine three features:
1. The failure of much-hyped technologies to actually materialize.
2. Economic downturn.
3. A crisis which gives the lie to self-satisfaction over some particularly significant claim for our technology as a revolutionary problem-solver.
Back in the late '90s there was enormous hype over computers. Of course, this period did see some genuine, significant technologies of everyday life--personal computing, Internet access and cellular telephony--begin to become genuinely widespread, and be refined considerably in the process, culminating in the extraordinary combination of power, versatility and portability of our smart phones and tablets with their 5G-grade broadband two decades on.

Yet as a review of Ray Kurzweil's predictions for 2009 makes clear, much that was widely expected never came to pass. Artificial intelligence, virtual reality, nanotechnology. To put it mildly, progress in those areas, which would have had far more radical consequences, proved . . . slower, so slow that expectation fizzled out in disappointment.

Meanwhile, the New Economy boom of the late '90s turned to bust, a bust which never quite turned into boom again, so that not the crash but the years of growth turned out to be the historical anomaly, and even the most credulous consumers of the conventional wisdom were reminded that the idiot fantasy that the economic equivalent of the law of gravity had been suspended was just that, an idiot fantasy. This was all the more painful because, in contrast with the New Economy promises that science was liberating us from our reliance on a finite and frail base of natural resources, we confronted spiking natural resource prices, above all fossil fuel prices, that brought on a global fuel and food crisis (2006-2008). And then the comparatively crummy performance of the early twenty-first century, which was really just another, much less impressive bubble--and that not in any fancy new tech, but in old-fashioned real estate and commodity prices--led straightaway to the worst economic disaster since the 1930s, from which we have been reeling ever since.

Science fiction, a useful bellwether for these things, showed the reaction. Where in the wake of the '90s boom and hype the genre showered readers in shiny, ultra-high-tech Singularitarian futures--and through sheer momentum substantially continued to do so through the early '00s (it can take years to finish a book, years after that to get it into print)--afterward it was all post-apocalypse and dystopia, like World War Z and The Hunger Games and Paolo Bacigalupi's stuff. And even when science fiction bothered with the future it was the future as it looked from the standpoint of the past--as with steampunk, which was also very popular post-2008.

Of course, the popular mood did not stay permanently down in the dumps. The economy made a recovery of sorts--a very anemic one, but a recovery all the same.

And there was renewed excitement about many of the very technologies that had disappointed, with the companies and the press assuring us that they were finally getting the hang of carbon nanotubes and neural nets and virtual reality and the rest. Soon our cars would drive themselves, while drones filled our skies, bringing us, if we wanted them--and we would--our virtual reality kits, our supermarket orders of clean meat.

Alas, just about none of that came to pass either--and where in the '90s we at least got the PCs and Internet and cell phones, I cannot think of a single real consolation prize for the consumer this time around. Meanwhile a new crisis hit--in the form of a pandemic which underlined just how un-automated the world still was, how reliant on people actually physically doing stuff in person. And underlined, too, that for all the talk of our living in an age of biotechnomedical miracle that has filled the air for as long as I can remember, the war on viruses is very, very, very far from being won. All of this contributed to an even worse economic disaster than the one seen in 2008 (not that things ever normalized after that).

It seems to me not just possible, but probable, that the combination of technological disappointment, crisis and economic downturn will spell another period of lowered expectations with regard to technological progress. Indeed, I have already been struck by how the chatter about the prospect of mass technological unemployment in the near term vanished amid an economic crash generating plenty of the old-fashioned, regular kind.

Of course, in considering that one should acknowledge that some point to the current crisis--precisely because of the way in which it has demonstrated certain societal vulnerabilities and needs--as likely to spur further efforts to automate economic operations, or at least permit them to be performed remotely, with implications extending to, among much else, those drones and self-driving cars. A similar logic, some hold, may work in favor of clean meat.

It is not wholly implausible, of course. Crises can and do spur innovation, when the backing is there--when the prevailing institutions elect to treat a crisis as a crisis. However, I have yet to get the impression of any such sensibility among those flattered by the feeble-minded everyday usage of terms like "world leader."

Because No One Else Seems to be Keeping Tabs--A Glance Back at the Past Decade's Techno-Hype

The vast majority of people, I find, are very "well-trained" consumers. By that I mean that they have been trained in the way marketing hucksters want them to be. They completely swallow the hype about how soon a thing will be here and how much difference it will make in their lives--and then, after the product arrives later, or maybe does not make so much difference, or maybe never even arrives at all and therefore makes no difference whatsoever, they go on thinking in terms of the hype rather than their own lived experience. They dutifully remember nothing and learn nothing, so that they are just as ready to believe the promises of the next huckster who comes along. And they pour scorn down on the head of anyone who questions what might most politely be called their credulousness--when they are not absorbed in the smart phone they believe is the telos of all human history--adding meanness to their extreme stupidity.

Still, as the words "vast majority" make clear, not everyone falls into this category. Some are a little more alert, a little more critical, than others. And sometimes those with the capacity to get a little more skeptical do so.

I think we are approaching such a period, because so many of the expectations raised in the 2010-2015 period are, at this moment, being deeply disappointed--and not simply because the ill-informed hacks of the press have oversold things far beyond their slight comprehension, but because in many a field those generally presumed to be in a position to know best (like CEOs of companies actually making the stuff in question) have publicly, often with great fanfare, announced specific dates for the unveiling of their promised grand creations, and those dates have come and gone, sometimes again and again, as a world in need of the innovations in question goes on waiting.

Consider the Carbon NanoTube (CNT) computer chips that were supposed to keep computing power-per-dollar rising exponentially for a generation as the old silicon-based chips hit their limits.

Back in 2014 IBM announced it would have a commercial CNT chip by 2020--winning what has with only a little melodrama been called a "race against time."

Well, it's 2020. That commercial chip, however, is not here. Instead we are hearing only of breakthroughs that may, if followed up by other breakthroughs, eventually lead to the production of those chips, perhaps sometime this coming decade.

Indeed, the latest report regarding the Gartner Hype Cycle holds that carbon-based transistors are sliding down from the "Peak of Inflated Expectations" into the "Trough of Disillusionment."

Perhaps unsurprisingly, the progress of artificial intelligence, on which so many were so bullish a short while ago, is also slowing down--in part, for lack of computer capacity. It seems, in fact, that even carbon nanotube chips wouldn't get things on track if they were here. Instead the field's spokespersons are talking quantum computers, which, to put it mildly, are a still more remote possibility.

Also unsurprisingly, particularly high-profile applications for that artificial intelligence are proving areas of disappointment as well.

To cite an obvious instance, in 2013 Jeff Bezos said that within five years (by 2018) drone deliveries would be "commonplace."

In considering the absence of such deliveries years after those five years have run their course the press tends to focus on regulatory approval as the essential stumbling block, but, of course, the requisite technology is apparently still "under development."

Perhaps more germane to most people's lives, back in 2015 Elon Musk predicted that fully autonomous cars (Level 5) would hit the market in 2017.

That prediction has fared even worse, with the result that the self-driving car (certainly to go by the number of articles whose writers smugly use phrases like "reality check" in their titles) is starting to look like the flying car. (Or the flying delivery drone?)

The Oculus Rift created quite a sensation back in 2013.

Alas, today the excitement that once surrounded it is even more thoroughly a thing of the past.

Clean meat was supposed to be on the market in 2019, if not before the end of 2018.

Now in 2020 the Guardian is talking about clean meat hitting the market "in a few years." (For its part, IDTechEx says, think 2023.)

In area after area, what was supposed to have been here this year or the year before that or even before that is not only not here, but, we are told, still a few more years away--the Innovations talked up by the Silicon Valley Babbitts and their sycophants in the press receding further and further into the future.

Will it necessarily always be so? Of course not. Maybe the dream deferred will be a dream denied only temporarily, and briefly, with the semiconductor factories soon to be mass-producing CNT chips, which maybe along with quicker-than-expected progress in quantum computing will keep the AI spring of the twenty-first century from giving way to a long, cold AI winter, while perhaps even without them the delivery drones and the self-driving cars arrive ahead of schedule. Maybe, if still rough around the edges, next year will be VR's year, while this time it really is true that clean meat will be in our supermarkets "in a few years."

However, as one old enough to remember the extraordinary expectations of the '90s in many of these precise areas--nanotechnology, artificial intelligence, virtual reality--the disappointment is already very familiar, and worse for that familiarity, as well as how little in the way of tangible result we have been left with this time around. (The disappointments of the '90s were colossal--but we did get that explosion of access to personal computing, cellular telephony, the Internet, and those things did improve quite rapidly afterward. What from among the products of this round of techno-hype can compare with any of that, let alone all of it?) And if anything, where the development is less familiar but perhaps potentially more significant, the disappointment is even more galling. (Clean meat could be a very big piece of the puzzle for coping with the demand of a growing population for food, and the environmental crisis, at the same time.) In fact I cannot help wondering if we will not still be waiting for the promised results in twenty years--only to be disappointed yet again, while the hucksters go on with their hucksterism, and a credulous public continues to worship them as gods.

Wednesday, August 19, 2020

Contextualizing the French War in the Sahel

When we hear about the French operations in Mali and surrounding countries, I suppose few have much sense of how extraordinary the action is. I suspect that those who follow the news casually take it for granted that France has long been involved militarily in sub-Saharan Africa, without much sense of history or the details. This is all the more significant because, certainly where an American news audience is concerned, the commitment of 3,000, or even 5,000, troops to the region does not sound like very much, used as it is to thinking in terms of tens or hundreds of thousands of troops in overseas action. And Americans who have seen their forces almost continuously engaged against or in Iraq since 1990--for thirty years--might not be too struck by a commitment that began only in 2013. And so what France is doing in the Sahel does not seem like anything out of the ordinary.

Still, it is worth remembering that if France remained militarily active in Africa after decolonization, with its bases numerous and its interventions frequent, it has during the last half-century been very sensitive to the scale and length of operations, especially where they have involved "boots on the ground." (By the end of the '60s France's sub-Saharan presence was down to 7,000 troops, total, and trended downward afterward.) The French government preferred brief actions emphasizing air power rather than ground troops (its '70s-era interventions sometimes referred to as "Jaguar diplomacy" for that reason), while its '80s-era confrontation with Libya over Chad was exceptionally taxing--scarcely feasible without considerable American support.

Indeed, for the whole generation afterward no French operation was comparable to the '80s action in Chad in its combination of scale and duration. The Sahel commitment is another matter. Given the difference in population and the size of its armed forces (one-fifth and one-seventh of the U.S. figures, respectively), France's deployment has been comparable to a commitment of 15,000-35,000 American troops, equal to what the U.S. deployed in Afghanistan for much of that war--and likewise fulfilling an evolving mission over a far vaster area. What had originally been an action to recover specific ground from a specific enemy (the recovery of northern Mali from the National Movement for the Liberation of Azawad) turned into a broader regional alliance/counter-terrorism operation (the Joint Force of the Group of Five Sahel/Operation Barkhane) against a multiplicity of groups extending across the Sahel, from Mauritania to Chad (an area the size of Western Europe)--overlapping with but separate from the ongoing peacekeeping mission in northern Mali that picked up after the original French operation, the "United Nations Multidimensional Integrated Stabilization Mission in Mali," which quickly acquired the dubious distinction of being the world's most dangerous peacekeeping operation. Moreover, in contrast with the direct clash-avoiding, selective, minimalist use of force seen against Libya three decades ago, combat, if comparatively low in intensity, has been a continuous feature of the operation, which increasingly looks like an indefinite commitment to the general policing of this vast and still unstable region.
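To make that scaling concrete, here is a minimal back-of-the-envelope sketch; the 3,000-5,000 deployment figures and the one-fifth/one-seventh ratios are simply those cited above, not independent data.

```python
# Back-of-the-envelope conversion of France's Sahel deployment into U.S.-equivalent terms,
# using the figures cited above: 3,000-5,000 French troops, with France's population roughly
# one-fifth and its armed forces roughly one-seventh the size of the U.S. figures.

french_troops_low, french_troops_high = 3_000, 5_000
population_ratio = 5   # U.S. population is roughly five times France's
forces_ratio = 7       # U.S. armed forces are roughly seven times France's

low_equivalent = french_troops_low * population_ratio    # 3,000 * 5 = 15,000
high_equivalent = french_troops_high * forces_ratio      # 5,000 * 7 = 35,000

print(f"U.S.-equivalent commitment: ~{low_equivalent:,} to {high_equivalent:,} troops")
# -> U.S.-equivalent commitment: ~15,000 to 35,000 troops
```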

Consequently it is not for nothing that a recent New York Times article called it "France's Forever War." One might add, moreover, that the Sahel military operation(s) are just one way in which French policy has become more militarized, with France pursuing new overseas bases, and talking about sixth generation fighter jets, and French Presidents even fantasizing about (and perhaps even taking small steps toward) reviving conscription. And that, in turn, bespeaks how the conduct of every last major power has become increasingly militarized this past decade, supposedly pacific "Old Europe" included.

Yes, Tony Blair Was a Neoliberal

Recently surveying Tony Blair's record as party leader and prime minister I saw that the pretense of Blair not being a neoliberal is just as risible as the pretense of Bill Clinton not being one, given not only his acquiescing in the profound changes wrought in English economic and social life by his predecessors (privatization, union-breaking, financialization, etc.), but his particular brand of budgetary austerity with its tax breaks and deregulation for corporations and stringency with and hardness toward the poor, his backdoor privatization of basic services such as health and education (with college tuition running up from zero into the thousands of pounds a year on his watch), his hostility to government regulation of business, his inane New Economy vision of Cool Britannia (groan), and the rest. (Indeed, examining his record, and reexamining that of his predecessors, I was staggered by how much of it I had seen before reviewing the comparable history in the United States.)

That said, even considering the ways in which he offended and disappointed many on the left, Blair's tenure can seem halcyon in comparison with what has been seen since. The economic disasters and brutal austerity seen since his departure from office really do seem to spell the final doom of the post-war welfare state: the shift to an American-style regime with regard to higher education, a slower but still advancing shift in the same direction with the country's health care system, the raising of the regressive Value Added Tax yet again to 20 percent, the renewed assault on the social safety net in yet another Welfare Reform Act (2012) that delivered Universal Credit and the bedroom tax, the hundreds of thousands of "excess deaths" in recent years traceable to cuts in care facilities, the plans to raise the retirement age (perhaps all the way to age 75, effectively abolishing retirement for most)--and all that before the current public health/economic crisis.

I admit that next to that Blair's tenure does not look quite so bad--until one remembers the extent to which his policies did so much to pave the way for all of it, in carrying forward what apologists for New Labour tend to think of as Conservative projects, and his general lowering of the bar for what constitutes tolerable government. That led to this. And that lends the question "Was Tony Blair's Prime Ministership neoliberal?" an additional, very contemporary, significance, the more so with the Labour Party, for the time being, still walking the Blairite road.

Tuesday, August 18, 2020

Was Tony Blair a Neoliberal?

In recent years figures like Jonathan Chait have made it fashionable to deny the existence or salience of neoliberalism as a concept--and this has especially been the case in regard to the term's use as a descriptor for the (nominally) left of center parties of the United States and Britain.

My personal experience of discussion with those who espouse this view showed differences among those making the case in these respective countries. Those I encountered on social media who denied that Bill Clinton was a neoliberal were never equipped with any facts, only bullying and abusiveness that gave the impression they were professional trolls intent on silencing anyone who publicly espoused such an opinion. That only underlined how they had nothing to say on behalf of a position that even slight familiarity with Clinton's actual policy record makes appear risible--a line of thought which had me soon finding that there was a scarcity of comprehensive, systematic and thoroughly grounded assessments of that record to make this clear.

The thought of, if only in a small way, redressing that deficiency led to my paper, "Was the Clinton Administration Neoliberal?" and after that a book examining the U.S. policy record from the 1970s on in more comprehensive fashion (The Neoliberal Age in America: From Carter to Trump), both of which endeavor to offer an explicit, testable definition of neoliberalism, and then systematically consider the record of the administrations in question against it.

Those who contested Blair's labeling as neoliberal, however, assumed a different tone--in part, I suppose, because they did have something to say for themselves. They would point in particular to his establishment of a minimum wage and other rights for British workers that, certainly by American standards, appear very generous; and his funding of social services, which, again by American standards, also appeared very generous at the time. It did, at least, compel me to think about what they said, the more so as I was less familiar with the finer points of Blair's policy record than I was with Clinton's, or for that matter, Margaret Thatcher's, or Harold Wilson's, or Clement Attlee's.

In that I do not think I was alone. My impression is that Blair's domestic record has been overshadowed to a considerable degree by his foreign policy record--above all his supporting the U.S. invasion of Iraq in 2003 and bringing Britain's forces into the invasion along with it even as longtime NATO allies France and Germany (to say nothing of other powers like Russia and China) forcefully and publicly opposed the move. Moreover, critical examination of Blair's ministership would seem to have been inhibited by, on top of the generally lousy job with these things done by public intellectuals these days, the extreme resistance of the neoliberals in the Labour Party, whose hostility to any change of course was made all too plain in the pathetic lows to which they descended in their campaign against Jeremy Corbyn.

Still, examine Blair's record I did. And in doing so I saw that the pretense of Blair not being a neoliberal is just as risible as the pretense of Clinton not being one, given not only his acquiescing in the profound changes wrought in English economic and social life by his predecessors (privatization, union-breaking, financialization, etc.), but his particular brand of budgetary austerity with its tax breaks for corporations and stringency for the poor, his backdoor privatization of basic services, his hostility to government regulation of business, his embrace of flaky New Economy thinking, and the rest. (Indeed, examining his record, and reexamining that of his predecessors, I was staggered by how much of it I had seen before reviewing the comparable history in the United States.)

You can check out my examination of Blair's record--which also includes an equally detailed examination of Margaret Thatcher's record--here at the web site of the Social Sciences Research Network.

Thursday, July 30, 2020

Gripen vs. Viggen and the Rising Cost of Fighter Aircraft

Recently writing about the Gripen I found myself thinking again about the lengthy, rapid rise in the cost of fighter aircraft, and how it constrained Sweden's ambitions in this area from the start.

As I observed in the prior post, the Swedish program for a fourth-generation fighter aimed only for a light fighter, and was content to produce an aircraft delivered only fairly late in that cycle (the Swedish air force taking its Gripens in the '90s, when the U.S. was already flight-testing the Raptor, and the Eurofighter Typhoon was similarly being tested).

The country had been more ambitious when procuring the earlier, third-generation Viggens. It went for a medium fighter, not a light fighter, one reflection of which is that the later jet actually had a lighter maximum payload than the earlier plane (5,300 kg to the Viggen's 7,000 kg). It might be acknowledged that with its first deliveries made only in 1971 the jet can look like a relative latecomer compared with the F-4 Phantom (1960), but it still came into service just behind the MiG-23 (1970) and a little ahead of the better-known Mirage F-1 (1973). Moreover, if there were earlier third-generation-type jets, the Viggen was still in many ways a cutting-edge fighter, incorporating many relatively novel features, including the terrain-following radar and integrated circuit-based airborne computer just starting to appear in tactical aircraft at the time, a then ground-breaking canard design and thrust reverser, and, in its afterburning turbofan engine, look-down/shoot-down capability and multi-function displays, technologies we associate with fourth-generation jets. (In fact, it does not seem unreasonable to think of the Viggen as a generation 3+ or 3.5 plane rather than just a gen-3.)

None of that, of course, detracts from the quality of the Gripen, which was a well-regarded aircraft at the time of its introduction, and has notably been upgraded in a number of respects, with the latest "E" version having a supercruise-capable engine and an AESA (active electronically scanned array) radar, turning generation 4 into generation 4+, while some impressive claims have also been made for its electronic warfare systems (with the most bullish arguing for them as an acceptable substitute for full-blown stealth capability). Still, the shift in strategy does reflect the way even affluent, highly industrialized nations with good access to the world market in the required inputs have been pinched by the mounting cost of this kind of program--which has already seen the biggest air powers in the world, with fifth-gen jets in service, buying upgraded fourth-generation jets to fill out the ranks--while raising additional question marks about just how "sixth generation" the coming sixth generation of jets will actually be.

Thursday, July 23, 2020

How Could Sweden Afford the Saab-39 Gripen Fighter Program? A Postscript

In the end the answer to the question, "How Could Sweden Afford the Saab-39 Gripen Fighter Program?"--how a small (if rich and industrially advanced) country could afford its own fourth-generation fighter--is that there is a significant extent to which it did not afford it. The country's government ultimately counted on others to develop most of the requisite technology, which it accessed via licensing and outsourcing; and then where the final product was concerned, on others to share the cost by buying their own copies. Additionally, even that required a willingness to settle for an aircraft that, while very good, did not represent the outer limit of its generation's capability or the cutting edge of fighter design when it appeared (generation 5 was just beginning to come online when the first deliveries were made), while the country committed a disproportionate share of its defense resources to the program, as it could only do because of its specific geopolitical situation. (Had Sweden been obliged to fund a bigger navy, the competition for resources might have been too much.)

That there was a considerable gamble ought not to be overlooked, with the planes a very long-term investment that could easily have suffered had technological change been more aggressive (even now the plan is to have them flying into the 2040s), or had the export market been less open. (It is worth remembering that the Cold War was heating up during the program's early days--that the preceding Saab-37 Viggen completely failed to line up foreign orders--and that by the '90s the export market was very uncertain.) Still, in the end it seems to have been a success.

Saturday, July 18, 2020

How Could Sweden Afford the Saab-39 Gripen Fighter Program?

The exploding cost of fighter aircraft has made programs to build an up-to-date fighter domestically decreasingly affordable for even the largest and richest countries, with even G-7 states increasingly forgoing that course. They find that, given the resources at their disposal and the diseconomies of scale of producing an aircraft they alone might end up using, it just does not pay to go it alone.

Naturally I have found myself wondering how Sweden--a nation which, however affluent and technologically advanced, still numbered a mere 10 million people, and did not commit a drastic proportion of its national income to defense spending during the relevant period--managed to produce a well-regarded fourth-generation fighter, the Saab-39 "Gripen," and to do so in apparently quite cost-effective fashion (with Gripen Cs recently marketed for as little as $30 million).

Four factors seem to have made the difference.

1. Sweden Spent Less Than Other States, but Also Differently--Giving it Room for One Fighter Program if it Prioritized it (and it Did)
For comparison purposes, let us use Britain. That country had a GDP six times Sweden's, and a defense budget eight times as big in 1979.1 Yet Britain had already given up on building its own current-generation fighters all by itself, relying on partnerships with other European countries to build its next generation of such planes--notably Germany and Italy in the Panavia Tornado program.

However, it has to be remembered that Britain also had numerous expenditures Sweden did not--on a nuclear arsenal and large navy it constructed domestically, and on a global network of bases and overseas garrisons (not least the big one in West Germany), all bound up with a complex array of international commitments.

Sweden did not have these expenses, instead being oriented to a fully conventional defense of its limited national territory, while it might be added that Sweden placed a very high priority on its air force. While, as noted before, Sweden was a much smaller country than Britain in the relevant ways, it operated almost as big a fleet of combat aircraft (still 400+ jets in the late Cold War, compared with the 500 or so Britain was generally operating, as the RAND Facing the Future study on the Swedish air force remarked at the time).

It might also be added that even where procurement was concerned Britain insisted on an array of different combat aircraft, pursuing besides the Tornado, which was coming in fighter and strike versions, the Anglo-French Jaguar and the idiosyncratic VTOL Harrier (while operating a sizable fleet of F-4 Phantoms incorporating British engines and other components, and already thinking about what was to become the Eurofighter Typhoon). Had Britain not pursued so many types it would have had an easier time affording its own design. And that was what Sweden did, going with just the Gripen.

2. Sweden Was Content to Let Others Go First, and Settle for Less Than the Maximum Possible Capability
It is worth noting that besides going for just one fighter program instead of many such projects at once, Sweden did not strive for the ultimate. The Gripen is, as justifying the exclusive focus on it required, a multirole aircraft. However, unlike the twin-engined, swing-wing Tornado with its low-level deep-penetration capability and high payload, the Gripen was a single-engined, multirole fighter of shorter range and lighter armament. To put it in U.S. Air Force terms, it was more F-16 than F-15, with all that implied with regard to price.

Of course, even if the jet is more F-16 than F-15, the Gripen is still a fourth-generation jet, and again, a well-regarded one. Yet, consider the timing of its appearance. The U.S. Air Force took delivery of its first true fourth-generation jet, the F-15, in 1974. As indicated above, the Gripen program did not even begin until five years later, and the Swedish Air Force did not receive its first production copy of the aircraft until 1993, fourteen years after that--by which time the U.S. Air Force was already flight-testing the fifth-generation F-22. In a less dramatic way it is the same story with the British and their partners, who had their Tornado going into production just as the Gripen was emerging as a concept, while the Typhoon was to make its first flight as the Swedish Air Force formed its first Gripen squadrons.

In short, the Swedish government was ready to wait fifteen to twenty years longer than others to get even a light fourth-generation jet, and in the meantime make do with third-generation Saab-37 Viggens. Saving money, after all, was a necessity for even the Swedes at this stage in the history of fighter development, and walking a beaten path does that--not least because of one thing that did much to bring down costs, namely that

3. Sweden Outsourced and Licensed the Necessary Technology Where Feasible Rather Than Making Everything From Scratch
While the Gripen is Swedish-made, it is not all-Swedish, with crucial components developed jointly, or derived from other, established products, for the sake of cost (as much as a third of the aircraft sourced from the U.S. alone). The Gripen's first engine is an obvious example. While constructed by Volvo, it is a licensed derivative of the engine that General Electric made for the F-18. (It may also be worth noting that Saab had prior experience developing fighters in such a fashion, key systems on the prior Viggen being similarly sourced--and that the stress on minimizing cost can be contrasted with Japan's emphasis on developing technical know-how in the F-2 program, which produced a very advanced but also very costly F-16 derivative.)

4. Sweden Banked Big on Exports
Finally, in addition to its readiness to focus its resources on this one program, its moderation in its demands, and its willingness to use technology developed by others, Sweden counted heavily on the prospect of foreign sales, which did help make the Gripen project more plausible financially. Of course, in contrast with globally active, NATO-affiliated, military aid-providing powers like Britain or France, let alone the U.S., Sweden was at a disadvantage in the export market, and the Gripen's successes there are, at least thus far, a far cry from those of other fourth-gen single-engine jets like the French Mirage 2000 (almost 270 sold to eight different foreign customers) or the generation's most popular fighter, the F-16 (with nearly two thousand jets serving in some twenty-five air forces alongside the vast American fleet). Still, the fighter has already found a number of customers (Hungary, Czechia, South Africa, Thailand, Brazil, the last by itself looking to buy 108 aircraft), with as many as two dozen reportedly ongoing bids holding out the hope of still more (two of them to Canada and India, which could by themselves take another 200 aircraft, and make the Gripen a bestseller yet).

The gamble, in short, looks as if it is paying off. Still, it is worth noting that Sweden, like most countries, sat out the pursuit of a fifth generation of fighters, making do with upgraded Gripens--while, in apparently taking an interest in the sixth generation, it is not going it alone, having joined the British-led "Tempest" program. I admit to not being bullish on that program, just as I have not been bullish on the sixth-generation fighter given the technological claims made for it (initially, at least). However, it does seem safe to say that by this point the strategy that let Sweden build a fourth-generation fighter has long since run up against its limits.

1. As measured by the UN in 2015 U.S. dollars in its current National Accounts data set, Britain had a GDP of $1.33 trillion to Sweden's $232 billion, while spending a higher proportion of its GDP on defense--4.2 percent versus 3.1 percent--giving it a budget of $55 billion to Sweden's $7 billion.
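As a quick check of the arithmetic in this footnote, here is a minimal sketch; the GDP figures and defense shares are simply those quoted above, and the small differences from the rounded dollar figures in the text are rounding.

```python
# Quick check of the footnote's figures (2015 U.S. dollars): GDP times defense share
# gives the defense budgets, and the ratios behind "six times" the GDP and
# "eight times" the budget.

gdp = {"Britain": 1.33e12, "Sweden": 232e9}           # GDP as cited in the footnote
defense_share = {"Britain": 0.042, "Sweden": 0.031}   # share of GDP spent on defense

budgets = {country: gdp[country] * defense_share[country] for country in gdp}
for country, budget in budgets.items():
    print(f"{country}: ~${budget / 1e9:.0f} billion defense budget")
# Britain: ~$56 billion, Sweden: ~$7 billion (the text rounds to $55 billion and $7 billion)

print(f"GDP ratio: ~{gdp['Britain'] / gdp['Sweden']:.1f}x")            # ~5.7x, i.e. "six times"
print(f"Budget ratio: ~{budgets['Britain'] / budgets['Sweden']:.1f}x")  # ~7.8x, i.e. "eight times"
```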

Friday, May 1, 2020

THE MILITARY TECHNO-THRILLER: A HISTORY



"[A] multi-century tour de force . . . comprehensive . . . easily readable, making it the best of both worlds . . . a lot of fascinating insights . . . an excellent book that examines an overlooked genre through a variety of interesting perspectives in a highly readable way. I cannot recommend The Military Techno-Thriller: A History enough for fans of the genre." -Fuldapocalypse Fiction

THE MILITARY TECHNO-THRILLER: A HISTORY takes a close look at this widely read but still little studied genre, tracing its origins from the Victorian-era invasion story, to its 1980s heyday as king of the bestseller list in the hands of authors like Tom Clancy, down to today, considering its interaction with other genres and other media throughout. In the process, this book also tells the larger story of how the ways in which we think about, imagine and portray war evolved during the last century to bring us to where we are now.

Now available in print and e-book formats from Amazon and other retailers.

Get your copy today.

Monday, March 2, 2020

THE TECHNOLOGICAL SINGULARITY AND THE TRAGIC VIEW OF LIFE

Originally published at SSRN on January 29, 2019

The theorizing about what has come to be known as the "technological Singularity" holds that human beings can produce a greater-than-human sentience technologically, and that this may come within the lifetimes of not only today's children, but much or even most of today's adult population as well.1 Much of this anticipates that this will be part of a larger trend of accelerating technological change, with advances in computers matched by advances in other areas, like genetic engineering and nano-scale manipulation of matter; or be a source of acceleration itself as superhuman artificial intelligences perform technological feats beyond human capability.2 In either case, as the term "singularity" suggests, the consequences become unpredictable, but a common view is that shortly after the arrival of the big moment, we will see the most fundamental aspects of human existence—birth, growth, aging, senescence, mortality; the limits of an individual consciousness' functioning in space and time; material scarcity and the conditions it sets for the physical survival of human beings—altered so radically that we would become transhumans, on the way to becoming outright posthumans. Those who describe themselves as Singularitarians expect these changes to be not merely profound, but a liberation of the species from the constraints that have cruelly oppressed it since its first appearance on this planet.

All this, of course, is mind-bending stuff. Indeed, no one alive today can really, fully wrap their mind around it, even those most steeped in the idea. Still, the difficulty of the concepts lies not only in the complete alienness of such conditions to our personal and collective experience, but also in their flying in the face of the conventional expectations—not least, that even if we have grown used to technology changing constantly, the really important things in life do not, cannot change. Indeed, passive acceptance of the world as it is; a view of it as unchanged, unchanging and unchangeable; and given that this applies to a great deal that is unquestionably bad, an ironic attitude toward the prospects for human happiness; are commonly equated with "wisdom." And rejection of things as they are, a desire to alter them for the better, a belief that human beings have a right to happiness, are likewise equated with not just the opposite of wisdom, but the cause of disaster.

This is all, of course, a terribly bleak and unattractive perspective to any but the most complacent of us—and not altogether rational, inconceivable as it is without the idea that the cosmos is somehow metaphysically rigged against human beings. Why should this morbid outlook enjoy such force? Especially in a modern world where it has been proven that life does indeed change? And, frankly, that meaningful improvement in the terms of existence of human beings is achievable?

The Tragic View
One obvious reason is the weight of a very old, long tradition that has deeply suffused just about every human culture, Western culture by no means least. In ancient times, as people began to think about the world around them, they could see that the world was not what human beings would like it to be. Life was poor, nasty, brutish, short—with hunger, illness, violence, insecurity (to say nothing of innumerable lesser hardships and frustrations) utterly saturating their existence. Childbirth was dangerous to all concerned, and few of the children born made it all the way to adulthood. A more settled existence brought somewhat greater affluence and security, but this was only relative, and purchased at a high price in toil and oppression, with daily existence defined by a more regimented routine of labor, and a more stratified society. The many worked for the enrichment of a few—and even the comparative luxury in which the few lived only exempted them from so much. Even those who ate well suffered their share of disease in an age of primitive medicine, and violence too when being part of a ruling caste meant being a warrior. And of course, even the most sheltered existence meant aging and death.

It also seemed to them that there was not much they could do about it. The faintness of their understanding of how the world about them worked, the primitiveness of the means to hand that necessarily followed from this, the dimness of their awareness that knowledge of the world could be applied to the end of enlarging economic productivity, meant that one could picture only so much improvement in the economic output that in hindsight we know to be key to deep or lasting material progress.

The crying out in anguish against all this was the birth of tragedy. One can see it in that oldest of stories, the Epic of Gilgamesh. Gilgamesh, seeing a bug fall out of the nose of his fallen friend Enkidu (the image crops up again and again in the poem) is horrified by the reality of death, sets out in quest of immortality—and all too predictably fails to achieve it, falling asleep while a snake gobbles up the herb that would have let him live forever before he can take it. A very large part of higher culture has been a development of this sensibility. In the Abrahamic religious tradition we have the temptation of Adam and Eve, original sin, the Fall, expulsion from Eden, a punishment compounded by the familiar limits of the "human condition": traumatic birth, a life spent in toil, death. So does it likewise go in the Classical tradition, where humans, whose Golden Age lies in the past, have their lives spun out, measured and cut by the Fates, and the details of those lives not decided by the Fates determined by the whims of gods intent on keeping them humble.3 (Poseidon could not keep Odysseus ever from getting home—outside his purview, that—but he did see that it was a ten year odyssey, and did a good many worse things with a good deal less reason; while "wise" Athena was not so far above petty jealousy as to refrain from turning the human who bested her as a weaver into a spider.)

Eventually developed, too, was an element of compensation for all this. Human beings suffer in this world—but if they bow their heads and obey, they will eventually be blessed in this world, or if they don't get so blessed, find something better in another one on the other side of death. And in at least some traditions, human suffering as a whole does end, a Millennium arriving and all well with the world after that.

Still, the connection between good behavior and reward was necessarily fuzzy, and even in those traditions notes of doubt about the rightness of all this are evident. As God inflicts on his exceptionally faithful servant Job one horrific suffering after another, simply for the sake of a bet with the Devil, Job, when he has had all he can take (he is huddling in shit to keep warm), cries out "Why?"

Oedipus, approaching his death (in Oedipus at Colonus, the most accomplished but least-read of the trilogy by Sophocles), wonders at the same thing. After all, killing a man who challenged him in that sort of roadside confrontation and marrying a queen were only turned from incidents in a tale of heroic adventure into cosmic crimes by the fact that the man was his father, the woman his mother, both of which details were totally unknown to him—while the whole sequence of events was triggered by his father's attempt to evade punishment for an unspeakable crime of his own by ordering his own son's infanticide. Where was the justice in that?

Of course, no satisfactory answer is forthcoming to such questions. Indeed, to modern, rational eyes, tales like those of Job and Oedipus are about the subjection of human beings through no fault of their own to horrors by the will of arbitrary, cruel gods, whose right to do such things is a simple matter of their having the power to do it and get away with it.

And as it happens, there are also doubts that things really have to be this way. The idea that humans could become like those gods, acquire the power to be like them, and even overthrow them, but that this was forbidden to them and they were slapped down when they tried, cropped up again and again. Gilgamesh, after all, may not have attained his goal but he did come very, very close, only at the very last minute losing the herb that would have made him live forever. In the Garden of Eden the sin of Adam and Eve was to eat of the fruit of the tree of Knowledge of Good and Evil, knowledge which could make them like gods. Zeus begrudged man the gift of fire, and punished man's benefactor Prometheus by chaining him to Mount Elbrus and having a vulture tear out and eat his liver, after which it grew back at night so that it could be torn out and eaten again the next day, and the day after that, and the day after that . . . but human beings kept the knowledge of fire nonetheless.

All the same, these admittedly not insignificant details are exceptional, contrary hints, and not more than that in a narrative that, pretty much always, reiterated again and again passivity and awe before the majesty of a design beyond our ken.

"KNOW YOUR PLACE!" it all thunders.

And by and large, it was exceedingly rare that anyone carried the thought further. There was, after all, much more emphasis on what had been and what was than what might be—and little enough basis for thinking about that. Even among the few who had leisure to think and the education and associations to equip them with the best available tools with which to do it, mental horizons were bounded by the crudity of those tools. (Two thousand years ago, syllogisms were cutting-edge stuff.) By the narrowness of personal experience. (Communities were small, movement across even brief distances a privilege of very few and even that difficult and dangerous, while the "Known World" of which people even heard was a very small thing indeed.) The slightness of the means of communication. (Illiteracy was the norm; books hand-copied on papyrus and parchment and wax tablets were rare and expensive things; and the more complex and less standardized grammar and spelling, the tendency to use language decoratively rather than descriptively, the roundabout methods of argument and explanation—such as one sees in Socratic dialogue—likely made deep reading comprehension a rarer thing than we realize.) And even the brevity of life. (Life expectancy was thirty, imposing a fairly early cut-off on how much the vast majority of people could learn, even if they had the means and opportunity.)

Moreover, the conventional ideas enjoyed not only far more powerful cultural sanction than they possess now, through the force of religious belief and custom, but were backed up by all the violence of which society was capable. This was the more so not only because the powerful found it easier to take such views (it is one thing to be philosophical about man's condemnation to hard toil when one is a slave in the fields, another when one is a wealthy patrician who has never done any sitting on his shaded porch enjoying a cool drink), but because, within the limits of the world as they knew them, their situation was quite congenial to them.

Carroll Quigley, who wrote at some length about the conflict between democratically inclined, socially critical "progressives" and oligarchical "conservatives" in ancient Greece in The Evolution of Civilizations, observed that the latter settled on the idea "that change was evil, superficial, illusory, and fundamentally impossible" as a fundamental of their thought. This applied with particular force to terms of social existence, like slavery, which they held to be "based on real unchanging differences and not upon accidental or conventional distinctions." Indeed, the object pursued by those who would have changed such things—a redress of material facts—was itself attacked by the associated view that "all material things" were "misleading, illusory, distracting, and not worth seeking."

In short—the world cannot be changed, trying to change it will make life even worse than it is, and anyway, you shouldn't be thinking about the material facts at all. This anti-materialism went hand in hand with a denigration of observation and experiment as a way of testing propositions about the world—an aristocrat's musings superior to actually seeing for oneself how things actually stood. With the concrete facts of the world trivialized in this way, the conventional wisdom handed down from the past was that much further beyond challenge (while, of course, this outlook did not exactly forward the development of technological capability). Ultimately the promotion of these ideas by the "oligarchs" (and their rejection of the ideas not to their liking), helped by the primitiveness of communication (works the rich would not pay to copy did not endure), was so effective that, as Quigley noted, "the works of the intellectual supporters of the oligarchy, such as Plato, Xenophon, and Cicero" have survived, but "the writings of the Sophists and Ionian scientists," "of Anaxagoras and Epicurus," have not.

In the wake of all this it may be said that philosophy was less about understanding the world (let alone changing it) than accommodating oneself to it—by learning to be better at being passive. One picks up Marcus Aurelius' Meditations, and finds the celebrated work by the famous emperor-philosopher to be . . . a self-help book. And rather than a philosophy concerned with nature or politics, the metaphysics of the thoroughly anti-worldly Plotinus (who taught that the ultimate good lay in one's turning away from the low world of material sense-reality to the higher one of the spirit as the path to ecstatic union with the Divine) was arguably the most influential legacy of the latter part of the Classical era.

The pessimism of the Classical outlook, particularly by way of Plotinus, did much to shape and even blend with the Abrahamic tradition as Jewish and early Christian thinkers used Greek philosophy in interpreting religious texts—while the work of the Greeks endured as the cornerstone of secular learning in its own right. Of course, Classical civilization crumbled in Western Europe. Yet, in its way that strengthened rather than weakened the hold of such thinking. Of the major institutions of the Western Roman Empire, only the Church survived and flourished, playing a larger role than ever in the centuries that followed. Meanwhile, amid these "Dark Ages," there was a nostalgia for ancient times, and with it a propensity for exalting their practical achievements as unsurpassed and unsurpassable. The Greeks, the Romans, were held to have all the answers, and it was thought best to consult them rather than try to find out new things, the means for which Classical philosophy, of course, marginalized. Along with the prevailing religiosity, this whole outlook directed philosophers' attentions away from the workaday world—to the point of being famously satirized as debates over how many angels could dance on the head of a pin. And of course, if reason said one thing and religion another, then one had to accept that religion was right—or face the Inquisition. Unsurprisingly, much energy went into attempts to reconcile the world's very real horrors with the existence of a divine plan by an all-good and all-powerful Supreme Being—pessimism about whether things could have been or could be better confusingly passed off as the optimism that this is the "best of all possible worlds."

Tragedy, Modernity and Reaction
So things went until the Renaissance, and the flowering of humanism with it, and the intellectual developments that succeeded it. In the seventeenth century thinkers like Francis Bacon and René Descartes not only explicitly formulated and advocated a scientific method based precisely on the value of study of the material world. They also declared for the object of uncovering all nature's secrets and applying the knowledge to the end of "relieving man's estate." Moreover, such thinking was quickly extended by others to social, economic and political life. Opposing barbarous custom and superstition, they identified and defended the rights of all individual human beings, enjoyed specifically because they are human beings (life, liberty, property), extending to the right to choose their own government—even to rebel when an existing government failed to perform even the bare minimum of its duty (as Thomas Hobbes did), or became repressive (as John Locke did).

In short, the prospect of positive, meaningful, humanly conceived and controlled change was raised explicitly and forcefully by the Scientific Revolution, by liberalism, and by the Enlightenment more generally—and raised it successfully. However, that scientific inquiry, applied science and political liberalism flourished in modern times as they did not in the ancient world did not mean that they were unopposed. The old ideas never ceased to have their purchase, and certainly vested interests in the modern world (as with Churchmen concerned for their influence and privileges) could not look on such talk with equanimity any more than their forebears did. Conservatives threatened by all this clung to tradition, to the religious institutions that usually sided with it, to the unchanging verities of the "ancients" over the reason and science of the "moderns." The now-obscure English theorist of divine right, Robert Filmer, insisted in Patriarcha that kings were granted by God paternal rights over their peoples, which extended to the power of life or death--and that revolutionary, democratic alternatives were doomed to short, bloody and chaotic lives ending in failure.

Filmer's arguments (which Locke eviscerated in his First Treatise of Government) were belied by the unprecedented peace and prosperity that England enjoyed after the 1688 Revolution. However, conservatives responded to the Enlightenment with the Counter-Enlightenment, identifying reason and change more generally with disaster, stressing the original sin Christianity held tainted human beings, and even rejecting the idea of the individual human being as a meaningful unit of social analysis.

Indeed, it became common to oppose to the universalism of the Enlightenment a politics of identity in the manner of Joseph de Maistre (who famously remarked that he had met Frenchmen, Italians, Russians, but knew nothing of "man"), with identity usually defined in terms hostile to progressive ideas. Reason, secularism, democracy, were commonly characterized by such thinkers as alien, reflecting the character of another people—typically a character held to be less virtuous, less "pure," less "spiritual" than "our own." If such things worked at all, they said, it was only for those less virtuous, less pure, less spiritual people; certainly they cannot work for us, which is assuredly a good thing as our traditionalism, religiosity, monarchism, serf-lord relationships and the like express and sustain a greater wisdom than those Others can ever know, and which importing their ways could only corrupt.

Going hand in hand with this was much romanticizing of the rural, agrarian element of the population as a repository of those older ways--unlike these rootless city types, especially the ones with a modicum of "book learning," which seemed not an altogether good thing. Worst of all in their eyes were those "overeducated" types who, "educated above their intelligence," perhaps defectively born with too much brain and too little body, too little blood, had become alienated from their roots and their natural feelings--internal foreigners. And indeed the visions of reform to which they so often inclined, they said, showed that while they spoke of the people they did not know, understand or respect them--and said that what they needed most of all was some hardship and toil among the lower orders to teach them "the real world." (Thus does it go in Leo Tolstoy's War and Peace, where Pierre Bezukhov goes from Western-educated cosmopolitan intellectual to apostle of the peasant Karataev, who passively accepts whatever life brings until he dies, marching in the French army's prisoner train as it retreats from Russia.)

All this naturally converged with a whole host of prejudices, old and new, exemplified perhaps in Victorian-era theorists of "Aryanism," who identified conservative, traditionalist stances (an acceptance of the unchanging nature of things, idealism over materialism, etc.) with spiritually superior Aryan cultures, and liberal/radical, modern outlooks with inferior "non-Aryans"—even as different theorists drew up mutually exclusive lists of who belonged under each heading. Thus one had the absurdity of German and Russian nationalists each insisting that their own country was the purest bearer of the Aryan legacy—while the other nation was not Aryan at all.4

Of course, this reaction did not turn back the clock, the Old Regime never returning and those who genuinely wished for such an outcome becoming something of a lunatic fringe, but it still had its successes. Religious, nationalistic, traditionalist, anti-intellectual and "populist" appeals on behalf of the status quo and its greatest beneficiaries helped make the spread of formal democracy a less threatening thing to the contented. Meanwhile, as conscious, explicit belief in specific religious doctrines weakened, what might be called the "religious frame of mind" remained, plainly evident in the phrases of everyday language, like the need "to have faith," the idea that "things are meant to be" or "not meant to be," and of course, that there are "things man is not meant to know." (Faith in what, exactly? If things are "meant" or "not meant," just who—or Who—is doing the "meaning"?)

And of course, as the old guard of throne, aristocracy and established church declined, the bourgeoisie that had once been revolutionary, now enjoying power and privilege, and anxious about the lower orders and the social questions their existence raised, became conservative in its turn, likewise inclining toward that idea of change as "evil, superficial, illusory, and fundamentally impossible," and toward reason and its prescriptions as things best not pushed "too far." It was less of a stretch than might be imagined—the bourgeois outlook, after all, being focused on the individual, specifically an ethic of individual success-striving and individual moral responsibility, with the existing social frame so utterly taken for granted that it did not seem to exist for them at all. (Indeed, as Margaret Thatcher made clear, at times their politics has explicitly denied that it exists.)

Unsurprisingly, those favoring constancy over change found more rationalistic-seeming supports for their outlook. The dark view of radical social change taken by the French Revolution's enemies, which identified the French Revolution not with the Declaration of the Rights of Man or the abolition of feudal oppressions but with guillotines, Napoleon and Restoration, colored the popular imagination of the event—driving home the idea that even if revolution was not a crime against God, it was still bloody, messy and doomed to failure.

And this was not simply founded on a vague sense of society's machinery as a complex thing not easily tinkered with, or insecurity about whether the state of the art in social engineering was up to such a challenge (the position of a Karl Popper, for example), but on a whole host of newer rationales for the unchangeable nature of the world, the ineradicability of the evils in it, the obvious implications for human happiness, and the wisdom of letting everything alone. Like Malthusian scarcity, which attributed poverty and its attendant miseries not to economic or social systems, but to "the passion between the sexes." And its extension in the Social Darwinism of Herbert Spencer, in which perhaps God did not assign different people different stations, but Nature did in its uneven distribution of "fitness," and the still more uneven rewards accruing to it. Or the Nietzschean will-to-power. Or Freudian psychoanalysis, which declared the repression of basic human drives (the pursuit of sex, the aversion to work as we commonly think of it) essential to civilized life.

Or postmodernism, ostensibly secular adherents of which speak of "the problem of evil" in the mystical tones of Medieval theologians, with the subject-object separation of their epistemology an apparent stand-in for the Fall, while in their attachment to identity politics they echo de Maistre's remarks about never having met Man, all of which adds up to a hostility to "grand narratives" as ferocious as any other attack ever launched on the idea of progress—in as thoroughly obscurantist a language as any clergy ever devised. And of course, there are more popular streams of thought, not least the self-help culture, which promotes a conservative idealism scarcely distinguishable from that of the ancient Greek oligarchs. (You can't change the world! There is no world! There's just you, and how you look at it, so change yourself instead!)

All of this so thoroughly saturates our culture that there is no getting away from it—even for those most educated for the task of critical thought. The age, ubiquity, and association of such an outlook with those most prestigious philosophical and literary texts that have never ceased to be the cornerstone of an education in the humanities (from Aristotle to Shakespeare, from Milton to Tolstoy) is itself enough to endow such an outlook with enormous prestige, to which few intellectuals are totally immune. (Indeed, it was a nineteenth century radical of some note who observed that "the past weighs like a nightmare on the brain of the living"—and a twentieth century radical who remarked that in his youth England and its empire were ruled by "men who could quote Horace but had never heard of algebra.") That there is never a shortage of rationalists who, in disappointment or despair personal or political—or simple self-interest—repudiate their old beliefs to take up exactly opposite ones at the other end of the political spectrum also encourages the tendency. (After all, just as much as ever, intellectual, cultural and especially political life remain the preserve of the privileged, who remain inclined to conservatism, while privilege remains ready to place its prestige and wealth behind conservative thinkers and thought.)

Little wonder, then, that ultra right-wing postmodernism, passed off as daring leftishness to the confusion of nearly all, has become the conventional wisdom of the humanities, the social sciences, and much else, while popular culture serves up a diet of pessimism in one movie about science and technology going wrong after another. We may not know what will happen, exactly, but we are usually safe in assuming that something bad will come of that experiment, that invention. Safe in assuming that if there is a robot, it will rebel. And with the far, far more radical prospects opened up by the Singularity, the dread is commensurately greater.

Those who would discuss the subject rationally have to realize that they are up against all this baggage. And if they wish to persuade the public of the positive possibilities that our technology has already opened up, and might continue to open up in the coming years, they must be prepared not only to promise the realization of long-thwarted human hopes, but to challenge the colossal weight of millennia of dark, misanthropic irrationality with the still more overpowering force of reason, too little appreciated these days, but as great as it ever was.

1 The term "technological Singularity" is generally credited to Vernor Vinge. Other writers in this area include Hans Moravec and, perhaps most famously, Ray Kurzweil. See Vernor Vinge, "The Coming Technological Singularity: How to Survive in the Post-Human Era," VISION-21 Symposium, NASA Lewis Research Center and the Ohio Aerospace Institute, 30-31 Mar. 1993; Hans Moravec, Mind Children: The Future of Robot and Human Intelligence (Cambridge, MA: Harvard University Press, 1990) and Moravec, Robot (New York: Oxford University Press, 2000); Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Viking, 1999) and Kurzweil, The Singularity is Near: When Humans Transcend Biology (New York: Viking, 2005).
2 As Irving John Good put it in an early paper on the subject, "the first ultraintelligent machine is the last invention that man need ever make." Irving John Good, "Speculations Concerning the First Ultraintelligent Machine," in Franz L. Alt and Morris Rubinoff, eds., Advances in Computers 6 (New York: Academic Press, 1965), 31-88.
3 Writing of the ancient Greeks, John Crowe Ransom characterized their view of the world as something that not only "resists mastery, is more mysterious than intelligible . . . a world of appearances," but also "perhaps . . . more evil than good." John Crowe Ransom, The New Criticism (Westport, CT: Greenwood Press, 1979), 335.
4 German theorists, of course, excluded the Slavs from the Aryan category, while the Russian Slavophile Aleksey Khomyakov regarded the Slavs as Europe's true Aryans and the Germans as non-Aryan "Kushites."
