Friday, October 31, 2008

In The News

Remember the turn-of-the-century calculation by the World Wildlife Fund that we're using the resources of 1.2 Earths? The picture's been getting worse, not better. According to their latest Living Planet report, discussed in this article in the New Scientist, we're now up to 1.4 Earths, and on track for two by the 2030s. (In case you're wondering, the UN has generally come to the same conclusions about these things.)

On another, weirder note: a Sino-American team has discussed new developments pertaining to an amnesia ray. Make of that what you will.

Monday, October 27, 2008

New Article Out in Space Review

My newest Space Review article, "Revisiting Island One," has just been published. Click on the link to check it out.

In The News

A new article on the Scientific American web site about geoengineering as a response to global warming. Not something to take lightly by any means, but the pathetic rate of progress on this issue (we've known about this since the 1960s) may eventually leave us with this as the least-worst choice. And the debate about this approach is nowhere near as developed as it should be.

On a different note--real-life Sentinels. (In case you didn't get that, it's an X-Men reference.)

Friday, October 24, 2008

Societal Complexity and Diminishing Returns in Security

By Nader Elhefnawy

Originally published in International Security 29.1 (Summer 2004), pp. 152-174
Copyright 2004 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology

Discussions of security in recent years have frequently concerned the ways in which the complexity of technologically advanced societies can be an Achilles' heel. Various observers and real-life events have drawn attention to how relatively unsophisticated threats against air travel, power grids, computer networks, financial systems, lean manufacturing processes, and the like can have devastating effects. There has been little effort, however, to approach the problems that complexity poses for security in a comprehensive way, though this is not to say that work applicable to such a purpose has failed to appear.

In The Collapse of Complex Societies, Joseph Tainter argues that societies can reach and pass a point of diminishing marginal returns to investment in societal complexity.1 Eventually, this eats away at their slack, which can be understood as that "human and material buffering capacity that allows organizations and social systems to absorb unpredicted, and often unpredictable, shocks." In other words, slack is the untapped human and material resources for pursuing new endeavors or meeting emergencies. This lack of slack leaves these organizations and systems increasingly vulnerable to collapse as a result of such a shock, for example, military invasion.2 Although concern with diminishing returns in the areas of defense and economics has long been a part of discussions of international security, appearing in the theoretical summations of authors such as Robert Gilpin and John Mearsheimer, Tainter's work (and the field of complexity studies in general) have had little impact on security studies to date.3

This article argues that security is becoming an area of diminishing returns to complexity for today's advanced societies because of the diminishing returns from investments in complexity in general, the risks posed by the interconnections this growing complexity creates, and the rising cost of security forces. Before proceeding to the argument's details, however, some clarification of what "complexity" means, why it rises, and why it can lead to diminishing returns is in order. According to one definition, complexity refers to "asymmetric relationships that reflect organization and restraint" between the parts of a system.4 As such, the characteristic features of complex systems are their composition from a large number of components with a dense web of connections between them; a high degree of interdependence within them; an openness to outside environments, rather than their being self-contained; "synergy," meaning that the whole is more than the sum of its parts; and nonlinear functioning, so that changes in these systems have effects disproportionate to their size, either larger or smaller.5 Such nonlinearity and synergy come with an exponentially increased range of possible interactions, including unplanned interactions, making an incomplete understanding of at least some processes also an aspect of complex systems.6

A more complex human society therefore "has more institutions, more subgroups and other parts, more social roles, greater specialization, and more networks between its parts. It also has more vertical and horizontal controls and a greater interdependence of parts," which may interact in unexpected ways.7 Also in keeping with the greater interdependence and interconnection between a larger number of components, more complex societies have a larger information load. This is the case for both communications and information processing.

As a larger information-processing burden indicates, complexity carries costs. Institutions, networks, and the like require energy for their sustenance; vertical and horizontal controls inhibit personal freedom; and so forth. Given that such costs make human beings generally "complexity averse," why does complexity tend to rise over time? W. Brian Arthur offers three explanations.8

The first is that competition and interdependence among entities create niches that new entities can fill in a process of "coevolutionary development." "Diversity begets diversity"—that is, a multiplicity and variety of elements allowing a greater variety of possible connections, which translates into an ever-more elaborate web, in technology or economics as well as ecosystems.9 The advent of the automobile, for instance, created niches for paved roads, motels, and traffic lights.10

The second explanation Arthur offers is that competitive environments encourage individual entities to improve their performance by adding new subsystems in what is known as "structural deepening." In other words, a system becomes more complex so that it can operate in a wider range of environments, sense and react to exceptional circumstances, service other systems so they operate better, or enhance reliability.11 Arthur offers the example of the jet engine, which in Frank Whittle's design in the 1930s contained only a single moving part. Today's jet engines, which can put out thirty to fifty times as much thrust, are far more complex, containing up to 22,000 parts.

The third explanation Arthur offers is that large systems of entities can incorporate simpler systems to boost performance in what is known as "software capture."12 (The entry of new member states into the European Union, for instance, results in the EU becoming a more complex entity.)

Nevertheless, even though complexity proves a successful strategy much or most of the time, this is not always the case. As Murray Gell-Mann has observed, a complex system "under the influence of selection pressures in the real world, [engages] in a search process... that is necessarily imperfect."13 A system does not always adapt appropriately, and even adaptations adequate to a particular challenge can have effects that are maladaptive.14

This article explores the three trends mentioned above. The first trend, analyzed in the section "Complexity and Slack," is one of diminishing returns from investments in complexity in general. This section establishes that advanced societies are becoming more complex. It then advances the economic evidence for diminishing returns to investments in complexity. Finally, it demonstrates that societal slack is shrinking as a result.

The second trend, analyzed in the section "Interconnection and Vulnerability," is the tendency in social and technological systems toward tighter coupling between their components, and more "scale-free" networks, with their attendant vulnerabilities, as societies grow more complex. The result is that more points exist to be attacked while the effect of any single attack is magnified, creating a need for stronger protection at more points.

The third trend, discussed in the section "Rising Security Costs," is the increasing cost of the means of security in the face of threats such as terrorism and weapons of mass destruction. In particular, this section examines security expenses rarely looked at in such studies, such as the cost of police and other emergency and law enforcement units, private security outlays, and "hidden" expenditures such as insurance rates, all of which are apparently headed upward.

In the conclusion this article brings the implications of all three trends together. The end result, it argues, is that as today's advanced societies grow more complex, they become less able to absorb shocks. At the same time they offer more points to attack, any one of which can have greater effects because of their heightened interconnectivity. The cost of defending against any of these shocks also rises. As a result, these societies are less and less secure. The conclusion also discusses some possible ways of coping with the problem, central to which is the judicious development and application of a range of new technologies.

Complexity and Slack
Tainter's focus is on a particular category of maladaptation, societal investment in complexity for diminishing returns, in which the returns to each unit of investment shrink (or even become negative). This shrinking in turn reduces societal slack over the long term, a condition Tainter has argued applies to today's advanced societies. There are three necessary prerequisites to substantiating this line of argument: (1) establishing that advanced societies are actually becoming more complex; (2) determining that such changes correspond to a pattern of diminishing returns; and (3) confirming that these diminishing returns are consuming slack.

Complexity and Advanced Societies
The measurement of complexity is a highly controversial matter, with some experts questioning even the feasibility of trying to do so.15 Nevertheless, a basis for measurement has been suggested by theorists, with some consensus existing that the quantity of information required to operate or represent a given structure is a guide to a system's level of complexity.16 Where social systems are concerned, a feudal, agrarian economy requires less information to operate than an advanced industrial economy, and is therefore less complex. The change in an entity's information load is one way of measuring whether that entity is becoming more complex, as would be the case with a shift from an agrarian economy to an industrial one.

The indicators report a dramatically rising information load as economies are "informatized."17 One expression of this is that spending on information and communications technology as a share of nonresidential, fixed investment rose from 15.2 percent to 31.4 percent in the United States between 1980 and 2000, with comparable increases in the European Union and Japan.18 There is also a rising volume of communication, travel, and trade—in short, interconnection and interactivity, which can be identified with increasing complexity. Virtually every indicator of the level of such traffic shows a long, upward trend, with per capita traffic volume doubling in North America between 1960 and 1990.19 The daily volume of person-to-person email messages, virtually zero twenty years ago, stood at 21 billion in 2002, a figure that the International Data Corporation estimates will rise to 35 billion by 2006.

Between 1950 and 2000, world trade generally expanded faster than gross domestic product (GDP), and more than three times as fast during the 1990s.20 Although it is often stated that international trade levels were higher in the pre-1914 period than they are currently, this observation tends to ignore the more complex character of the trade. One aspect of this greater complexity is increased "production sharing," that is, trade in components or parts rather than fully fabricated manufactures.21 The growing trade in services, exemplified by the outsourcing of work from accounting to software writing internationally, and the sheer volume of international financial flows have no previous analog. Lowered trade barriers, moreover, have commonly brought more rather than less legal and administrative infrastructure, as with entities such as the World Trade Organization. At the same time, contrary to the claims of those who believe that globalization is bringing about the death of the state, governments—a major source of complexity—have grown larger rather than smaller in this period.22

In short, complexity, and in particular the complexity created by more technology and economic integration, is increasing rapidly—but to what end? The second issue, namely whether these investments are producing diminishing returns, remains unanswered.

Complexity and Diminishing Returns
An obvious approach to determining the relationship between complexity and diminishing returns is to look at economic trends, given the overwhelmingly economic orientation of the increased complexity. There is widespread evidence that several economic sectors are showing diminishing returns to investment in complexity, including agriculture, energy production from fossil fuels, and heavy, bulk-processing manufacturing such as steelmaking.23 The same is true for certain elements of the service sector, such as education and health care.24 The costs of these two services are increasing at a markedly more rapid rate than economic growth, and without showing commensurate improvements in either their contribution to economic productivity or health standards.25

Several areas of high technology, such as aerodynamics, are also producing diminishing returns, requiring much greater investment for much more modest results.26 The most striking exception to this pattern, at least according to the conventional wisdom, is information technology, which is widely credited with driving the continuing increases reported in productivity.27 It should be remembered, however, that the "information" society remains underpinned by the older technologies of moving parts and fossil-fuel energy sources.28 It should also be remembered that the full costs of information technology are rarely taken into account, this being an area where capital (i.e., software and computers) depreciates very rapidly, eating more deeply into productivity increases than is generally noted by economists.29

Moving beyond single sectors of the economy, there is evidence that economies in the aggregate are also producing diminishing returns. The world economy grew by 5.3 percent a year in the 1960s, 3.9 percent in the 1970s, 3.2 percent in the 1980s, and only 2.3 percent in the 1990s.30 Moreover, alternative indicators suggest that even these sagging GDP growth figures paint too bright a picture. GDP does not take into account the ecological, social, or long-term economic costs of such activities (i.e., the maladaptations accompanying rising complexity, as Gell-Mann might put it). For that reason, some economists have turned to other indicators that would take into account such costs and that, incidentally, make explicit a connection between certain kinds of complexification and diminishing returns.

Net domestic product (NDP), for instance, is broadly comparable to GDP but takes into account the depreciation of capital. Given the rapid depreciation of information technology relative to more traditional kinds of capital, the gap between rates of U.S. GDP and NDP growth has increased from 0.1 percent in the 1960s and 1970s to 2 percent from the late 1990s on.31 Such a wide gap suggests that "real" economic growth in the advanced economies is lower than reported, with equipment depreciation consuming a significant portion of their gains. A 4 percent growth rate would be effectively cut to a "real" rate of 2 percent for the 1995-2000 period.
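The arithmetic behind this gap can be made concrete with a short sketch. The figures below are illustrative round numbers taken from the estimates cited above, not a formal national-accounts calculation:

```python
# Illustrative sketch of the GDP vs. NDP growth gap described above.
# Both rates are round numbers drawn from the article's cited estimates.
gdp_growth = 0.04        # reported headline growth, late 1990s (~4%)
depreciation_gap = 0.02  # GDP-NDP growth gap from rapid IT depreciation
ndp_growth = gdp_growth - depreciation_gap  # "real" growth: ~2%

# Compounded over a decade, the two-point gap is substantial:
decade_gdp = (1 + gdp_growth) ** 10   # ~1.48x
decade_ndp = (1 + ndp_growth) ** 10   # ~1.22x
print(f"headline (GDP) growth over 10 years: {decade_gdp - 1:.0%}")
print(f"'real' (NDP) growth over 10 years: {decade_ndp - 1:.0%}")
```

Compounded over the 1995-2000 period and beyond, in other words, the depreciation gap does not merely halve the annual figure; it roughly halves the cumulative gain as well.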

Another alternative is the Genuine Progress Indicator (GPI), which accounts for a still wider range of the side effects of economic activity that can undermine growth in the long run, including resource depletion, environmental damage, lopsided income distributions, unemployment, and debt.32 While U.S. per capita GDP grew 55 percent between 1973 and 1993, GPI per capita declined some 45 percent.33 The conclusion the GPI suggests is thus not a slowing increase in national wealth but its gradual erosion through negative growth.

Diminishing Returns and Reduced Slack
A long-term trend of diminishing returns to heightened complexity is strongly suggestive of shrinking slack but insufficient to prove it, the two trends being closely connected but not synonymous. Another approach is necessary to settle the third issue, whether societal slack is actually shrinking. The most obvious is an examination of the level of governmental activity relative to a state's income over an extended period, particularly taxation, spending, and debt. Government, after all, is uniquely positioned to command slack in the event of exogenous shocks, making its ability to do so a way of measuring how much slack exists in the system. A combination of increased taxation, spending, and debt indicates that governments are less and less able to live within their means, that public goods are becoming more expensive, and that a society is spending a higher share of its income on debt service rather than investment or consumption.34 With taxes and debt levels high, governments also have less leeway to raise taxes further or undertake new types of activity. In other words, because more of their resources are already committed elsewhere, less slack is available to be mobilized when it is needed.

As already stated, government has grown in recent decades, with taxes and spending rising throughout the developed world—as have debt burdens. Tax revenue rose from 31.5 percent to 38.4 percent of GDP between 1970 and 2002 among the Group of Seven advanced industrial nations.35 Spending rose at an even swifter rate, with the result that the proportion of gross debt to income almost doubled between 1977 and 2002 alone.36 Notably this rise continued despite post-Cold War reductions in military spending; the scaling back of welfare states, as with lower public spending on education; major reductions in public spending on infrastructure and research and development; and the savings that privatizing and decentralizing government services were supposed to generate.37

There are also reasons to think that states will continue moving in this direction of greater spending and indebtedness.38 The most widely discussed of these is that mandatory spending is increasing as a percentage of government expenditures, so that even with more money being levied, governments have less leeway in making spending decisions.39 This is partly because of the pressure that aging populations put on social safety nets, which appear to be growing less effective as a way of achieving their goals, particularly pension plans and health care systems.40 Also consistent with a trend toward older populations (but not solely due to it), savings rates have declined throughout the advanced world, and private debt has risen, meaning a shallower well from which governments can draw in times of need.41

Defense economics in recent years underscore this tightening of finances. Yet defenders of recent increases in U.S. defense spending have frequently argued that the United States managed to spend 37.5 percent of GDP on defense in 1945, and then 5 percent or more of its annual income in the Cold War.42 Implicitly, the United States could do the same today. What those making this argument commonly miss is that the federal government was much smaller at the outset of World War II, and also less indebted.43 Postwar economic growth was also sufficient to enable the United States to "grow out of" its debt, cutting it by three-quarters as a share of GDP.44 By contrast, the spike in U.S. national debt in the 1980s suggests that the decade's defense expenditures, in the range of 5-6 percent of GDP, were less supportable than the much higher levels of the 1950s.45 Current defense spending in the area of 4 percent of GDP in the early years of the twenty-first century produces budget deficits and increases in the debt burden comparable to those of the 1980s.

In short, the less-taxed, less indebted United States of the World War II era had considerable slack on which to draw, and a high growth rate enabled a rapid fiscal recovery. More recent trends, however, have been toward diminishing returns to investment in complexity, as evidenced by slowing growth. Moreover, these investments have consumed slack, as seen in rising debt levels and shrinking savings. Nor is this to be regarded as a temporary aberration, as indicators suggest this pattern will continue well into the early decades of the twenty-first century.

Interconnection and Vulnerability
In addition to leaving advanced societies less slack, greater complexity may mean more vulnerability because of the higher level of interconnection it entails. Certainly, the opposite is typically considered true: that is, a high level of interconnection between components can often contain or ameliorate disruptions, a point well established in the ecological literature. Where societies are concerned, the existence of a large number of interconnections empowers a state to respond to security threats by enabling it to better monitor its domain and move military and police forces as needed.46 A large number of interconnections also suggests that it is a fairly simple matter to "summon aid to the injured points, erect bypasses around them, and find substitutes for them" in the event of disruptions, as Martin Van Creveld put it in Technology and War.47

The reverse, however, can be true just as often. In practice, the same infrastructures that allow states to cope with threats also open avenues for those same threats they mean to guard against, be they an invading army or a terrorist cell. Consequently, the conduits must be guarded not only against exploitation by a hostile force but also against attacks on the conduits themselves—passenger aircraft, for instance, being favorite targets of terrorists. More important, the dense web of connections within a society can more widely propagate the effects of any attacks that do occur.

The question then becomes: what sort of interconnections give a society the ability to recover quickly and which do the opposite? Charles Perrow has made the case that it is a question of the tightness of coupling within a system. Tightly coupled systems, which are short on slack, are also intolerant of delay and contain invariant sequences not allowing for improvisation.48 For that reason there is no room for failure, which means that buffers and redundancies have to be built in, rather than being "fortuitously available." Consequently, tightly coupled systems are highly susceptible to "idiosyncratic threats," meaning that "if one can find a weakness through which safety factors can be overloaded or bypassed, he can cause imploding, catastrophic failure."49 Power grids demonstrate how this can happen. In November 1965 the shutdown of one of six lines carrying power into Ontario's electric grid from the Beck plant outside Toronto disabled much of the Canadian system.50 With Canadian demand suddenly cut off from the grid, Beck's output into New York doubled, surging through the U.S. grid, endangering plants all over the northeastern United States, and compelling utilities to take their systems off-line. The result was that in the space of four seconds, much of Canada and the northeastern United States were left in the dark. Although this blackout resulted from an accident rather than an attack, it does suggest possibilities for sabotage capable of producing similar effects.

Oil pipelines offer another example of a tightly coupled system, one entailing more localized but also longer-term disruption than is generally the case with power grid failures. The sabotage of the pipeline between Iraq and Turkey in August 2003 closed off the flow of oil from the fields in Kirkuk—40 percent of Iraq's total production.51 No other way exists to move the oil, and the shutdown of the damaged section rendered the rest of the pipeline inoperative at a cost of an estimated $7 million a day in lost oil sales and a small but noticeable rise in world oil prices. Moreover, even after the repair of such a pipeline, it is a matter of days before the process can function normally again. Meanwhile the infrastructure remains vulnerable to further disruption, the Kirkuk pipeline proving no exception to the rule. Consequently, authorities' estimates of how long the critical pipeline would remain out of service stretched from three weeks to three months (early November). In the face of later attacks, the authorities suggested that the pipeline could be out of service indefinitely, though operations did finally resume in March 2004.52

Certainly, electric grids and oil pipelines may be dismissed as relatively old-fashioned technology. Nevertheless, they will remain critical parts of modern infrastructures for a long time to come. There is also good reason to believe that the connections most characteristic of advanced societies are creating tighter coupling, and the greater vulnerability that goes along with them. One reason is that tight coupling is widely seen as the key to extracting greater efficiency and productivity from a system.

This pattern is reflected in many of the post-Fordist production approaches that rely on computers and other information technologies. The combination of accelerated production rates and smaller, more highly specialized workforces makes a given disruption (i.e., the loss of a man-hour of work) more costly. The number of man-hours required to produce a ton of steel, for instance, dropped from 10.5 in 1980 to 2.2 by 2000 largely because of automation and information technology.53 Older, "less efficient" ways of going about an activity are eliminated, and user autonomy is restricted through standardization of process and procedure, making sequences more rigid and leaving even less room for improvisation.54

Just-in-time (JIT) manufacturing, in which components are delivered just as they are needed, is an example of such a low-slack, tightly coupled process—and can be quickly disrupted when these are interfered with, as demonstrated by the terrorist attacks of September 11, 2001. The tightening of security following the attacks produced major bottlenecks in supply-chain management systems, with trucking coming to a near halt along the borders the United States shares with Canada and Mexico.55 According to a U.S. government report, "Transportation issues played havoc with order flows and drove up shipping costs"; the plants most dependent on JIT (such as those belonging to Ford, Honda, and Toyota) stopped working. At the macroeconomic level, the result was a 1 percent drop in U.S. industrial production in the month of September, largely resulting from the disruption of industry in the week before normality returned.56

Tighter coupling can also be regarded as a consequence of "coevolutionary development," in which niches create still other niches, as with the example of the automobile cited earlier. Without automobiles, there is little need for motels, traffic lights, and paved roads, so that economic sectors involved with these similarly suffer. Given the connection of such trends with rises in complexity, it stands to reason that more complex economies are more dependent on niche activities tightly interconnected with one another. The effects of this in the face of disruption were also demonstrated by the September 11 attacks.

The terrorist attacks on one niche of the aviation industry (air travel, which suffered $15 billion in losses in 2001 and 2002 because of the September 11 attacks and subsequent disruption) translated into losses for other sectors.57 Deliveries by Boeing, the world's largest supplier of commercial airliners, were down 28 percent in 2002 from the previous year, to 381 from 527.58 In anticipation of the reduced demand for its aircraft after the attacks, Boeing alone announced 30,000 layoffs.59 Along with the major airlines and Boeing, airports and other related concerns scaled back operations, eventually cutting about 400,000 jobs worldwide according to one estimate.60 These job losses in turn had "multiplier" effects in the areas where they were located, while the falloff in air travel and international travel generally damaged other sectors such as tourism, which many estimates indicate lost millions of jobs as a result of the attacks.61 Meanwhile insurance companies, faced with paying out indemnities in the tens of billions of dollars, increased premiums for some types of insurance more than 300 percent in a year's time, with an across-the-board doubling thought likely, a permanent and significant elevation of the cost of doing business.62 In short, even excluding elevated spending on security and longer-term effects such as higher insurance rates, the September 2001 attacks did hundreds of billions of dollars in economic damage in the United States and abroad.63

Buffers and redundancies can be built into tightly coupled systems, such as a backup pipeline, larger inventories, or greater reserves of capital, to keep companies going in lean times. These are deliberate investments rather than something fortuitously available, however, and the redundancies themselves, aside from being costly, can be a cause of additional breakdowns. This appears to have been the case with, for instance, nuclear reactors, where the addition of redundant components was "the main line of defense" against failure, but also "the main source of failures."64

Planners may also consciously choose not to invest in such buffers. Redundancy and slack are conventionally perceived as a drag on efficiency, and therefore a cost to be cut rather than as a good in social and technological systems.65 At the same time, the tendency is toward leaner operations and ever-shorter time horizons in business, as well as rising competitive pressures.66 The pressure is therefore significant to opt for efficiency over reserve capabilities that may be useful in the event of an emergency. One result is that a lack of unused capacity, or "headroom," in an electric grid's power lines frequently turns what might otherwise be an isolated system failure or local problem into a large-scale power failure.67 Such attitudes also tend to persist even after such incidents, as has been the case with just-in-time manufacturing, calls to rethink such practices post-September 11 proving short lived.68
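The role headroom plays in containing or amplifying failures can be illustrated with a toy cascade model. This is a deliberately simplified sketch, not a model of any actual grid: N identical lines each carry one unit of load, a line's capacity is set by its headroom margin, and when lines fail their load is shed evenly onto the survivors, any of which fails in turn if pushed past capacity.

```python
# Toy cascade model of grid "headroom" (an illustrative sketch only).
# N identical lines each carry 1.0 unit of load; capacity is
# (1 + headroom). Load shed by failed lines is spread evenly over
# the survivors; any survivor pushed past capacity fails next.

def cascade(n_lines=20, headroom=0.1, initial_failures=1):
    """Return how many lines have failed once the cascade settles."""
    capacity = 1.0 + headroom
    failed = set(range(initial_failures))
    while True:
        alive = [i for i in range(n_lines) if i not in failed]
        if not alive:
            return n_lines          # total blackout
        # survivors evenly absorb the load shed by the failed lines
        per_line = 1.0 + len(failed) / len(alive)
        newly_failed = {i for i in alive if per_line > capacity}
        if not newly_failed:
            return len(failed)      # cascade contained
        failed |= newly_failed

print(cascade(headroom=0.1, initial_failures=1))  # 1  (contained)
print(cascade(headroom=0.1, initial_failures=2))  # 20 (total failure)
print(cascade(headroom=0.5, initial_failures=2))  # 2  (contained)
```

With 10 percent headroom the loss of one line is absorbed, but the loss of two overloads every survivor at once and blacks out the whole system; with 50 percent headroom the same double failure stays contained. The discontinuity is the point: trimming "wasteful" reserve capacity changes nothing until the day it changes everything.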

It also stands to reason that with some aspects of a complex system's functioning typically ill understood, decisionmaking in this area is likely to be of poorer quality than in other cases. In some instances, buffers are inherently difficult to build, as with certain kinds of interconnection. Scale-free networks, such as the internet, tend to have critical hubs or nodes that are connected to a far larger number of elements than average (as opposed to random nets such as the highway system, where each element connects with only a few others).69

The internet, for instance, is held together by a relative handful of web pages of disproportionate value. The same is also the case with the air travel system in the United States, which relies on hubs where passengers catch connecting flights. One consequence of the "hub-and-spokes" system is the ease with which disruptions can translate into major, widespread delays. Recent research has shown that networks such as these may be highly vulnerable to coordinated attacks, the critical threshold for the propagation of a "contagion" through them being virtually zero. Additionally, attacking as few as 5 percent to 15 percent of the elements in such a network can bring down the entire network.70 Such possibilities have been hinted at by the internet's susceptibility to spam email and viruses, single examples of which, such as the "Love Bug," inflicted billions of dollars in damages—and in the case of the Love Bug, also remained pervasive a year after its supposed eradication. Even though the key nodes in a system can be better protected, they are not always easy to identify; in any event, the connectivity of systems rules out a "silver-bullet solution."71
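The cited vulnerability of scale-free networks to targeted attack can be illustrated with a small, self-contained simulation. This is a sketch under simplifying assumptions (a preferential-attachment growth rule with a 90/10 preferential/uniform mix, 1,000 nodes, 5 percent removal), not a reproduction of the cited studies:

```python
import random
from collections import defaultdict

def preferential_attachment(n, m=2, seed=42):
    """Grow a roughly scale-free graph: each new node links to m
    earlier nodes, favoring nodes that already have many links."""
    rng = random.Random(seed)
    edges, weighted = [], []   # weighted: node ids repeated by degree
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            if weighted and rng.random() < 0.9:
                targets.add(rng.choice(weighted))   # preferential
            else:
                targets.add(rng.randrange(new))     # uniform
        for t in targets:
            edges.append((new, t))
            weighted.extend((new, t))
    return edges

def giant_component(survivors, edges):
    """Size of the largest connected component among surviving nodes."""
    adj = defaultdict(list)
    for a, b in edges:
        if a in survivors and b in survivors:
            adj[a].append(b)
            adj[b].append(a)
    seen, best = set(), 0
    for start in survivors:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

n = 1000
edges = preferential_attachment(n)
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

k = n // 20  # remove 5% of the nodes
hubs = sorted(degree, key=degree.get, reverse=True)[:k]
random_nodes = random.Random(0).sample(range(n), k)

after_attack = giant_component(set(range(n)) - set(hubs), edges)
after_random = giant_component(set(range(n)) - set(random_nodes), edges)
print(f"giant component after hub attack: {after_attack}")
print(f"giant component after random failures: {after_random}")
```

With these settings, removing the best-connected 5 percent of nodes shrinks the surviving giant component considerably more than losing the same number of nodes at random, the asymmetry the text describes: scale-free networks tolerate random failure well but are disproportionately sensitive to attacks aimed at their hubs.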

Tighter coupling between the components of today's complex societies, and in particular the vulnerability of scale-free networks to coordinated attack, mean that an attack on any one point can have wider effects, reducing the "tolerance for breakdowns and errors" anywhere along the line.72 The tendency is also toward more components, as niches for new ones are created and new subsystems added, so that more points exist to be attacked. A larger effort is therefore required to protect a larger number of targets, a rough analog with defending a longer front, making for an abrupt increase in a society's security burden without a comparable expansion in its resources. This has already been the case: the relative rise in U.S. security expenditures since September 11 has far outstripped economic growth over the same period, in a process that may be only in its early phases and which, given the nature of the conflict, may have no clear end point.

This is very much what is suggested by the prospect of "terrorist-proofing" modern society, for instance by improving ventilation systems and providing better safeguards for energy distribution systems, as called for in a June 2002 report of the National Research Council.73 This follows a historical pattern: an attack on a single target leads to the indefinite fortification (terrorist-proofing) of entire classes of targets, as with the heightened precautions surrounding the boarding of airliners following September 11. Indeed, in this case it also led to the fortification of facilities the terrorists had not targeted, such as the broad effort to upgrade harbor security. The resulting effort to deploy U.S. law enforcement personnel in foreign ports to clear ships bound for the United States raises another key point: a characteristic of complex systems is an openness blurring the boundaries, the absence rather than presence of connection becoming what is difficult to establish. Where security is concerned, this fosters a propensity to keep "push[ing] the border outwards."74 In the case of the war on terror, any state that is seen to harbor terrorists is regarded as a threat to be met with force as necessary, effectively placing the whole planet within that border and pushing the perimeter out to its theoretical limit.

Even so, the cost of even the most ambitious counterterror effort may seem paltry as compared with arms races between great powers. It is well worth remembering, however, that empires facing no great power threats have been overwhelmed by unconventional threats against which they attempted to fortify themselves. Following the barbarian attacks and the spike in banditry of the third century, the Roman Empire shifted from preclusively defending its borders to creating broad areas of military control, and ultimately civilians were safe only behind fortifications.75 The attendant loss of key linkages, the sacrifice of economic productivity to security accompanying such circumstances, and the massive cost of fortifying the empire (and other, related controls) fed into a pattern that eventually drove the Roman Empire to collapse.


Submarines and Space Power II

By Nader Elhefnawy

Reproduced with permission from the January 2004 issue of The Submarine Review, a quarterly publication of the Naval Submarine League, P.O. Box 1146, Annandale, VA, 22003.

With every major conflict fought in recent years, American forces have demonstrated new capabilities, and much of that has been related to the development of space power, particularly in areas like reconnaissance, navigation and communication. There is one realm, however, where these enhanced capabilities have comparatively little effect, and that is beneath the sea.1 Submarines are broadly immune to space-based surveillance, at least in the absence of truly effective non-acoustic sensors. This gives them the potential to slip past aerospace surveillance in performing missions like attacking shipping with torpedoes, laying mines, gathering intelligence, launching cruise missiles and landing special forces teams.

In other words, they would afford a power that has lost aerospace and surface superiority to an opponent the means to continue fighting. However, it is conceivable that their stealth may allow them to play an even more active role in conflicts increasingly geared toward space activity in the future. The move generally is toward more versatile submarines, capable of carrying a broad assortment of payloads, and also toward their tighter integration with other fires in military operations.2 The conversion of four Trident missile submarines into platforms dedicated to launching cruise missiles and landing special forces teams is a major step in this direction. It is also possible that submarines could play a more active role in space warfare than has generally been thought possible to date.

Exercising Space Control "Earthside"
Of course, space conflict remains highly hypothetical. Nonetheless, the American military is moving toward a doctrine of space control.3 In the event of a conflict with a high-tech opponent, shutting down its space launch capabilities may therefore be a primary task for U.S. military forces. While this conjures up images of killer satellites, in the shorter term space is principally significant as a conduit of information, making space forces a tool of "force enhancement" rather than "force application," as Barry Watts recently put it.4 Moreover, the reality is that while satellites may be built to function in space, they are built on, launched from, and controlled from Earth. This has led some observers to suggest that attacks on space systems may be a less efficient way of pursuing space control than targeting the information flows from the space systems to the air, sea and land units using them, perhaps through attacks on the "Earth-side" infrastructures facilitating those flows.

Accordingly, the ability of submarines to deploy cruise missiles or special forces teams against land targets like ground stations would let them play a significant role in weakening an opponent's space capabilities. Particularly given the preference for coastal facilities for space launches, and the capability of submarines to approach a hostile coastline undetected and loiter there for long periods, they could also target space launch sites, destroying space vehicles (or for that matter, ballistic missiles) in boost phase.

Submarines can also be discreetly deployed to "space choke points," points on the opposite side of the planet that satellites being launched must pass over on the way to orbit. For instance, one writer has observed that a single naval vessel in the South Pacific could have shut down the Soviet space program in a conflict, provided it mounted the appropriate missiles. It has since been suggested that the idea's usefulness has declined with the growth of the commercial space industry and alternative types of floating or aerial launch platforms, widening the options of the countries using them. However, political and security concerns might narrow those options where the launch of explicitly military systems by a belligerent state in wartime is concerned, so that the idea cannot be totally discounted.5

Submarines as Space Launch Platforms
Of course, one possible way of making a launch capability more survivable in the face of an increased threat from submarines or other systems may be to rely on relatively compact, mobile launchers, which can now include floating platforms such as the Sea Launch system. Such a system has obvious advantages. Seventy percent of the world's surface is water, greatly widening the range of possible launch points, and in the event of a conflict, the amount of territory that an opponent would have to cover, a key issue when such launches are threatened by hypersonic air-to-surface missiles. It also simplifies the problem of getting a satellite launcher into an equatorial position, since access to a suitable launch site on land is not required, something the Sea Launch system, a joint American-Ukrainian-Russian-Norwegian venture, is expressly designed to do.6 First demonstrating its system in 1999, the company has launched several satellites since October of that year.

Nevertheless, surface-going ships would be relatively easy for a sophisticated military to track, which would not be the case with submarines. Systems based on submarines can hide from aerospace power and enjoy lengthy loiter times even in hostile waters. They would also expose their location only at the moment they go into action, making them highly suited to "shoot-and-scoot" tactics. Indeed, even that may cease to be necessary, given the prospects for systems like supercavitating ballistic missiles (or as the case may be, space rockets).7 While this idea may seem radical, submarines have in actuality long been taken for granted in this role, as launchers of long-range ballistic missiles capable of putting a satellite in space. This potential became a reality in 1998, when the Technical University of Berlin successfully launched a satellite from a Russian Delta IV-class submarine, using a converted submarine-launched ballistic missile.

The question, of course, arises as to what use such capabilities might be put. The most obvious is the launch of anti-satellite weapons, and this possibility also has not entirely escaped notice, even if it has received relatively little discussion in recent years.8 In the 1970s and early 1980s, the Navy explored the use of a sub-launched Poseidon ballistic missile to put an anti-satellite missile into orbit.9 Nevertheless, such an approach poses some significant problems. A space launch from a submarine may be easily taken for a ballistic missile launch and the opening shot in a nuclear attack, so that such an approach carries with it some risk of escalating a conflict.

Additionally, while submarines have widely proliferated, the vast majority of these are small, conventionally-powered boats like the German Type 209 or the Russian Kilo, suited principally to attack operations in coastal waters. Such submarines are poorly suited for space launch operations, in contrast with the nuclear-powered or ballistic missile submarines presently operated by only a handful of nations, namely the permanent members of the United Nations Security Council. The list is not expected to get much longer in the near future, though India has announced interest in such systems. Admittedly, this leaves a few states with systems of this kind, and certainly more could acquire them if they proved sufficiently advantageous. Besides, the miniaturization of satellites and launch vehicles, and a willingness to deploy smaller loads of them, would let smaller subs perform this function; after all, not every ballistic missile submarine must be an Ohio or a Typhoon.

Submarines and Directed-Energy Weapons
Moreover, the capacity of submarines to attack space systems already in orbit is not limited to their space-launch capability. While missiles are the most obvious means submarines have of performing these missions, the missions could also be performed by a sub mounting a directed-energy weapon comparable to the Mid-Infrared Advanced Chemical Laser (MIRACL). Aside from the economy such systems may afford in destroying thin-skinned launch vehicles, the MIRACL possesses a demonstrated anti-satellite capability.

Laser weapons, certainly, are not without their problems. Smoke, bad weather, fog and dust can significantly reduce their range, which not only means that their effectiveness will frequently be reduced, but suggests some obvious countermeasures against laser weapons. It also means that submarines would have to be surfaced to get much use from their weapons, whereas they can fire their missiles while submerged. Nevertheless, such exposure would be much briefer than is the case for a surface ship, and work could be done to further reduce the comparatively small signature of a surfaced submarine.

The size and weight of today's directed-energy systems is also a problem, the MIRACL system weighing around two hundred tons. Reductions in the size of laser weapons, however, are widely anticipated, and there are presently plans to pack the MIRACL's power into something a tenth that size, a twenty-ton system that could be airlifted in two cargo containers inside a C-130 transport. There is also a great deal of optimism about solid-state laser technology, whose proponents foresee it creating an effective battlefield laser small enough to mount on a fighter aircraft or even a jeep, and argue that a revolution in this area is imminent.10 The move toward electric drive in naval vessels, including submarines, makes them well-suited to mounting solid-state lasers, which could derive their power from such a drive rather than cumbersome stocks of chemicals. A real breakthrough in this area would enable laser weapons to be built into smaller submarines, widening the number of potential users. Additionally, unlike the case with missile systems, gravity would not be a factor, so that the users of Earth-based laser systems need not worry about being on the "wrong end of the gravity well." On the contrary, Earth-based systems are more physically accessible to their users than their counterparts in space where supply, maintenance and communications are concerned, and their design is less constrained by factors like size and weight, giving them a possible edge.11

Consequently, while it may be difficult to imagine any opponent the United States is likely to face turning its submarines into space launchers (save perhaps a large peer competitor), it is much easier to picture a future adversary mounting a compact laser weapon, at least a couple of decades down the road. So armed, even a relatively small number of such submarines, a force potentially within the reach of a 2020s equivalent of a rogue state, could try to wreak havoc by fighting a submarine-based guerrilla war against American satellite networks.

Arguably, if equipped with the requisite missiles (or, perhaps even more promising, directed-energy weapons), submarines can perform the anti-space role. Aside from affecting how the United States or other nations may use their submarines in the future, this underscores a larger issue, namely the likelihood of low-cost counterspace approaches and systems, here exemplified by a sub outfitted for the anti-satellite mission.

Such a possibility raises two important points. First of all, traditional land, sea and air forces, including submarine forces, should not be neglected in the pursuit of space-based systems-or the capabilities of other states in these areas overlooked. Second, the United States, while likely to win any conceivable confrontation in space, is not invulnerable in this area. Critical military space systems may prove quite vulnerable down the road even to minor opponents, should armed satellites and attacks on space objects become a routine, accepted practice in warfare (to say nothing of the civil and commercial space systems of increasing import to the world economy). Consequently, the most effective way to use America's lead in space may be as part of a broader strategy to at least slow down this more fundamental kind of militarization. While a subtler tack than space control or space dominance, it may provide the greater level of security in the long run.

1 Nader Elhefnawy, "Submarines and Space Power," Submarine Review, October 2001, pp. 71-76.
2 Floyd D. Kennedy, Jr., "Transforming the Submarine Force: Integrating Undersea Platforms into the Global Joint Strike Force," Air and Space Power Journal 16.3, (Fall 2002).
3 Karl P. Mueller, "Totem and Taboo: Depolarizing the Space Weaponization Debate" (8 May 2002), pp. 15-16. This article also appeared in Astropolitics, Spring 2003.
4 Barry Watts, The Military Use of Space: A Diagnostic Assessment (Washington D.C.: Center for Strategic and Budgetary Assessments, 2001).
5 Lieutenant Commander J. Todd Black, "Commercial Satellites: Future Threats or Allies?" Naval War College Review 52.4 (Fall 1999).
6 See the Sea Launch company web site.
7 Steven Ashley, "Warp Drive Underwater," Scientific American, April 2001.
8 More ambitiously, submarines may be used for "rapid space force reconstitution," replacing satellites destroyed in the course of fighting, deployed submarines perhaps carrying a cargo of spare satellites and the launch vehicles for them, though this would be a much longer-term concern.
9 Gary Federici, From the Sea to the Stars (June 1997). The Defense Advanced Research Projects Agency also developed a manned anti-satellite space cruiser, also to be launched by a Poseidon. For descriptions of this and other early ASAT concepts, see Norman Friedman, Seapower and Space (Annapolis, MD: Naval Institute Press, 2000).
10 Ian Hoffman, "Warfare at the Speed of Light," Oakland Tribune, October 19, 2003.
11 Nader Elhefnawy, "Four Myths About Space Power," Parameters 33.1 (Spring 2003), p. 126.

Thursday, October 23, 2008

Beyond Columbia: Is There A Future for Humanity in Space?

By Nader Elhefnawy
Originally published in THE HUMANIST, September/October 2003, Vol. 63 No. 5.

On the first day of February 2003, the space shuttle Columbia started its return to Earth at the end of a sixteen-day mission. While the mission had apparently performed "without a hitch," the shuttle suffered a series of failures in the last minutes of reentry, followed by a cutoff of communications with Mission Control around 10:00 AM (CST). Forty miles above Texas, the shuttle broke apart, killing its crew of seven astronauts almost seventeen years to the day after the Challenger disaster, an incident high in the minds of those looking on.

In the wake of the Columbia crash, observers asked whether human space flight was worth the risk. While such questions were unsurprising in light of the tragedy, the weakness of the voices replying in the affirmative was. It wasn't only the experience of watching the tragedy unfold on television that left those voices so shaken. The truth was that in the eyes of pundits and the public alike, human space travel had become a luxury, a stunt, a boondoggle rather than a legitimate field of endeavor, let alone a gateway to the future.

Of course, technologies are typically oversold at the beginning of their careers, commonly leading to much disenchantment before they go on to redeem their promise. Space travel, like so many of the technologies that preceded it, will also go on to redeem itself. There will be a future in space--one for human beings rather than just their machines. However, few seem to agree at present. The reasons why the appeal of space has palled are well worth exploring in any serious consideration of the future of space activity, if only to avoid repeating past errors.

One reason may be the all-too-fashionable perception that human beings must endlessly adapt to technology--as though technology is some force of nature, external and alien to them and entirely beyond their comprehension or control. The widespread view today is that if something can be done, then it must be done, leaving one to either embrace the tidal wave of technological change outright or brood about how things have become out of control. That is to say, there is no middle ground--no hope that choices can be made and that human beings need not be the tools of their tools.

Such a position is fundamentally disingenuous and, over the longer haul, dangerous. The belief in such an inability, a condition of permanent and irremediable future shock, is inherently at odds with the extreme long-term perspective in which progress in space must be considered, given the environment's extraordinary technical demands and the sheer scale of the task.

The conquest of space could never have proceeded so quickly as some of its enthusiasts once imagined, but the approach to space has also not been as sensible as it should have been. For all its possibilities, it must be admitted that the space program did begin as a stunt, and the high points in the history of space flight look like little more than a succession of stunts. Sputnik's launch on the fortieth anniversary of the Russian Revolution was an attention-grabbing commemoration intended to give the Soviet Union a propaganda victory by technologically upstaging the United States. It succeeded only too well, and a stunned United States replied with a stunt of its own, firing Explorer-1 into orbit and getting the space race going in earnest.

The Soviet Union sent Yuri Gagarin into space, but American astronauts Alan Shepard, Virgil "Gus" Grissom, and John Glenn followed him shortly thereafter. President John F. Kennedy announced that by the end of the decade the United States would put a man on the moon, and indeed the U.S. did so, but after Neil Armstrong set foot on the moon--thereby beating the Soviets to the punch--there hardly seemed to be any point in continuing such efforts. No manned craft returned to the moon after Apollo 17 in 1972. The program having come to an end, and the moon apparently unoccupied, the Soviets--who had just launched the world's first space station, Salyut-1--seemed on the verge of making a real effort to establish a lunar colony by the end of the decade.

However, the United States' development of a space shuttle placed political pressure on the Soviet program to produce the equivalent, forcing it to abandon the lunar mission. The International Space Station (ISS), today built by the United States with Russian help, had initially been presented as a free world counter to the Soviet lead in space stations. As Karl A. Leib wrote in his essay, "Entering the Space Station Era" (which appeared in the 2002 collection Space Policy in the Twenty-First Century), the ISS "even had a good Cold War name, Freedom."

The United States and Soviet Union were by no means the world's sole spacefaring states, but the efforts of other countries were comparatively minor and their manned efforts nonexistent. China, which could easily be the next country to put a human into space, promises to be no different; its plan to put a human in space by 2005 and travel to Mars simply screams of another stunt. Overall, the tendency has been to develop highly visible projects that cost too much and perform too little with no follow-up. In the case of the Apollo missions, one of the last complete sets of plans for the Saturn V rocket was said to have been given to a Boy Scout recycling drive. And while this may be just an urban legend, the fact that it circulates at all is telling. The space shuttle, far from taking humans to the moon again, is restricted to an orbit of no more than a few hundred miles above the Earth. In the course of a mere 113 missions, two orbiters have been lost, which is a disastrous performance overall and far short of its initial promises. The cost of the ISS has risen many-fold over the initial estimate, compelling continual cutbacks in the station's mission and capabilities, while the aforementioned travails of the shuttle program jeopardize its future, the Columbia disaster a grave setback for this program as well.

However, it must be remembered that the stunts produced their share of useful spin-offs and held out the promise of something more to be accomplished, and space programs have had some meaningful and lasting successes. The Hubble Space Telescope captured incredible images of the heavens, shed light on the birth of the universe, and helped discover the hundred-odd planets already known to lie beyond our solar system.

In addition, space has become an area of practical activity. Certainly, visions of asteroid mining and space industry haven't come to pass, and even space tourism, excepting the occasional Dennis Tito, hasn't been a serious proposition. Yet the communications, navigation, and Earth-monitoring satellites girdling the globe are a major part of the modern world's infrastructure, and the commercial space sector is large and growing. While the sector has frequently been oversold, the Satellite Industry Association nonetheless reported over $86 billion in satellite industry revenues for 2002 alone.

However, this doesn't change the fact that the original promise has failed to materialize. The space activity that thrives today is an annex of the information superhighway. Today's pertinent space efforts look in at the Earth rather than out to the rest of the solar system, let alone beyond it. At some point, the Space Age simply gave way to the Information Age, and it has seemed in recent years that the final frontier wasn't outer space but the virtual terrain of the Internet--a change visible even in fiction. In the pilot to the Star Trek series, the heroes keep the ship's log on a yellow note pad as they careen about the universe at warp speed. Even James Blish's novel, They Shall Have Stars, which is astonishingly prescient in its depiction of a future where the U.S. space program has withered, describes humankind's establishment of posts all over the solar system by the early twenty-first century. Meanwhile, artificial intelligence is a thousand years away.

Today, by contrast, a new car is likely to contain seventy computers or more, and information technology experts like Ray Kurzweil claim that computers with human intelligence will be ubiquitous in a generation's time. These advances hardly raise an eyebrow, while talk of a piloted mission to Mars in the next twenty years may as well be science fiction, for better or worse. Tom Wolfe's The Right Stuff is often credited with capturing the spirit of the space program in the 1960s. However, Thomas Pynchon's 1973 novel Gravity's Rainbow, with its demonization of all things rocketlike, seems to represent better what has happened since. Taking their cue from him, the cyberpunk novelists of the 1980s may not have had anything like the same level of antipathy, but it was cyberspace, not outer space, they looked to for the most part.

The path to the future was once lit by the tail fire of a rocket streaking to heaven and beyond, but the rocket-ship has been forsaken for the microchip, so much so that future generations may well laugh at the degree to which the displacement occurred--the promises pinned on networks of computers, and the expectations not of what space exploration might help to accomplish in a century or a millennium but of what their mere existence would bring about now. Yet the abatement of the hype can be seen as an opportunity to renew the debate--and to some extent, this is already happening. As a conduit of information, space has become a cornerstone of U.S. military superiority, and defense ministries around the world are in serious discussion about going beyond this to the weaponization of space. Space-based weapons can play a role in ballistic missile defense schemes, assist militaries in pursuing "information dominance" by controlling the enemy's ability to use satellites for gathering communications and intelligence, and even transport troops or attack targets on Earth from orbit. Planetary defense against asteroids, meteors, and comets is gaining the recognition due to it.

The requisite technologies aren't so exotic or farfetched as they may seem, a fact obscured by the extreme difficulty of the ballistic missile defense mission offered today as the principal justification for the weaponization of space. As science fiction author Larry Niven observed (admittedly, somewhat hyperbolically), anything worth doing in space could be a weapon. Designs for armed satellites, spacecraft, and space stations date back to the earliest years of the Space Age. Both the United States and Soviet Union developed a range of antisatellite missiles during the Cold War, and this may be modest next to what could have happened if the two countries had not agreed to hold back that side of the space race.

Consequently, the militarization of space is only in its beginnings and its long-term course is still very much in question given the real technical and political problems it raises. Many seem to accept that it will progress along these lines, even to view it as inevitable, despite its high costs, likely destabilization of international relations, and the probability that it will offer fewer returns than its proponents suggest; but the point is that the matter is being discussed, which is more than can be said for civilian ventures in space. Indeed, it is now a cause for worry that security is becoming the sole consideration of national policies on space, the sky viewed as little more than military high ground. This tendency may be more powerful in the wake of the Columbia disaster. There are calls to scale back or dismantle programs and expenditures on civilian space activities, squandering opportunities too long ignored, as investments in military space efforts expand apace--ironically, at the expense of the security presumably being sought. Arguably, investing more heavily in civilian space activity will be the key to making space systems more secure, through the creation of a more robust, extensive, and hopefully survivable infrastructure in orbit, and less directly, the more balanced policies that would diminish the level of threat.

Of course, this is all easier said than done. Part of the problem would seem to be that modern societies may have already reached and passed a point where, however worthy a project may be, they are so taxed by their own ever less-rewarding complexity that they are hard-pressed to bear any additional costs, as Joseph Tainter suggests in The Collapse of Complex Societies. With every step more costly, and individual and institutional patience diminished by the compression of planning horizons, it becomes infinitely easier to find reasons not to do something than the reverse--to pursue what the sociologist Zaki Laidi termed avoidance strategies in his book, A World without Meaning. Nondefense-oriented space ventures are particularly vulnerable to such avoidance strategies, given their big-ticket cost, lack of a strong supporting lobby, and principally long-term justification. After all, many of the activities imagined by futurists are a long way from being economically viable and providing the kinds of rewards that may make them appealing to governments or businesses that are seriously attempting to solve a problem or turn a profit. The natural resource shortages most severely affecting the world (like fossil fuels, water, farmland) won't be alleviated by the ores of the asteroid belts. The baroque quality of many proposed schemes--like building massive solar collectors in orbit that would beam the energy generated back to Earth in the form of microwaves--has only meant setbacks. The advantages of low-gravity environments for delicate separation processes or the growth of better crystals haven't sufficed to attract industry into orbit.

The solutions to Earth's problems will need to be found on Earth, but this isn't to say that a significant expansion in the quality and quantity of civilian ventures in space will never be viable within our lifetimes. At the very least, reducing the cost of space flight is a logical course of action. The construction of an affordable and reliable space plane would be a good first step. And given the rising demand for satellite launches to support tasks ranging from environmental research to guiding lost drivers to carrying an expanding volume of digital communications, this would be an eminently practical, even profitable, activity. Space elevators, or skyhooks, present another option, one that could also help ameliorate the growing problem of space debris, especially if skyhooks can be used to replace satellites in some functions, such as communications or Earth observation. An elevator capable of lifting payloads to geosynchronous orbit may also be surprisingly affordable. One estimate suggests a $10 billion price tag, well within the limits of what the private sector could afford, for a system that could cut per-pound space launch costs by a factor of a hundred or more.
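The "factor of a hundred" claim lends itself to a back-of-envelope break-even calculation. The sketch below is an illustration, not a figure from the estimate cited: the $10,000-per-pound conventional launch cost is an assumed round number of my choosing, set against an elevator cost one hundred times lower and the $10 billion price tag from the text.

```python
# Back-of-envelope break-even for a space elevator, under assumed
# round numbers (the conventional per-pound cost is illustrative).
CONVENTIONAL_PER_LB = 10_000.0                 # assumed $/lb by rocket
ELEVATOR_PER_LB = CONVENTIONAL_PER_LB / 100    # the factor-of-100 cut
BUILD_COST = 10e9                              # $10 billion price tag

savings_per_lb = CONVENTIONAL_PER_LB - ELEVATOR_PER_LB
breakeven_lbs = BUILD_COST / savings_per_lb
print(f"break-even payload: about {breakeven_lbs:,.0f} lbs "
      f"({breakeven_lbs / 2_000:,.0f} short tons)")
```

Under these assumptions the elevator pays for itself after lifting on the order of a million pounds, roughly five hundred tons, a total that a busy satellite launch market could plausibly accumulate within years rather than decades.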

With launch costs down, scientific research could widen. It may also be possible to construct platforms capable of servicing satellites in orbit or assembling space systems shipped up in parts, while developing the core of an industrial base around which more ambitious efforts could grow. It may become practical, even advantageous, to manufacture some products in orbital conditions, then ship the products back down to Earth (especially if nanotechnology comes into its own). And over the long haul, if the human species is to grow and expand, it can't ignore the energy, resources and space of the rest of the solar system; and, indeed, the rest of the universe is too vast to ignore, a point Konstantin Tsiolkovsky ably made a century ago. Settling people on other worlds to alleviate population pressures on Earth, scattering the human seed as an insurance policy against disaster or simply seeing the greater universe may not be a sound proposition today, but the economics of such an endeavor could fundamentally change.

Of course, predicting major technological change is always chancy, and technological developments in space have in particular defied the efforts of futurists. It is reasonable to think that launch costs could be cut by 2020, opening the door to scientific and commercial activity in orbit, but it is also reasonable to think that, despite the potential commercial rewards of such a course, the effort will simply not be made. Whatever course is actually pursued, however, simple survival will require that humanity stop framing the relevant questions in terms of what they can do for technology rather than what technology can do for them, while reaffirming the primacy of reasonable human ends over the outer limits of technological possibility.

The Rise and Fall of Great Space Powers

By Nader Elhefnawy
Originally published in the SPACE REVIEW, August 27, 2007

In Warren Ellis's graphic novel Ministry of Space, a ruthless Royal Air Force officer uses captured German rocket scientists and Holocaust gold to launch a British space program at the end of World War II. Britain puts the first man in space in 1949, and not long after, has solar power stations in orbit, moon bases and Martian colonies, salvaging Britain's position as a great power, and turning the British empire into the world's first space empire.

Ministry, a Sidewise Award winner, is an alternate history rather than a counterfactual, driven, as Ellis explains, more by pre-war fantasy than the actual possibilities of Britain's post-war situation. (At the very least, could any amount of Nazi loot compensate for Britain's wartime exhaustion, or its industrial inferiority relative to the U.S. and the Soviet Union?) Nonetheless, Ellis's story is very well thought out at many points, particularly in Britain's quickly proceeding from "first ever" stunts to turning a macroeconomic profit on space sufficient to affect the global balance of power. Britain can let go of the Suez Canal through which its oil moves when Nasser nationalizes it in 1956 precisely because it is building solar power stations in orbit that make oil politics irrelevant to its national well-being.

This is precisely what no space power has done to date, and until that changes, space remains an adjunct to activities on Earth, space systems limited to servicing terrestrial economies by collecting and relaying information from one point on Earth to another--and space programs entirely subject to the ups and downs of those economies. The Soviet Union, the only space power that may be said to have "fallen" to date, did so not because of frustrations with its space program, but because its Earth-based economy stagnated and unraveled in the 1970s and '80s.

The point may seem so basic as to not need stating, but state it one must because that moment of transition will be an epoch-making change in the development of space, one that had been widely expected to have arrived by the early twenty-first century in certain circles. As Robert Heinlein put it in his essay "Where To?" by "2000 A.D we could have O'Neill colonies, self-supporting and exporting power to Earth," as well as "a permanent base on Luna." Indeed, he was sure that even if the United States failed to capitalize on the "endless wealth . . . out there for the taking," and its potential to solve "not one but all of our crisis problems"--employment, inflation, pollution, population growth, energy, shortages of nonrenewable resources--other countries would surely do so. If there was to be no American moon colony, then Germany would establish one, or Japan, or possibly the Soviets or the Chinese.

This obviously did not come to pass, and Heinlein's argument that a moon colony twenty years after 1980 is no more implausible than a moon landing was twenty years after 1950's Destination Moon can not but arouse some skepticism, even as a broader audience begins to take a second glance at these ideas. One may not hear the term "O'Neill cylinder" very often, but there has certainly been a revival of interest in space as a source of energy, whether through solar energy satellites, or the mining of the moon for helium-3. (See my article from last January, "The limits to growth and the turn to the heavens.")

This upsurge of interest may represent the anxieties of the moment more than any real move in this direction, of course, and as a practical matter can do little to alleviate the causes of those anxieties. The plans are too long range to do anything about the price of oil this year or the next, or if the peak oil theorists are correct, the big crunch due in the next decade. Helium-3 may not be a practical energy source for decades, if ever, and in either case, a great deal of work likely remains to be done both lowering the cost of space launch, and reducing the size and weight of the payloads needed to get a space-based infrastructure up and running. (See "Diversifying Our Planetary Portfolio.") Still, if these or other such plans were realized they would mark the end of the time when space was just a critical node in terrestrial information flows, and the beginning of one in which space itself provides substantial, tangible, essential resources.

It may also mark the start of our groping our way back to those grander earlier visions, with all their implications. Asteroid mining on a scale sufficient to have macroeconomic significance, or transfers of Earth's population into space colonies large enough to matter in demographic terms, would mean the return of extensive development to the importance it once enjoyed, resetting the rules of today's efficiency-obsessed economic game. If carried far enough, it could create the postmodern equivalents of the maritime powers of the "Columbian era." Just as seafaring nations like Portugal or the Netherlands became the seats of much vaster, far-flung colonial empires, today's leading industrial countries (or larger groupings like the European Union) could become the centers of space empires extending from near-orbit to the asteroid belt and perhaps beyond, as Ellis's alternate Britain did. Space power would cease to be a symbol of or prop to national power, as it is today, and become instead its foundation. (Indeed, such thinking may well underlie the current round of moon missions planned by the United States, China and virtually every other country that can hope to pull one off.)

Of course, this sort of space-age mercantilism has never seemed to be the only possible future, and it may well be that the notion of "great space powers" will prove hollow long before that point. The idea that space should be used by all for the benefit of all is an old one, going back at least to Nikolai Fedorov, and well established in the law regarding space, particularly the 1967 Outer Space Treaty. While its arms control provisions may be its most frequently discussed aspect as of late, Article I holds that the
use of outer space, including the moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries . . . and shall be the province of all mankind.
The treaty very specifically holds that these bodies will be open to "use by all States without discrimination of any kind, on a basis of equality" with all enjoying "free access to all areas of celestial bodies." Article II of the treaty underlines the point by asserting that "Outer space, including the moon and other celestial bodies, is not subject to national appropriation" by any means, not only including formal claims of sovereignty, but use or occupation as well.

Such regimes can be reversed, with many observers terming the 1982 United Nations Convention on the Law of the Sea just such a reversal, "territorializing" much of the world's oceans by extending territorial waters, as well as through zones of lesser but significant control, like Exclusive Economic Zones. The world in 2007 seems to be moving in a very different direction than it had appeared to be in 1967, and with a change in the perceived opportunities, as well as the international balance of power, states might decide their interests would be better served by another arrangement. (Indeed, it would not be the first time the Outer Space Treaty was challenged, eight equatorial countries attempting to do so in 1976 with the Bogota Declaration, in which they asserted that the portion of geosynchronous orbit over their national territories belonged to them.)

Nonetheless, there is reason to think governments will go on preferring the current one. In a future where the world's economy depends on an energy source mined in space, as seems possible to some, the moon could well become the next Persian Gulf, and sharing control may be the only way to avoid a potentially disastrous conflict--which was the rationale behind agreements like the Outer Space Treaty in the first place. (It may be hoped that the solar system will allow plenty of room for everyone to expand, but mercantilism and great power conflict tend to go hand in hand.) Indeed, as the derision with which much of the international community reacted to Russia's planting of its flag at the North Pole earlier this month indicates, the day when countries could claim territory in this manner may be far behind us. Meanwhile, since it remains to be seen just how the broad positions of the Outer Space Treaty will be translated into a framework of practical rules governing the actual use of space in these ways, every possibility remains that even if countries can not claim space, those regulations may afford ample room for the pursuit of national interest.

The legalities and their associated politics, however, are but one constraint. Whatever the economics of space development prove to be in the future, fiscal reality today dictates that what was originally to be America's space station Freedom is now the International Space Station, reflecting its funding on an international, even global basis. (The station, originally intended as a response to the Soviet space program, is not only a beneficiary of Russian participation, but, ironically, has been highly dependent on Soviet-designed launch vehicles for its operation.) Much more ambitious projects, like Helium-3 mining, may have to be organized on a similar basis, just to raise the needed amount of capital. Under those conditions some states may have greater weight at the negotiating table than others, but in the final analysis their room for maneuver is limited because they can not go it alone.

Then again, the political will for such cooperation has proven disappointing time and again, subject to the same kind of backsliding as, well, space development. There seems to be little public interest in greater funding for government-run space programs, even as much of the public continues to see privatization as a panacea for public sector failure. Multinational corporations, the biggest of which have market values that dwarf the Gross Domestic Products of all but the industrial heavyweights, seem just as capable as governments of raising the capital the task requires, and the X Prize has given a public relations boost to enthusiasts of private efforts.

Yet, unfashionable as it may be to say so, there are grounds for doubt here as well. Despite its hype, business tends to walk beaten paths. (The privately funded SpaceShipOne carried pilots into space on suborbital flights in 2004--over four decades after Alan Shepard and Gus Grissom performed the same feat.) It also tends to seek government subsidies which render marketplace pieties dubious, especially when the risks are so large and the capital demands so great. We may, as a good many of the dreamers hope, see heroic venture capitalists blazing a path across the heavens--but can one totally discount the possibility of Halliburton landing an obscenely padded, no-bid, cost-plus contract to build the first Martian colony that helps sour public opinion on the enterprise?

In the end, despite assurances that the future of space development clearly lies in one direction or another, the field actually remains wide open. Whether it proves to be a scene of old-fashioned realpolitik where powers rise and fall in the manner described by Paul Kennedy, George Modelski and innumerable others; of international cooperation in which space development brings the world closer together; or of the predominance of private enterprise in a borderless market as broad as the reach of our spacecraft, how, and indeed whether, we go about the task will as much as anything reveal the shape of our economic and political future.

Wednesday, October 22, 2008

On Dark Ages

By Nader Elhefnawy

Originally published in THE FUTURIST (November-December 2007, pp. 14-19). Used with permission from:

World Future Society
7910 Woodmont Avenue, Suite 450
Bethesda, Maryland 20814 USA.

I spend a lot of time thinking about the future--maybe too much. As a professor of literature, I often teach and write about science fiction. As a writer on security issues, I'm often thinking about the shape of future war and future peace. In this kind of work it is routine for projections, planning documents, and studies to look to 2025, 2050, and even beyond. In the process they posit a future where science fiction has turned into science fact. Thinking about the future in such ways, and coming into constant contact with the thoughts of others about the same things, I find myself exploring the ways people used to picture the future, and all the things that didn't happen--the bad as well as the good.

Naturally, I can only wonder how people in the future will look back on the present--and about all those in the present who suspect there may be no one able to do so. During the last few years, there's been an explosion in books with words like "collapse," "catastrophe," and "dark age" in their titles. While millenarian religion always seems to be doing a brisk business, there is also no shortage of secular doomsday scenarios at any given moment.

A natural disaster like a large meteor impact or the eruption of a supervolcano might wreck the world in one fell swoop. (David Keys's Catastrophe, in fact, argues that a massive volcanic eruption in the sixth century did bring about the collapse of the ancient world.) The Cold War may have ended, but the risk of large-scale nuclear war remains, particularly the risk of a war beginning accidentally. (This almost happened in the "Norwegian rocket incident" of January 1995, when the Russian military mistook a weather rocket for a ballistic missile.) Relatively innocent scientific research might unleash a technological catastrophe on the world, high-energy particle accelerators tearing open the fabric of the universe, a tidal wave of tiny robots turning the planet into gray goo as Martin Rees describes in Our Final Hour.

A number of unhappy factors have combined in recent years to boost the discussion, however. One is concern about a shrinking supply of oil amid high energy prices and war in the Persian Gulf. Another is the destruction of the natural environment by the activity of a rapidly growing human population, and in particular a widening recognition of human-driven climate change. Still another is an apparent growth of irrationalism and a rejection of science, evident in religious fundamentalism, New Age fads and the like, the subject of Carl Sagan's last book, The Demon-Haunted World. While not comparable to concerns about a major nuclear war, terrorism has also fed such worries, with biological weaponry, computer attacks and so forth causing some to argue that a few quick blows could bring modernity crashing down all around us.

Conservatives may worry less about resource shortages or the environment, and view religiosity in any form as a positive development, but find other causes for worry. Population growth in and of itself also may not bother them much, but the disparities inside that growth often do. Low birth rates in the industrialized world and rapid population growth in poor countries sending waves of immigrants to the former cause them great consternation. They also worry about the widespread questioning of traditional attitudes toward nationalism, culture, race, sex, religion, capitalism and so forth, which they see as opening the gates to barbarians within and without.

Of course, there are also writers who go to the other extreme and dismiss such concerns completely. In The Idea of Decline in Western History, Arthur Herman promises to trace the history of the idea rather than pass judgment on it, but he ends up rejecting thinkers on the subject as a collection of pathetic neurotics and concludes his study on a triumphalist note.

The History of Civilization Collapse
While Herman may dismiss the idea, the fact remains that advanced societies have collapsed in the past and protracted "dark ages" have followed, and it seems only natural to ask why they did so. Why do the problem-solving abilities of societies give out? Why is it that instead of going on forever forward and upward, societies so often stagnate, decline and collapse, leaving behind little but ruins for archaeologists to pick through? In other words, was the process inevitable, or could something have been done about it? Learning the answer to that question might tell us which of the many seemingly catastrophic threats to our survival we should be most concerned about, or whether, as Herman argues, we aren't unnecessarily fixated on catastrophe.

As Herman's study attests, no small number of thinkers has attempted to address these concerns, especially during the last two centuries. Not every story those writers tell is the same, but there is a great deal of overlap in their accounts of particular declining societies, and declining societies in general. Values once adhered to seem irrelevant, and institutions that worked before no longer do so (or at least, it seems that way). Governments become less effective at collecting taxes from their citizens, and at providing them with the services that justified such exactions. Insecurity rises due to widespread crime, intensified class warfare, and fighting among elites themselves. Achievement in the arts and sciences drops off (or at least it seems that way). In the end a society is left susceptible to threats that it might once have coped with successfully, and the barbarians--once easily held at bay--are suddenly in the Colosseum.

Moral vs. Material Decline
While these thinkers recount many of the same incidents and trends, the theories they propose as to why these things happen vary widely. They do, however, tend to fall broadly into one of two categories--mystical explanations and materialistic ones. Many of the "mystical" writers are rightly criticized for being weak on cause and effect, but they often identify a culprit nonetheless, such as the exhaustion of a people's "life force," or the genetic impoverishment of a once-triumphant nation. Others point to a nebulous moral decline, or the replacement of an intuitive or spiritual approach to life by barren rationality, a phase that may initially have been fruitful but, carried far enough, means decadence. Oswald Spengler, Arnold Toynbee, Pitirim Sorokin, Christopher Dawson and many others developed theories along such lines. Their thinking has more recently been echoed by Pat Buchanan in books like The Death of the West.

The writings of the materialistic theorists are similarly varied, but they usually find economic explanations for decline. In recent years, scholars applying complexity science to the problem, such as Joseph Tainter in The Collapse of Complex Societies and Peter Turchin in War and Peace and War, have added considerable theoretical sophistication to this approach. It is by no means new, however, and Carroll Quigley's 1960 book The Evolution of Civilizations is an outstanding example.

For Quigley, the key to success or failure is a society's "instrument of expansion." This is a social mechanism enabling it to accumulate and invest resources in economic, political and cultural enlargement. Medieval feudalism, early modern mercantilism and laissez-faire capitalism are just a few examples of such systems. After early successes, these mechanisms produce diminishing returns, which clash with the rising expectations of a population that had likely been expanding up to that point. The resulting economic scarcity, insecurity and inequality lie at the root of the ills that follow, including the "moral decline."

Consider the case of ancient Rome, a popular one given the over-reliance of many of these writers on Classical history in developing their "universal" theories of civilizational rise and fall. Starting in the third century B.C.E., Roman agriculture began to shift away from a foundation of small, independent farmers to plantations worked by slaves. The farmers went into the cities--and the legions--where they participated in a sequence of brutal, class-driven civil wars and the conquest of the Known World, a process that destroyed the republic and ushered in the reign of the emperors. That reign became increasingly oppressive, the empire weaker and weaker economically, militarily, demographically and culturally, and in the end the barbarians overwhelmed it.

Writers of a more mystical bent see the formerly austere Romans corrupted by a loss of religious faith, an influx of foreign cultural elements and the temptations of wealth and urban living. The result is the popular image of depraved elites wallowing in cruelty, sensuality and luxury, while the rabble lived only for bread and circuses. Rather than enabling renewal, the spread of an otherworldly, pacifistic Christianity is commonly blamed for undermining the last of the original virtue of the Romans, providing an object lesson in the danger that alien ideas will fill a moral vacuum. The only wonder is that the empire lasted as long as it did, given the circumstances.

Economics-minded writers instead point to the limits of economic development for preindustrial, agrarian societies. The Roman Empire was sustained by territorial expansion, and especially the opportunities expansion brought to acquire slaves and plunder. These were eventually exhausted, however, and the empire was left managing many unprofitable territories that drained its resources. Attempts to redress the problem often worsened it, for instance the debasement of the currency (which set off a wave of inflation) and the increasing tax burden (which the wealthy shifted away from themselves and onto the poor, who had less and less to tax).

In response the government became increasingly heavy-handed, ineffective and torn by usurpers and civil wars, this instability rising right at the same time as the pressure from land-hungry barbarians. This strangled the empire's commerce and economic productivity, and in particular its insecure, overburdened farmers, often driven to abandon their land and turn bandit or join a spreading manorial economy. The resulting feedback loop of declining productivity, state weakness and insecurity drove the western empire to collapse.

Questioning the Inevitability of Civilization Decline
So, is the process of civilization decline inevitable, or can something be done about it? There are writers who argue that Rome's fall was indeed inexorable. Philosopher Oswald Spengler took the organic analogy of a civilizational life cycle to such an extreme that he mathematically charted the future of Western civilization through the third millennium. Still, even he recognized the possibility of societies arresting their own decline. Civilizations can bring much of their strife to an end by uniting in a "universal empire," the way that Rome united the Known World of its day, an idea that can also be found in Toynbee and Quigley. They may even enjoy a "golden age" of sorts, as Rome did in the second century C.E. under the "Five Good Emperors."

Such actions, however, are just stopgaps unless the underlying causes are dealt with. This is much more difficult to do, especially if one leans toward mystical explanations. Several writers, like Toynbee and especially Sorokin, see the only real way out in a religious renewal.

Many of the materialistic authors also offer a grim prognosis, but they are less prone to insist on the certainty of decay. Quigley, for instance, saw a way out in the replacement of a failed instrument of expansion. When feudalism failed in the fourteenth century, centralized, mercantilist nation-states appeared in Western Europe. When mercantilism hit a wall, financial capitalism came along. While he saw the Western world as having been in another such crisis since 1929, he did suggest a possible way out, based on molecular technology and renewable solar energy. (Intriguingly, many observers who have never read Quigley now regard molecular technology and solar power as the driving technologies of the future.)

While not framed as the narrative of a civilization's rise and fall, David Hackett Fischer's The Great Wave presents a pattern of crisis similar to the one Quigley described. He identifies crises in the fourteenth, sixteenth, eighteenth and twentieth centuries, each coinciding with a wave of inflation, the last ongoing at the time of his writing. Fischer notes, however, that better technology and organization each time around made the crisis less severe than the one that preceded it. Even the Great Depression of the 1930s saw nothing like the famines and epidemics that made Europe's population implode in the early fourteenth century.

Of course, the same technology and organization made the war that ended the decade the most destructive in history. Troubled societies usually have no shortage of astute observers diagnosing their ills and recommending workable solutions for at least some of their problems. The weak link in the chain tends to be politics, the capacity of societies to change their collective behavior when a given way of doing things stops working. This is very difficult to do in the face of old habits and vested interests.

This challenge may loom especially large for Americans, who may be more attached to recent attitudes and behaviors than any other major nation because of the cherished successes those approaches seem to have brought the nation in the twentieth century--global economic predominance, victory in wars hot and cold. There is also the very nature of those attitudes and behaviors. The brand of rugged individualism Americans celebrate sits uneasily with talk of a common good. Decades of culture war and market fundamentalism have also left their mark, the results memorably described in Morris Berman's Dark Ages America.

Today's generation appears to be one of cyberpunk anti-heroes, alienated and alone for all the promised connectedness of their technology, abiding by no rules in their scramble to survive and succeed, and incapable of even imagining a different sort of world. However, no cultural moment lasts forever, and it's not impossible that this phase has just about run its course.

In either event, the toughest part of any effort will probably not be the availability of wealth, technology or ideas, but getting societies to use these resources to take serious action. This will mean recovering lost social capital, not in the sense of bringing back a stifling conformity, but drawing people out of their solipsism. It will mean restoring rationality and depth to a political discourse divided among a confusion of ideologically slanted outlets preaching to their respective choirs and the superficial, tepid dialogue of the mainstream, and widening the too-narrow range of ideas that can get a hearing from a general audience. It will mean the cultivation of a mind-set that Thomas Homer-Dixon in his recent The Upside of Down terms "prospective," able to cope with uncertainty and complexity in its efforts to "prevent or forestall horrible outcomes," if necessary through fundamental, far-reaching solutions. And it will mean "idiot-proofing" those solutions so that they can survive the hostility of the vested interests which invariably appear.

As Quigley notes, it was not possible for state-building monarchs, the rising middle classes and rebels from the long-suffering peasantry to defeat the feudal aristocracy's resistance to change outright, but they did succeed in going around them, and built the modern world in doing so.

Tuesday, October 21, 2008

In The News

New OECD report on inequality in the advanced countries.

Strategic Deception and the Chinese Military Space Program

By Nader Elhefnawy
Originally published in the SPACE REVIEW, July 9, 2007

In May the Pentagon issued its Annual Report to Congress on the Military Power of the People’s Republic of China. The absence of an explicit overarching Chinese “grand strategy,” the ambiguity of China’s “no first use” policy on nuclear weapons, its red lines regarding intervention in Taiwan, and the vagueness of its definition of what would constitute an attack on its sovereignty or territory are all highlighted in the report. The same goes for the possibility that much of this is due not only to “uncertainties, disagreements, and debates that China’s leaders themselves have about their own long-term goals and strategies,” but “a deliberate effort to conceal strategic planning,” consistent with
the traditional roles that stratagem and deception have played in Chinese statecraft. Recent decades have witnessed within the [People’s Liberation Army] a resurgence of the study of classic Chinese military figures Sun-tzu, Sun Pin, Wu Ch’i, and Shang Yang and their writings, all of which contain precepts on the use of deception.
This report was followed up just a month later by Deputy Undersecretary of Defense for Asian Security Richard P. Lawless testifying before the House Armed Services Committee that there is a “deliberate effort on the part of China’s leaders to mask the nature of Chinese military capabilities.” Where military space activity is concerned, such thinking has been encouraged by China's actions over the past year—its reported use of a laser on a US satellite, its antisatellite missile test in January—and has already contributed to greater spending on military space programs such as space situational awareness. (See the Defense News article “China Sat Test Spurs U.S. To Boost Space Spending.”)

On the face of it, this all seems plausible enough, but it should be remembered that it is easy to make too much of the idea of a menacing Chinese strategic culture of deception. It plays into simplistic stereotypes about Eastern and Western strategic cultures that obscure far more than they reveal (a problem incisively examined in an article by Patrick Porter in the Summer 2007 issue of the U.S. Army War College Quarterly Parameters). While China’s dictatorship may be more opaque than many other governments, the reality is that institutional politics and government secrecy exist everywhere, and China certainly has no monopoly on deception, least of all in this realm. Indeed, the US’s own red lines regarding Taiwan were deliberately ambiguous between 1979 and 2001.

In fact, another act of distinctly non-Chinese, Cold War-era strategic deception suggests a different rationale for China’s military space policies. Back in the early 1980s, the U.S. Army rigged a series of ballistic missile defense tests to convince the Soviet Union that America’s efforts in this area had made much more progress than they actually had. (You can read about it in a report of the U.S. General Accounting Office, which you can find here.) The incident is a reminder that deception can work both ways: it can conceal a capability, as China has been accused of doing, but it can also conceal the lack of one. This makes it conceivable that the brash, provocative actions China has taken over the last year, which did anything but mask Chinese capability, are motivated by a desire to mask its comparative weakness instead.

According to a report issued by the Federation of American Scientists and the Natural Resources Defense Council last November, Chinese Nuclear Forces and U.S. Nuclear War Planning, China’s nuclear arsenal may be considerably more limited than Western experts generally believe. On the conventional level, China still has an air force in which MiG-19 and MiG-21 knock-offs are the most numerous aircraft, an army in which locally produced versions of the T-54 still comprise the bulk of the inventory, and a submarine force built around Romeo-class vessels (1950s-era technology, all of it). Moreover, even if its widening space surveillance capabilities, along with its missiles, lasers, and jammers, enable it to target US satellites, China’s own numerous satellites are at least as vulnerable to attack by far more capable US forces, including presently operational electronic warfare units and an array of US laser systems (like the Airborne Laser) due to be deployed in the years to come. Additionally, China’s overwhelmingly terrestrial counterspace forces would be susceptible to air and missile attacks by superior American forces.

In the end, the most that Chinese counterspace forces are likely to be for decades to come is a space-age version of the old idea of the fleet in being: a “risk” force unable to win a war, but capable of deterring an enemy through the possibility that it will exact an unacceptable price for victory. (See my own article on the issue, “Four Myths About Space Power.”) This, too, would not be unprecedented for Chinese policy, given that this is more or less the threat China can mount against Taiwan, which it is too weak to take by invasion (an assessment in which the Pentagon’s May report concurs), to blockade effectively, or even to beat down with non-nuclear missile strikes. Acts which send alarmists into a frenzy only enhance the credibility of such a strategy.

This possibility is especially worth considering since the trend in American history has been to overestimate the danger from potential opponents, often far beyond what prudence, or even strategic pessimism, calls for (a history examined by, among others, James Chace and Caleb Carr in America Invulnerable: The Quest For Absolute Security From 1812 to Star Wars). That pattern, if anything, makes it all the more likely that Chinese strategists would opt for such a course with respect to the United States. Despite the frequent caricature of Chinese strategists in the scholarship as obsessed devotees of Sun Tzu, it rarely occurs to commentators that they might be following Sun Tzu’s dictum of weakening an opponent by drawing his attention to a less relevant object, so as to leave the places where they really intend to strike unguarded: fixing US attention on Chinese military space capabilities, for instance, while making their real moves down here on Earth. China’s trade surplus with the United States, its amassing of vast reserves of foreign currency (dollars included) and financing of American borrowing, and its ability to combine its economic weight with the animosity of other states toward US policies to form alliances, may over the long run prove far more important to the balance of power between the two countries than its antisatellite test.
