I’m finishing up my translation of Clausewitz Book Two into Kid, and I was struck by the last few paragraphs of Chapter Five, including the passage below:
A far more serious menace is the retinue of jargon, technicalities, and metaphors that attends these systems. They swarm everywhere–a lawless rabble of camp followers. Any critic who has not seen fit to adopt a system–either because he has not found one that he likes or because he has not yet got that far–will still apply an occasional scrap of one as if it were a ruler, to show the crookedness of a commander’s course. Few of them can proceed without the occasional support of such scraps of scientific military theory. The most insignificant of them–mere technical expressions and metaphors–are sometimes nothing more than ornamental flourishes of the critical narrative. But it is inevitable that all the terminology and technical expressions of a given system will lose what meaning they have, if any, once they are torn from their context and used as general axioms or nuggets of truth that are supposed to be more potent than a simple statement.
He also talks about narrowness of thinking, the dangers of vanity in critical analysis, and misuse of historical examples in critical studies. I found it funny, in a dark way, how relevant this passage is to the jargon, sloppy thinking, and ineffectual analysis which are alive and well in contemporary writing on war, strategy, and security–never mind the frequency with which Clausewitz’s own words and ideas are taken out of context or contorted to fit a writer’s point. I guess some things never change.
In a painful public relations turn for Iran following its threat to close the Strait of Hormuz, US sailors (both Navy and Coast Guard) have rescued Iranian mariners twice in the past week. The first incident was a dramatic rescue from Somali pirates reported by C.J. Chivers.
In a naval action that mixed diplomacy, drama and Middle Eastern politics, the aircraft carrier John C. Stennis broke up a high-seas pirate attack on a cargo ship in the Gulf of Oman, then sailors from an American destroyer boarded the pirates’ mother ship and freed 13 Iranian hostages who had been held captive there for more than a month.
The rapidly unfolding events began Thursday morning when the pirates attacked a Bahamian-flagged ship, the motor vessel Sunshine, unaware that the Stennis was steaming less than eight miles away.
It ended Friday with the tables fully turned. The captured Somali pirates, 15 in all, were brought aboard the U.S.S. Kidd, an American destroyer traveling with the Stennis. They were then shuttled by helicopter to the aircraft carrier and locked up in its brig.
Yesterday’s rescue was a little less dramatic, unless you were one of the Iranian sailors on the sinking ship…
Pentagon spokesman George Little said Tuesday that the crew of a U.S. Coast Guard cutter rescued the mariners after getting distress signals from the Iranian cargo vessel Ya-Hussayn.
“It was hailed by flares and flashlights from the Iranian cargo dhow and the dhow’s master requested assistance from the cutter indicating that the engine room was flooding and deemed not seaworthy,” said Little.
The U.S. Navy says the U.S. Coast Guard transferred the Iranian crewmembers to safety aboard the U.S. cutter Monomoy. A Navy statement quotes the owner of the Iranian vessel as thanking the U.S. seamen for rescuing the sailors, saying that without the Americans’ help, they would be dead.
Both of these incidents point to the professional nature of our naval forces. The Navy and Coast Guard should be proud of the way they represented their country and the goodwill they generated in the Middle East and specifically inside Iran. While this clearly doesn’t fundamentally change anything about the relationship between the US and Iran, it may help to ease tensions.
<<Speaking of tension, I want it to be noted that I have managed to make it this far without cracking on the Coast Guard in any way or questioning why exactly they were 5000 miles away from any US coastline>>
However, the part of the story that I specifically want to focus on is from the official blog of the US Navy.
When English and Arabic bridge-to-bridge hails from Kidd failed to sort out the situation aboard the pirated fishing vessel Al Mulahi, the CO of USS Kidd, Commander Jen Ellinger, and her team cleverly thought to try other languages. Chief Petty Officer Jagdeep Sidhu, a gas turbine electrician Chief from India who speaks Hindi, Urdu, and Punjabi, was able to communicate with the Captain of the pirated vessel in Urdu, which the [Somali] pirates did not understand; this tipped the Kidd that the crew was being held hostage. Other languages spoken by Kidd’s crew include Hindi, Urdu, Punjabi, Cambodian, Thai, Spanish and Chinese, to name some.
This jumped out at me for a couple of reasons. First, and I promise not to get all Lee Greenwood on you, but there is something purely American about the fact that our Navy is literally made up of people from around the world. This should make you proud. Second, Chief Sidhu doesn’t have a linguist MOS (yes, I know the Navy actually calls them rates, but nobody outside the Navy understands this and it makes the rest of us crazy when you try to explain it). Third, it doesn’t seem like the languages spoken by the crew were in any way tailored for the region, since the list is missing a couple of key ones like Farsi and Pashto. Now this could simply be me reading too much into the text of the blog, but the implication is that it was pure luck that Kidd sailors spoke a language that allowed them to communicate with the Iranian captain (bonus that it was a language not spoken by the Somali pirates). However, without this ability to communicate with the captain of the hijacked vessel, the whole situation could have played out very differently.
The issue of language seems like an important one to highlight since the President and Secretary of Defense have just released the newly tailored ‘Strategic Guidance’. By any reading of this document, it pivots on an increased emphasis on the Navy with a focus on the Pacific and a step away from “large land wars in Asia,” to quote a certain former Secretary of Defense. While there is much to discuss in this document (go read Gulliver’s take here and here and Jon Rue’s take on it here), there seems to be an opportunity to take some lessons that our COIN forces have been learning (often the hard way) over the last decade.
One of those lessons is the importance of understanding local languages and the cultures we operate in. In this regard we were horribly unprepared for our operations in Afghanistan and Iraq. Organizationally, we made strides in these areas by increasing access to linguists (both military and civilian), standing up the Human Terrain System, and eventually deploying so often that many military personnel learned local languages and customs by osmosis. On balance, these all had positive effects on our cultural understanding, but they were too little, too late. Unfortunately, it does not appear that the Army and Marine Corps will be able to retain these skills since they have not been properly institutionalized. According to a set of GAO reports released in 2011, as we were in the process of winding down operations in Iraq, the Army and Marine Corps are still not properly tracking and sustaining language training.
The Army and Marine Corps have not developed plans to sustain language skills already acquired through predeployment training. The services have made considerable investments to provide some service members with extensive predeployment language training. For example, as of July 2011, over 800 soldiers have completed about 16 weeks of Afghan language training since 2010 at a cost of about $12 million. DOD and service guidance address the need to sustain language skills and the DOD strategic plan for language, regional, and culture skills calls for the services to build on existing language skills for future needs. However, we found that the services had not yet determined which service members require follow-on language training to sustain skills, the amount of training required, or appropriate mechanisms to deliver the training. Although informal follow-on training programs were available to sustain language skills, such as computer-based training, these programs were voluntary. In the absence of formal sustainment training programs to maintain and build upon service members’ language skills, the Army and Marine Corps may miss opportunities to capitalize on the investments they have already made to provide predeployment language training for ongoing operations.
So, this is where we come back to the Iranian fishermen and the US Navy. Barring something unforeseen, in the very near future we are not going to have hundreds of thousands of soldiers and Marines conducting large scale operations throughout the Middle East. That means that within the traditional military (ignoring Special Forces for the purpose of this post) a much larger portion of the “Hearts and Minds” burden will fall on the US Navy. I have no idea what the language and cultural training standards currently are for the US Navy or how well prepared they are to interact with the populations that they may encounter. However, as we pull back from the Middle East and refocus on the Asia Pacific, it seems very likely that our opportunities to interact positively with populations that are skeptical of or downright hostile to the US will be reduced. That means that when we do have those opportunities to interact, such as with the Iranian sailors over the last week, it is critically important that those interactions be positive.
My last post for Gunpowder & Lead began with the entirely accurate observation that few forms of writing are consistently less satisfying than “five myths” pieces. Several colleagues — including Gulliver at Ink Spots and even G&L‘s own Sky Gerrond — took this to mean that I simply dislike listicles. Not so. I have nothing against listicles, so long as one understands the limitations of that genre, but find that “five myths” pieces tend to be a uniquely weak form of writing and argumentation. All of which is a rather long wind-up to explain why today you’re getting my own listicle, on the five trends that are likely to shape the U.S.’s national security environment over the course of the coming decade, through 2020.
#5: The U.S.’s Strategic Pivot Toward the Pacific
President Obama’s visit to the Pentagon’s briefing room to announce the U.S.’s new strategy for a scaled-down military was in fact the first such visit by a sitting president. His presence there was fraught with symbolism that was matched by the significance of his announcement. The Telegraph goes so far as to argue that the strategic move toward the Pacific that Obama announced is of historical significance: “Future historians will probably conclude that this was the week when America’s entire foreign and defence strategy pivoted decisively away from Europe and towards the Pacific. More ominously, it might also mark the onset of a new, if concealed, arms race between the U.S. and its aspiring rival, China.”
This move toward the Pacific in America’s military and strategic posture is clear. Data points demonstrating this shift can be seen in the coming reduced U.S. military presence in Europe, as well as a new forward base in Australia to which thousands of American troops are headed. Clearly, as most every commentator on these issues has noted, the containment of China is one of the reasons for the U.S.’s evolving military posture — or, to use one of those rare Friedman-isms that is actually useful, perhaps this posture should be better understood as “containment-lite.”
I put the U.S.’s turn toward the Pacific as number five on my list because it represents a conventional set of national-security problems: competition between nation-states, perhaps even great power rivalry. But I think that the national-security environment over the next eight years is in fact going to be defined by newer issues, the kind of concerns that don’t neatly fit within traditional security paradigms. In part, I think the Pacific is unlikely to be the security issue that characterizes this decade because I don’t expect the U.S.-China rivalry to sharpen significantly in the next eight years. Obviously, there are wildly varying estimates of the future of Sino-U.S. relations among analysts. But while my views are subject to evolution as the facts change, there are a few reasons that I don’t think this issue will dominate the coming decade.
The significant economic interlinkages between the U.S. and China give both countries an incentive to avoid, say, actual shooting wars. But more significantly, it seems that China has more to fear from the new security environment than does the U.S. Internal fragmentation is a real possibility for China, something that is likely to constrain the country in dealing with America. This is because it seems that China is keenly aware, strategically, of the perils of diverting valuable resources toward military confrontation with the United States — including arms races that fall short of escalation to violence. An example can be glimpsed in China’s nuclear arsenal. For about twenty years after it became a nuclear power, China essentially lacked a second-strike capability. Though it has tried harder to establish a deterrent force since around 1985, it has done so at a rate that some observers consider inexplicably slow. One might conclude from this example that China doesn’t see nuclear weapons as a powerful tool of statecraft, but I think the deeper lesson concerns China’s decision-making about its management of finite resources.
If there were a major conflict between China and the U.S. during this decade, the most likely flashpoint is one of the other major trends I will discuss shortly, natural resource scarcity.
#4: Technological Changes Empowering Small Groups and Non-State Actors
Powerful examples of how technological changes have empowered small groups and non-state actors emerged over the past year, in the form of the “Arab Spring,” the August riots in Britain, and to a lesser extent the Occupy movement. Observers have widely varying views of the impact of the Arab Spring and Occupy movement, with passionate voices on both sides of those developments (seeing them as net positives or negatives), but it is worth noting that thus far the impact of technological change has played out in largely, though not entirely, non-violent ways.
The Arab Spring has, of course, not been entirely non-violent, as events in Libya make clear. But generally observers have interpreted this sequence of events as organization empowered through technology for positive ends. It is a rare individual indeed who will shed a tear for the deposed Arab dictators (although the situation of minority religious communities in these countries is a very real concern). But advances in communication technology that allow more effective organizing can also be used to advance ill intentions. A good example of technologically-empowered organizing for a far less noble cause can be seen in the riots that rocked Britain for four days back in August. In a must-read article published last month, Wired lucidly explains the role of BlackBerry Messenger in stoking that unrest. For example, Wired details a scene from Enfield that makes clear the advantages enjoyed by hyper-networked rioters (told through the eyes of Nick de Bois, one of Enfield’s MPs):
De Bois was standing outside the sealed-off zone, behind one line of police, in an open area that led to the train station. As he watched in amazement, more and more people—some disembarking trains at the station, some stepping out of cars—continued to pour into the plaza. Riot police were convoying in, too, but their numbers couldn’t possibly keep up. And even if they did, it was impossible to definitively separate the would-be rioters from the bystanders.
Right behind a line of armor-clad police who had successfully contained a riot, this new crowd of hundreds was gearing up to touch off a second riot. As 7 pm approached, face coverings went up, and a small group walked past de Bois with a crowbar. Gangs began to break windows throughout the plaza—one local jewelry store lost nearly $65,000 in stock. Police would descend on a group, but then the crowd would disperse, only to reconstitute itself someplace else a few minutes later. Part of the issue was a peculiarity of British policing: Largely because most cops lack guns, they can’t easily carry out mass arrests, even in emergencies. Instead, each arrestee is physically accompanied by individual officers for booking. With their numbers already stretched thin, the police could not take looters off the streets without further depleting their own ranks.
But there was also something strange about the character of this riot, and these rioters—something that seemed to make the violence unstoppable. At base, it was their confidence: their surety that, as they streamed out of their cars and trains, or as they milled around in small groups, or even after they were dispersed by police, they would always find one another in sufficient numbers.
In the U.S. we have also seen “flash mobs” used for robberies in at least five cities. The use of technological empowerment for ill ends is likely to be an increasing issue — particularly if our political system remains ineffective, and discontent continues to rise. Moreover, this kind of unrest can be exploited by a variety of bad actors.
I should note also that the technological empowerment of non-state actors feeds into the reason I argue that the potential for Chinese internal instability may deter it from significant conflict with the United States. In 2010, for example, China experienced “180,000 protests, riots and other mass incidents—more than four times the tally from a decade earlier.” And, as the Wall Street Journal notes, that count alone “doesn’t tell the whole story on the turmoil in what is now the world’s second-largest economy.”
#3: Political Dysfunction
Political dysfunction, as noted above, can be an accelerant for technologically-empowered non-state actors using the tools at their disposal to cause chaos. If people lack confidence in the government’s ability to govern or to reform itself, they may resort to self-help measures.
Robert Gates gave an important speech in Philadelphia back in September, shortly after he stepped down as the U.S.’s defense secretary, in which he said we are now in “uncharted waters when it comes to the dysfunction in our political system.” Gates outlined three major drivers of this predicament:
- A redistricting process that has created an increasing number of safe seats for both parties in the House of Representatives. As a result, the primaries in these districts are more important than the general elections, and “candidates must cater to the most hard-core ideological elements of their base.”
- The erosion of consistent strategy for addressing the critical issues that our nation faces. Gates notes that the U.S.’s strategy remained relatively constant through the Cold War, even through leaders as different as President Carter and President Reagan. In contrast, Gates stated, “when one party wins big in a ‘wave election’—of which there have been several in recent election cycles—it typically seeks to impose its agenda on the other side by brute force.” This makes consistent strategy more difficult, and thus erodes the U.S.’s ability to address the major challenges it faces.
- An increasingly partisan media in which extreme positions are given more prominence. Gates stated: “When I entered CIA 45 years ago last month, three television networks and a handful of newspapers dominated coverage and, to a considerable degree, filtered extreme or vitriolic points of view. Today, with hundreds of cable channels, blogs and other electronic media, every point of view, including the most extreme, has a ready vehicle for wide dissemination. You can’t reverse history or technology, and this system is clearly more democratic and open, but there is also no question that it has fueled the coarsening and, I believe, the dumbing down of the national political dialogue.”
One interesting aspect of Gates’s last point is that it again represents a system that is having trouble adapting itself to the changes wrought by technological advances — in this case, how advancing technology changes the media environment. All of this amounts to an erosion of the moderate center, which Gates calls “the foundation of our political system and our stability.” If we have a government that cannot govern effectively, it may find itself unable to effectively address the various other challenges that comprise this list.
#2: Natural Resource Scarcity
The impact of natural resource scarcity can be discerned in multiple areas, but the potential for steeply rising oil prices is of particular importance. Oil prices are currently at their highest level ever for this time of year, and the U.S. may well see extremely high oil and gas prices over the course of 2012.
There are a couple of implications to rising oil prices. First, rising prices risk economic whiplash. Oil hit over $145 a barrel in July 2008, a few months before the U.S. economy collapsed. I am obviously not blaming oil prices for what transpired in September 2008: the sub-prime mortgage crisis was the proximate cause. But just as clearly, high oil prices — which had the U.S. sending over $500 billion a year overseas — limited the American economy’s flexibility in dealing with the other challenges and crises. Given the U.S.’s dependence on the automobile, and also on imported oil, how long can the current economic recovery continue while prices rise?
Also, natural resource scarcity, and high energy prices in particular, drive up the price of food. Rising food prices are certainly felt in the United States, but they are felt even more acutely overseas. One of the driving factors behind the Arab Spring was in fact rising food prices, and the difficulty citizens were experiencing in having their basic needs met. When sky-high expectations (as have undoubtedly accompanied the Arab Spring) go unfulfilled, extreme ideologies can take hold. So this overarching trend of resource scarcity may help to breathe new life into the major challenge the U.S. focused on in the past decade — al Qaeda — at a time when many analysts are all too eager to declare it dead.
#1: America’s National Debt
The national debt constrains America’s ability to deal with all of the various significant challenges that it now confronts. It is a national-security issue in itself; indeed, I agree with former Joint Chiefs of Staff chairman Adm. Michael Mullen that the national debt is the top national security threat that we face. And our debt continues to rise despite the current round of government cutbacks — thus indicating that we will face steeper cuts in the future.
The debt limits our ability to project power and deal with challenges in multiple parts of the world. It presents challenges at the global, national, and local levels. When the national debt is viewed in light of technological changes that can facilitate unrest, a feedback loop could emerge. Government cutbacks may drive up unemployment and force scaled-back social services, which can drive unrest (making people feel they have less to lose by rioting, for example) — and in turn, these cutbacks mean that the state has less capacity to undertake policing measures against increasingly organized forces of unrest, and less capacity to repair damages thereafter.
So these problems very much interrelate. Further providing a perspective on our national debt, Harvard University historian Niall Ferguson wrote in 2009 that America’s “ability to manage its finances is closely tied to its ability to remain the predominant global military power.” Not mincing words, Ferguson added, “This is how empires decline. It begins with a debt explosion. It ends with an inexorable reduction in the resources available for the Army, Navy, and Air Force.” This is why, Ferguson says, observers are correct to worry about the U.S. debt crisis. “If the United States doesn’t come up soon with a credible plan to restore the federal budget to balance over the next five to 10 years, the danger is very real that a debt crisis could lead to a major weakening of American power.”
I was all set to offer up my own thoughts on the results of DoD’s strategic review, but came across something that I fear will be proven correct.
Overall, the new strategy is still one of champagne tastes on a beer budget. It requires the U.S. military to be capable of too many missions and to do too many things. While it proposes being more judicious in our choices of where, when and how to intervene abroad, no administration has demonstrated any self-discipline in this area. We have 100 U.S. soldiers in Uganda. If we cannot even see our way clear to leaving the Lord’s Resistance Army unmolested, where won’t we go and who won’t we fight?
I didn’t agree with much else Dan Goure wrote in the post from which the above paragraph was taken, but it was worth reading for that.
As my friend Gulliver notes in his excellent review of the review, this “strategy” doesn’t appear to set any priorities. I fear that I’ll be lumped in with the libertarian set, or worse, Ron Paul, but this so-called strategy and the idea of significantly reducing the growth of the Department of Defense is meaningless unless we’re prepared to revisit our assumptions on the utility of military force. Moreover, we have to rethink what constitutes “vital American interests” when considering military action. I propose we drop the ‘vital’ from that cliché – it assumes interests are automatically at stake, which is not a valid assumption, and frames the choice as one of vital interests or just interests. Instead, we should be thinking in terms of interests or no interests. Intervening in, say, Libya is either in our interest or it isn’t.
The strategic review is meaningless because as Goure notes, no administration has demonstrated any self-discipline in choosing where, when, and how to intervene abroad. I see no evidence that this will change in the future. The War Powers Resolution was meant to provide a check, but if the current Congress is the norm, then the President will be able to do whatever he/she wants. So, although this review is supposedly setting out a roadmap for a leaner military and a smaller Department of Defense, force is still likely to be a growth industry.
Few forms of writing are consistently less satisfying than “five myths” pieces. The genre, by its nature, tends toward shallow analysis and the propagation of conventional wisdom under the guise of puncturing conventional wisdom. But even for a weak genre, Fawaz Gerges’s new piece at the Huffington Post is noteworthy for the way it gets basic facts wrong, couples sweeping epistemological errors with an overarching arrogance, and erects its own myths while purporting to cut down “fantasies” about al Qaeda.
Gerges’s piece begins on a bad note, asserting without explanation that the recent uprisings in the Arab world have “hammered a deadly nail in the coffin of a terrorism narrative which has painted Al-Qaeda as the West’s greatest threat.” This statement, expressed with such certitude, represents a gigantic unproven assumption about which multiple Ph.D. dissertations could be authored. But the piece gets even worse in the second paragraph, in which Gerges declares: “Shrouded in myth and inflated by a self-sustaining industry of so-called terrorism ‘experts’ and a well-funded national security industrial complex whose numbers swelled to nearly one million, the power of Al-Qaeda can only be eradicated when the fantasies around the group are laid to rest.” Let’s leave aside the fact that, as Will McCants points out, it is laughable that Gerges places himself outside the “terrorism industry”: this is at its essence dishonest argumentation. Gerges is stating that all who disagree with him, by necessity, have suspect motives and should be distrusted.
That is a remarkable statement, especially because — like so many “five myths” pieces — Gerges in fact peddles several pieces of conventional wisdom while insisting that he is the one puncturing widely held myths. After all, when we have an administration claiming that al Qaeda has “been reduced to just two figures whose demise would mean the group’s defeat,” the idea that al Qaeda is dying or dead isn’t exactly revolutionary. Perhaps, rather than impugning the motives of those who do not share his outlook, Gerges should be more modest in understanding that many widely-held assumptions of the past decade have been proven wrong — and Gerges himself is no exception with respect to having a record of botched predictions. Of course, the fact that Gerges has been wrong before in pronouncing al Qaeda dead doesn’t mean that he won’t be right one day. So let’s take a look at Gerges’s arguments, and some of the “myths” that he punctures.
Myth: Al Qaeda Has Been Operational for More Than Two Decades
This is by far the most puzzling of Gerges’s various “myths.” He writes that, contrary to “the conventional terrorism narrative,” al Qaeda “has not been a functional organization with the goal of targeting the West for the past 20 years.” The reason behind this argument is that no leading figures within al Qaeda called for targeting the U.S. at the end of the Afghan war in 1989. Indeed, Gerges writes: “Even after the catalyst for change in bin Laden’s thinking — the American military intervention in the Gulf in 1990 and its permanent stationing of troops in Saudi Arabia — the group did not translate this hostility into concrete action. Rather, it was during bin Laden’s time in Sudan in the mid-1990s where [sic] he combined business practices with ideological indoctrination.”
The passage is puzzling because it is dead wrong. Contrary to the assertion that al Qaeda has not targeted the West “for the past 20 years,” it was exactly 20 years ago that al Qaeda first translated its hostility into concrete action (or, to be more precise, 19 years and one month ago). In 1992, al Qaeda orchestrated a December bombing of two hotels in Yemen that housed U.S. soldiers en route to the Horn of Africa for Operation Restore Hope (a U.N.-sanctioned humanitarian mission to Somalia). Al Qaeda also took concrete action by sending military trainers to Mogadishu prior to the October 1993 downing of a U.S. helicopter there. Most observers are skeptical that these trainers played a role in this infamous incident, but my point is not that al Qaeda got results: rather, the point is that al Qaeda did indeed take “concrete action” twenty years ago, a fact that isn’t difficult to ascertain.
Myth: While Al Qaeda Central Suffered a Defeat with the Loss of Bin Laden, Local “Branches” Will Continue to Try to Attack the West
This is another puzzling “myth” for Gerges to try to bust. Here is Gerges’s complete refutation of the idea that branches of al Qaeda will continue to try to launch attacks:
The material links and connections between local branches and Al-Qaeda Central are tenuous at best: far from being an institutionally coherent social movement, Al-Qaeda is a loose collection of small groups and factions that tend to be guided by charismatic individuals and are more local than transnational in outlook. Most victims are therefore Muslim civilians. Further, these branches tend to be as much a liability for the long term strategic interests of Al-Qaeda Central as they are assets. Abu Musab Zarqawi, the emir of Al-Qaeda in Iraq, proved to be Al-Qaeda Central’s worst enemy. He refused to take orders from bin Laden or Zawahiri and, in fact, acted against their wishes, according to his own desires. Like Zarqawi, local groups or franchises — like Al-Qaeda in the Arabian Peninsula (AQAP) or Al-Qaeda of the Islamic Maghreb — which the terrorism narrative often paints as being closely aligned and commanded by Al-Qaeda Central in fact have proven repeatedly that they run by their own local and contextualized agendas, not those set among the inner sanctum of Al-Qaeda Central.
Okay, what is missing from his refutation of this “myth”? That’s right — any refutation at all. Look closely: the claim that local branches are not closely linked to al Qaeda’s central leadership does not mean that they won’t continue to try to attack the West. Gerges even names AQAP as a group that isn’t “closely aligned and commanded by Al-Qaeda Central” — but what has AQAP done over the past two years? It has successfully placed three bombs on board airplanes destined for the United States: one in the attempted Christmas Day bombing of 2009 and two in the subsequent ink cartridge plot of October 2010. Gerges in fact mentions the Christmas Day bombing without noting that it was orchestrated by AQAP, which supposedly is not going to try to attack the West — perhaps because mentioning that salient fact would puncture his own myths. (It is worth further noting that AQAP’s emir, Nasir al Wuhayshi, was an understudy of bin Laden’s, and AQAP was set up in a manner similar to al Qaeda central. For that reason, Leah Farrall, a former senior counterterrorism intelligence analyst for the Australian federal police, wrote in Foreign Affairs that AQAP is best understood as a branch of al Qaeda rather than a franchise. After all, it “was created by, and continues to operate under, the leadership of core al Qaeda members.”)
Moreover, is it actually true that “the material links and connections between local branches and Al-Qaeda Central are tenuous at best”? How does Gerges know this? The notion that the connections between al Qaeda central and its affiliates were tenuous had become the conventional wisdom among terrorism analysts (or, some might say, had hardened into a myth) before bin Laden was killed. And yet, as an Associated Press report published shortly after bin Laden’s death noted, analysts who examined the information recovered from his Abbottabad compound came to believe that bin Laden “was a lot more involved in directing al Qaeda personnel and operations than sometimes thought over the last decade,” and that he had been providing strategic guidance to al Qaeda affiliates in Yemen and Somalia.
So, analysts were wrong about al Qaeda’s central leadership being operationally irrelevant before. Do we somehow know that the links are tenuous now? The answer is a resounding no. Let me quote myself from December 30: “The methods of communication now being used by Zawahiri are the kind of methods the world’s monarchs would have used 200 or 300 years ago: couriers. This avoidance of e-mail and electronic transmissions that could be uncovered by SIGINT limits our visibility of the network.” In other words, the evidence most definitely is not there to bear out Gerges’s claim about the lack of relation between AQ’s core and affiliates — and those who previously adhered to this view turned out to be wrong when the Abbottabad documents gave us more visibility. This reinforces my point about the need for modesty when making definitive judgments — and an important part of being modest is distinguishing between what one knows and what one does not.
Myth: The War on Terror Has Made Americans Safer
As Jeff Emanuel noted on Twitter, I have also argued that the “war on terror” has not made us safer — so I am not going to dispute that our approach has been problematic. Hell, I wrote a whole book on this point. But even here Gerges manages to completely misrepresent the extant literature. “U.S. counterterrorism measures like drone attacks further fuel anti-American sentiments and calls for vengeance,” he writes. “Yet neither the U.S. national security apparatus nor terrorism experts acknowledge a link between the new phenomenon of bottom-up extremism and the U.S. War on Terror, particularly in Afghanistan-Pakistan.”
Wait, what? Terrorism experts and the “national security apparatus” do not “acknowledge a link between the new phenomenon of bottom-up extremism and the U.S. War on Terror”? Well… in July I stated in an interview, “Turn to Somalia, where we’re escalating drone strikes. What we’re doing there is a tremendous mistake…. Look at the history of al-Qaida in the Arabian Peninsula. It found aid and comfort from the tribes in Yemen after U.S. airstrikes ended up killing a number of tribal leaders in the hunt for Anwar al-Awlaki. That’s a result of us not really knowing the terrain. We carry out strikes without knowing the second-and-third order effects of what’ll happen.” Then there is the well-known New York Times op-ed from David Kilcullen (a fixture of the U.S. national security apparatus) and Andrew Exum, examining drone strikes in Afghanistan-Pakistan. It is unsubtly entitled “Death from Above, Outrage Down Below.” There is terrorism expert Marc Sageman, who writes in Leaderless Jihad of the connection between bottom-up extremism and the war on terror, stating that the “presence of even one American soldier in uniform in Iraq will trump any goodwill policy the United States attempts to carry out in the Middle East.” And these are just a few examples off the top of my head; I could likely provide more than a hundred quotes on this point from terrorism analysts and others in the “national security apparatus.” If Gerges is going to argue that an entire body of analysts has a gigantic blind spot, he should have at least a passing familiarity with what those analysts actually say.
Conclusion: So, Is Al Qaeda Dead?
Gerges’s piece ends where it began, with the assertion that the Arab uprisings have killed al Qaeda. “Tyranny, dismal social conditions, authoritarian political systems, and the absence of hope provide the fuel that powers radical, absolutist ideologies in the Muslim world,” he writes. “If the Arab awakenings of the past year manage to fill the gap of legitimate political authority, they will annihilate the last dregs of Al-Qaeda and like-minded local branches.”
Maybe? It’s not clear how awakenings in the Arab world can “annihilate” al Qaeda’s central leadership in Pakistan. How will the Arab uprisings annihilate al Shabaab in Somalia? And the chaos in Yemen has resulted in anything but an annihilation of AQAP. I have written previously about why the “Arab Spring” doesn’t inevitably sound the death knell for al Qaeda, and I won’t repeat those arguments here. Suffice it to say that it’s ironic that a piece asking us to critically assess the conventional wisdom to puncture fantasy in turn offers up its own set of seemingly unexamined myths.
This is not to say that one cannot reasonably argue that al Qaeda is in decline. Will McCants and William Rosenau make reasonable arguments to that effect here, and other scholars whom I respect have advanced similar points. But in engaging in this debate, it is vital to be humble in assessing what we know and what we do not, and to be careful with the facts we bandy about. Gerges’s article is a perfect model of how this discussion should not proceed.
During my radio interview today on Jon Justice’s show, Jon asked an interesting question. “Are we in a place in history right now,” he said, “where we won’t see a conflict on a massive scale because of how developed the nations are? Any type of major event would bring so many factions into it that we almost couldn’t get out. I just can’t envision us getting into another World War-type scenario.”
It was an interesting question — and a fair one that will certainly be asked again in an era of declining defense budgets — but I had to answer with a rather emphatic no. For a bit of historical perspective, about 100 years ago prominent European liberals thought that war had become increasingly unlikely because the intertwining of European economies made warfare prohibitively expensive. This argument was made most prominently by eventual Nobel Peace Prize winner Norman Angell in his 1910 book The Great Illusion. The First World War, of course, disproved this rather optimistic assessment of the future of armed conflict. But in another way, Angell was right: World War I was prohibitively expensive, a war in which it can be said that there were no real victors.
Today, we can rather confidently predict that another major conflict would be incredibly costly to whoever takes part. Certainly the U.S. will be quite reluctant to commit its forces to another major conflict anytime soon, given the astronomical costs of the Iraq war, and the massive debts that the country has incurred. I think this reluctance is justified: that is one reason that I opposed from the very outset the U.S. military intervention in Libya (a foreign policy decision that increasingly appears to have had significant negative unintended consequences). But one resounding lesson of the past hundred years is that unpredictable things will happen when it comes to armed conflict.
Most recently, very few strategic thinkers envisioned an event like 9/11 in advance; and indeed, these attacks heralded the rise of violent non-state actors as a strategic challenge, even to the world’s most powerful country. As I have argued, violent non-state actors are likely to pose an increasing rather than diminishing challenge over the course of the coming decade. And the fact that violent non-state actors are a significant force provides an answer to Norman Angell’s basic argument, as it might be applied today: though major conflicts are likely to be terrible for nation-states economically, non-state actors’ interests are not tied to those of the countries in which they find themselves. They won’t be deterred by the same strategic factors that might deter nation-states. As I argued in my latest book, the economic costs of conflict can in fact work to violent non-state actors’ benefit: one of al Qaeda’s key strategic goals over the past decade was to grind the U.S. down economically, and the jihadi group was quite successful in doing so.
Moreover, even outside the sphere of non-state actors, history rarely proceeds in predictable patterns. Multiple developments could suddenly usher in large-scale armed conflict: tensions fueled by resource scarcity, the escalation of civil wars or non-state violence into full-blown state-to-state fighting, a surprise attack on the global supply of oil, the rise of expansionist ideological parties in any number of vital countries, even a miscalculated nuclear launch in South Asia or elsewhere.
The unpredictability of armed conflict is one reason that, when it comes to current debates about counter-insurgency, I’m skeptical of the idea that the singular lesson of our recent experience is that we should never again put ourselves in a position where we are fighting against an insurgency. Surely, the position that we should be extremely hesitant to do so is reasonable, worthy of discussion; so too is the position that our current military posture is not worth its costs. But, at the end of the day, is never again getting involved in a counter-insurgency, or in another large-scale armed conflict, entirely our choice to make?
I just finished Will McCants’ lovely little book, Founding Gods, Inventing Nations (not to be confused with his earlier work, Much Ado About Prom Night) (okay, so that’s probably a different Will McCants) (or maybe that’s just what he wants us to think?).
Anyway! I think Caitlin’s planning on a real review, which is good, because one of us has studied religion and history extensively and one of us is me. And Founding Gods deserves a real review, which this isn’t. Instead, I’d like to offer some disjointed thoughts and modern parallels that I’m sure Will did not intend anybody to make. Sorry, friend. You should’ve known better.
Caveat lector of this blog post: I’ve taken a lot of cold medicine just before writing this. Caveat lector of Founding Gods: you need a dictionary handy should you wish to read this (which I recommend you do!) – there are many, many Big Words, some of which Will probably made up. You may wish to have Wikipedia close by as well, unless you’re very familiar with the histories of most early civilizations. Also, I recommend reading this backwards – read the book’s conclusion first, then go back to Chapter 1 and read the conclusion of that, then read the full chapter, etc. Founding Gods is short but dense, and it’s easy to get caught up in the details and lose sight of the broader arguments. This is not the Will McCants who rides around in a banana – this is Serious Academic Will McCants, though he does use the phrase “new kids on the Mediterranean block” and makes a sly reference to “winter is coming” (p. 15) (apparently the ancient conception of that idea requires people to build greenhouses, not armies and fortresses – see, you’ll learn things!).
Will’s central idea – that elites used their interpretations of the origins of culture and civilization to shape their political, social, and intellectual environment – seems fundamentally reasonable. I have no basis of knowledge from which to evaluate his scholarship or evidence as presented, but if the origins of a cultural artifact or technai matter, then it’s logical to assume that elites will interpret or modify those origins to suit their needs. In antiquity, the question of whether a technology or type of knowledge was human-derived (and therefore less acceptable and possibly sinful) or taught to humans by a divine being (and therefore assumed to be beneficial to humanity) was worthy of debate, because the origin of the technology determined the acceptability of its pursuit or study.
There’s certainly modern evidence that origins matter. We’re unlikely to debate the divinity of the origins of modern technology now, of course, but the question of etiology, or origination, remains salient. While I don’t wish to engage in the specific debate, the recent back-and-forth between Andrew Sullivan and Ta-Nehisi Coates over the origins and use of intelligence research (1 2 3 4 5 6 7) seems to parallel ancient debates over the acceptability of the use of certain forms of knowledge. While antiquity dealt with more abstract and undocumented innovations such as the invention of clothing, in the Sullivan-Coates debate, the specific question of whence arose research into human intelligence is knowable. For Coates, the ahistoricity of Sullivan’s initial argument is abhorrent, because the history of the research is, broadly speaking, evil: its originators pursued it for racist ends to determine who is considered worthy of society’s resources, and by that token future research into the subject should be pursued carefully, with deep sensitivity to those it affects. For Sullivan, the history should be noted but should not be allowed to preclude further research. The origins of this research are less important to him, but by engaging and ultimately dismissing Coates’ argument that the research was initially undertaken with evil intent, Sullivan demonstrates the importance of etiology.
Maybe there’s another modern parallel in genetically modified food; there are differing opinions on its origins – OMG Monsanto is evil! v. OMG Monsanto will save the world! – and there are legitimate debates to be had about its use and the implications thereof, which may also feed into value judgements about its origins. In addition to the exchange-of-information value of these debates, they also serve to locate the debaters within their own communities, and to define and reinforce said communities as they jockey for position within broader society and culture.
In short, humans care where their knowledge comes from, and therefore will use the origins of knowledge for their own ends. That may seem prosaic, and it is, but contrast this with, say, great apes’ use of tools — this is also a technological innovation, but apes seem oddly unconcerned with where, how, or why they gained this knowledge, and do not use the origins of tool-use to promote, say, chimpanzee culture over orangutan culture. I should’ve stopped a paragraph ago, huh?
Switching trains of thought entirely, I found particularly fascinating the ancient ambivalence towards ironsmithing and metallurgy as expressed through cultural ascription of its origins to either a god, an angel, or a human. In a section discussing the Qur’anic depiction of David as a divinely-inspired creator of armor-smithing culture, Will explains how this departed from pre-Islamic understanding of smithcraft:
… This is not something early Jewish and Christian scripture would attribute to God or to a biblical hero. God has nothing to do with iron, and those who originate smithcraft are sinful; moreover, the application of this technology to the crafting of weapons and armor leads to bloodshed and ruin.
The suspicion of smithcraft and of those who practice it went beyond Judaism and Christianity, as may be inferred from Hesiod’s linkage of the deterioration of the five races and the development of ironsmithing. It was, as Fritz Graf points out, a suspicion held by many in the ancient Mediterranean world. … Prefiguring Qur’an 57:25, Pliny remarks, “Iron is an excellent or detrimental instrument for human life, according to the use we put it to.” But elsewhere he focuses on the destructive results of metallurgy: “Nothing [is] more pernicious (than iron) for it is employed in making swords, javelins, spears, pikes, arrows – weapons by which men are wounded and die, and which causes slaughter, robbery and wars.”
I find it comforting that there’s nothing new to our debates about whether particular technologies or uses thereof are good or bad; in some ways we’re just continuing a long tradition of disagreement (hey, I take comfort where I can get it). Too, norms change; even as smiths were reviled and feared in ancient culture, in colonial America, gunsmiths were prized for their rarity and their talent. In American Rifle, Alexander Rose relates an anecdote from Lewis and Clark’s expedition to the Pacific in which “Le Borgne, a one-eyed Indian chief, threatened to massacre the Corps of Discovery but said he would make an exception for ‘the worker of iron and the mender of guns.'”
To the extent there’s any larger point to be made out of this, it might be that as a unit of culture or technology matures, its origins become less important and its applications matter more. We don’t care who invented ironsmithing anymore; we do care to what use we put said iron. Or maybe the point is just that you should pick up a copy of Founding Gods, Inventing Nations. It’s not an easy read, but it’s rewarding. Yeah, let’s go with that one.