The following is a guest post by Dr. Leah Windsor. Dr. Windsor is a Research Assistant Professor in the Institute for Intelligent Systems at The University of Memphis, where she directs the Languages Across Cultures and Languages Across Modalities labs. From 2014 to 2019, she served as PI for a Department of Defense Minerva Initiative grant, using computational linguistics to analyze political communication in international relations.
Why are we seeing an uptick in discussions about non-mainstream theories about the origin and spread of Covid-19? In my recent social media feeds, I have noticed more skeptical discussions about the pandemic, and it’s a struggle to know how to respond. On the one hand, I know that emotions are strong – even predictive – influences on our choices. When we believe something, it is a part of us. Telling us that our facts are wrong is equivalent to telling us that we are wrong – that our reasoning, beliefs, and decision-making processes are wrong.
What we believe is a part of us, which helps explain why, when confronted with contradictory evidence, people tend to double down on what they already believe rather than integrating the new information into their thinking. So if I respond to a social media post with information that counters a friend or family member’s current beliefs, it’s more likely they will conclude that I am wrong, or an outlier, than revise those beliefs. It’s hard, even existentially dangerous, to question the beliefs we hold dear.
One of the most unsettling findings of our media and radicalisation research was the way in which the suffering of certain individual women is turned into a cause by radical Islamic groups that leads to violence by men in those women’s names. The availability of digital media, combined with a certain doctrinal entrepreneurialism by those using religion to justify political violence, has resulted in the widespread dissemination of amateur video clips depicting a specific woman’s plight and calling for reprisals. If you want to understand the link between online propaganda and offline action, it appears that representations of women’s bodies and their “honour” are often central. My project colleagues and I document two such cases in a research article published this week.

Dua Khalil Aswad, an Iraqi teenage girl of the Yazidi faith, was stoned to death on 7 April 2007 by a Yazidi mob of dozens of men, mostly her relatives, for eloping and spending the night with a Muslim man. Her death was recorded on a mobile cameraphone by a bystander and circulated on the internet. It was eventually picked up by NGOs and international media, where the killing was framed in terms of human rights abuses. However, the clip was also identified by so-called ‘mujahideen’ in Iraq, namely Al-Qaeda in Iraq and affiliated groups. They claimed Dua was killed because she converted to Islam. They argued her killing demonstrated how non-Islamic faiths violate human rights (they know how to call upon human rights discourse too), and that this warranted the mujahideen bringing their own kind of justice to Dua’s killers. Between April and September 2007 a series of high-profile retaliatory attacks saw the individual and collective killing of hundreds of Yazidis and the wounding and displacement of many more. One of the jihadist groups involved in these attacks, Ansar Al-Sunna, posted a video justifying their violence.
Dua’s death was woven into a longer strategic narrative perpetuated by jihadists concerning a war between Islam and other faiths.
Three years later, in 2010, we found considerable religious tension in Egypt and the Arab world stemming from several cases of young Coptic Christian women in Egypt who had allegedly converted to Islam and were forced by the Coptic Church, with the aid of the former Mubarak security forces, to return to Christianity. The alleged plight of these women became the subject of media debates, street demonstrations and protests by Muslims and counter-efforts by Copts in Egypt, inflammatory editorials, online speculation, and finally, violence against innocent people. One of the most prominent episodes occurred in July 2010. Camilia Shehata, a Coptic Christian woman in Egypt, disappeared and allegedly converted to Islam. She then returned under the shelter of the Coptic Church and released various videos to explain her case. Her story was amplified by Christian and Muslim groups alike, but subsequent attacks in her name occurred in Iraq rather than Egypt. Al-Qaeda in Iraq took hostages in a Baghdad church in October 2010 and announced on YouTube:
Through the directions of the Ministry of War of the Islamic State of Iraq, and in defence of our weak and oppressed, imprisoned Muslim sisters in the Muslim land of Egypt, and after detailed choices and planning, a small group of jealous Mujahideen, beloved servants of Allah, launched an offensive against a filthy center of Shirk [the Church] which Christians in Iraq have for so long taken as a place from which to wage their war and plot against Islam. By Allah’s Grace, we were able to capture those who had gathered there and take control over all entrances.
The Mujahideen of the Islamic State of Iraq give the Christian Church of Egypt 48 hours to clarify the condition of our Muslim sisters imprisoned in the churches of Egypt, and to free them all without exception, and that they announce this through the media which must reach the Mujahideen within the given time period.
The Iraqi government chose to attack the hostage-takers rather than negotiate. The hostage-takers detonated their suicide bombs in the church and 53 people died.
These events confirm one thing we know: terrorist groups can derive asymmetrical benefit from digital media, since content from individual lives and incidents can be rapidly reframed to bolster longstanding narratives such as the notion of a clash between Islam and other religions. But what struck us as particularly significant was the degree of contingency involved. The line from the initial acts to the eventual victims and the way in which events are incorporated into others’ narratives seems chaotic, escaping the control of the initial actors. The economy of exchange through media is irregular: digital footage may emerge today, in a year or never, and it may emerge anywhere to anyone. The concept of agency becomes complicated. The span of things done ‘by’ Al-Qaeda is beyond its control. Is distributed agency something new, only made possible by digital connectivity, or have social and religious movements always depended upon – and hoped for – a degree of contingent taking-up of their cause?
While we cannot know why the Yazidi man with a digital camera recorded the stoning of Dua (or why he recorded others recording it with their cameras), the increasing recording of everyday life certainly produces more material for political and religious exploitation. As we have seen, this allowed Al-Qaeda to instantly reframe a woman’s life as a “sister’s” life to shame men into action. If the killing of Neda Soltan during the Iranian election protests in 2009 represented one face of today’s mix of gender, violence and digital emergence, the cases of Dua and Camilia show another.
The Heartland Institute, a conservative think-tank well-known for its skepticism about climate change, placed the above digital billboard for 24 hours along the Eisenhower Expressway in Chicago this past week. For $200, they bought a lot of publicity.
“I don’t want government telling me what I can do and what I can’t do because I’m an American. But in Monongalia County you can’t smoke a cigarette, you can’t smoke a cigar, you can’t do anything. And I oppose that because I believe in everybody’s individual freedoms and everybody’s individual rights to do what they want to do and I’m a conservative and that’s the way that goes.
But in Monongalia County now, I have to put a huge sticker on my buildings to say this is a smoke free environment. This is brought to you by the government of Monongalia County. Ok?
Remember Hitler used to put Star of David on everybody’s lapel, remember that? Same thing.”
Right, it’s the same thing.
Actually, there’s no need to draw an analogy comparing Hitler and anti-tobacco views to terrorists and concern for global warming. The climate skeptics have sometimes gone straight for the jugular:
When Christopher Monckton, the hereditary third Viscount Monckton of Brenchley, attended an Americans for Prosperity meeting in Copenhagen on Wednesday night that was interrupted by chanting youth [climate] delegates, he was reportedly furious. But no one expected him to go back and berate them.
Yesterday he ambushed a small group of students from the non-profit SustainUS inside the Bella centre. One of them texted her friends for help. Several arrived, including Ben Wessel, a 20-year-old activist from Middlebury College, Vermont.
Monckton then repeatedly called them “Nazis” and “Hitler Youth”.
[Aside: Would this be a good time to point out that some of the professional climate skeptics were tobacco skeptics once upon a time? And I mean literally the same people, such as Frederick Seitz, S. Fred Singer, Richard Lindzen.]
Index on Censorship hosted an event at the Free Word Centre in the hope of teasing out the various strands of this conflict. The title – “Is climate change scepticism the new Holocaust denial?” – may have seemed provocative, but it picked up on phrases used by panellist George Monbiot, who in the past has described the two stances as equally immoral and stupid. When asked if he thought climate sceptics’ evidence for their claims was as flakey as that of Holocaust deniers for theirs, Monbiot concurred. Questioned on the use of the term “deniers” to describe his opponents, Monbiot said he simply could not think of a better way of describing them, though he recognised the implications the term could carry for some.
CBS TV journalist Scott Pelley went that route back in 2006: “If I do an interview with Elie Wiesel,” he asks, “am I required as a journalist to find a Holocaust denier?”
As a political scientist interested in political communication and the prospect of deliberative democracy, I find these examples fairly disheartening.
Just a few weeks ago, I wrote here at the Duck about Godwin’s law: “if you mention Adolf Hitler or Nazis within a discussion thread, you’ve automatically ended whatever discussion you were taking part in.” Or, as one US News political columnist wrote, there is “an unwritten rule in public speaking: comparisons to Hitler and Nazi Germany never work.” One oft-mentioned corollary to Godwin’s law suggests that “whoever mentioned the Nazis has automatically lost whatever debate was in progress.”
In this case, I fear that “if both sides do it,” then we are all doomed.
At the end of the show the contestants have to make one last decision over the final jackpot. They are each presented with two golden balls. One has “split” printed inside it and the other has “steal” printed inside it:
If both contestants choose the split ball, the jackpot is split equally between them.
If one contestant chooses the split ball and the other chooses the steal ball, the stealer gets all the money and the splitter leaves empty-handed.
If both contestants choose the steal ball, they both leave empty-handed.
It is similar to the prisoner’s dilemma in game theory; however, in this game the players are allowed to communicate.
Indeed, the communication is the interesting element of this particular play of the game:
Here is the “Golden Balls” situation using simple 2×2 game matrices:
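Since the matrices themselves don’t reproduce here, the payoff structure described above can be sketched in a few lines of Python. The jackpot of 100 is a hypothetical figure (the show’s actual stakes vary by episode):

```python
# Payoff matrix for the Golden Balls endgame, using a hypothetical
# jackpot of 100 (actual stakes vary by episode).
JACKPOT = 100

# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    ("split", "split"): (JACKPOT / 2, JACKPOT / 2),
    ("split", "steal"): (0, JACKPOT),
    ("steal", "split"): (JACKPOT, 0),
    ("steal", "steal"): (0, 0),
}

def best_responses(opponent_choice):
    """Return the row player's best response(s) to a fixed opponent choice."""
    options = {c: payoffs[(c, opponent_choice)][0] for c in ("split", "steal")}
    best = max(options.values())
    return [c for c, v in options.items() if v == best]

print(best_responses("split"))  # steal beats split (100 > 50)
print(best_responses("steal"))  # tie: both choices yield zero
```

Running the best-response check for each opponent move shows why steal weakly dominates: it is strictly better against a splitter and no worse against a stealer.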
In this game, steal is a weakly dominant strategy. If you are certain your opponent is going to split, then it is superior to steal in a single play: you win everything. If you are certain your opponent is going to steal, then you are indifferent between stealing and splitting, though many people would likely steal just to avoid being played for the sucker (thinking in terms of relative gains).
Indeed, if we ignore cash values and rank the sucker result as the worst payoff, given the logic I’ve just provided about relative gains, then this game becomes a single-shot prisoner’s dilemma. The dominant strategy is steal (defect). Obviously, preferences over outcomes should determine the strategy one employs in a game. Generally, however, simple game theory assumes utility maximization, and the cash outcomes here (zero either way) are technically the same.
In any case, in this video from “Golden Balls,” player 1 (the man on the right in the brown shirt) has attempted to turn this situation into a different game — chicken, I think — by trying to add a perceived payoff that is worse than playing the sucker in a prisoner’s dilemma.
In chicken, the common story is two teenage drivers head directly for one another at high speed. If they both swerve (yield), this is the mutual split result. If only one swerves, s/he is the chicken and the other player wins. If both continue driving towards one another, they have a horrible accident.
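The structural difference between the two games can be made concrete with ordinal payoffs (higher is better). This is a minimal sketch using standard textbook rankings, not the show’s cash values:

```python
# Ordinal payoffs (higher is better) for the ROW player in each game.
# Prisoner's dilemma: defecting is best no matter what the opponent does.
pd = {
    ("coop", "coop"): 3,   ("coop", "defect"): 1,
    ("defect", "coop"): 4, ("defect", "defect"): 2,
}
# Chicken: mutual "straight" (the crash) is the worst outcome, so the
# best reply flips depending on what the opponent does.
chicken = {
    ("swerve", "swerve"): 3,   ("swerve", "straight"): 2,
    ("straight", "swerve"): 4, ("straight", "straight"): 1,
}

def dominant_strategy(game):
    """Return a strategy that is a best reply to every opponent move, if any."""
    rows = {r for r, _ in game}
    cols = {c for _, c in game}
    for r in rows:
        if all(game[(r, c)] >= game[(other, c)] for c in cols for other in rows):
            return r
    return None

print(dominant_strategy(pd))       # "defect" dominates in the PD
print(dominant_strategy(chicken))  # None: chicken has no dominant strategy
```

This is exactly the transformation player 1 attempts: by making mutual steal the worst outcome in player 2’s mind, he converts a prisoner’s dilemma (dominant strategy: steal) into chicken (no dominant strategy, so commitment signals matter).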
Here, if player 2 selects steal with the knowledge that player 1 is definitely going to steal, then the total prize possible will be ZERO. However, if player 2 lets himself be exploited, then player 1 has dangled the (unenforceable?) promise of sharing the winnings after the show. Effectively, player 1 has attempted to transform the situation by creating the image of a shared victory even when the other player yields. It would be kind of like a fixed boxing match. The payoff comes after the participant takes the dive.
Generally, if one earns a reputation for selecting steal (never swerving) in the game of chicken, then others will not want to play this game with you, because their best option is to split (swerve/yield). Why select the strategy that assuredly ends in disaster? Unfortunately, one cannot earn a reputation for unyielding play in a first confrontation with an unknown player.
However, in his Introduction to Herman Kahn’s On Escalation, Thomas Schelling recommended that a chicken player should throw the steering wheel out the car window to signal a firm commitment to the steal (not swerve) strategy. Such a player has signaled to the opponent that the result is out of his hands. The best that can be hoped is to avoid disaster.
In this case, to influence Player 2’s choice, Player 1 has essentially communicated that he is tossing the steering wheel out the window.
The catastrophes of Rwanda and Bosnia led to a debate in the 1990s about the warning-response gap. Conflict prevention and early warning systems did not seem up to scratch. Third parties intervened too late, if at all. Spending was skewed towards mitigating the effects of conflicts, not on stopping them from happening in the first place. The spread of satellite television brought conflicts into more immediate public vision. It was feared this created a CNN effect whereby policymakers were forced into military intervention for humanitarian causes to satisfy a more globally-aware public opinion. But this meant only those conflicts caught on camera would be responded to. The overall picture was a mess, it was argued. International relations lacked an effective system of warning-response.
A new study has cast doubt on these assumptions. This opens a space for a more analytical approach to how media, NGOs and intelligence agencies provide warnings and how states and international organisations can decide to respond. The Foresight project has spent three years analysing under what circumstances warnings are noticed, prioritised, and acted upon. The team, led by Christoph Meyer, has looked at a series of case studies offering various degrees of warning and response, including Estonia, Rwanda, Kosovo, Macedonia, Darfur, and Georgia. They have interviewed responders from the UK, US, Germany, the UN, EU and OSCE and analysed media and NGO reporting around these conflicts. In short, they’ve done a lot of the empirical work that was missing from the 1990s debate. What have they found?
First, Rwanda could not have been prevented. Valid warnings only emerged when conflict was escalating, not pre-escalation. Those who suggest a lack of political will or ignorance on the part of decision-makers have misinterpreted the warning data available at the time. Second, those providing warnings anticipate what responders want to hear, and provide them with that. Decision-makers hate surprising warnings which don’t fit their mental models of how the world works. They are overloaded with situations they’re already dealing with and favour responding to emerging conflicts that look like ones they’ve dealt with before. Third, decision-makers are as likely to respond to warnings from preferred journalists or NGOs as to intelligence from their own state agencies. They trust lone, grizzled hacks or aid agencies they might be funding. Fourth and finally, for all the usual factors of resource-availability, credibility of warning sources and so on, military and aid responses are often a matter of context and chance, neither of which social scientists handle particularly well.
At a discussion of the findings yesterday, Piers Robinson, author of The CNN Effect, made the point that journalists cannot be relied on to provide early warnings in the future. The study indicates it is too dangerous, insurance is too expensive, and they are driven by news cycles in which what is happening trumps what might happen. Robinson also suggested that the Foresight project misses the systematic relation media and NGOs have to political power. Vietnam, Iraq and Afghanistan all point to the fact that journalists only question a war when leading politicians have already expressed dissent. Journalists don’t lead, they follow. While the former BBC journalist Martin Bell might argue for a ‘journalism of attachment’ that ‘cares as well as knows’, mainstream media organisations do not employ journalists to undertake moral crusades to warn states that if they don’t act in Rwanda, Georgia or wherever, there’ll be trouble.
Will citizen journalism and data mining of social media conversations around the world lead to improved warnings? This is the question decision-makers have been asking recently. They want to know how to integrate warning data from journalists, social media, NGOs and intelligence channels. In theory, the warning-response gap should shrink to zero. The time between an event and the state knowing about it promises to disappear with the right technology and tools to mine Big Data. But decision-makers are often of an age or disposition not even to understand Facebook and Twitter: there is a generational anxiety they are missing out on something and the kids have all the answers, and a cultural faith that free information will lead to the best outcomes. No discussion can develop until someone has mentioned ‘Arab Spring’ and ‘if only we had known’. But anyone who has done social media monitoring knows it requires a lot of qualitative know-how and interpretive work to get any sensible findings.
And as the Foresight study shows, decision-makers will still pick up the New York Times or turn on the BBC and trust their favourite reporter, even though those reporters might no longer be able to go to the countries they’re reporting on. Hence, for all the promise of communication technology, foreign policy is still about the human factor and cognitive biases. Understanding the warning-response gap in the next decade will involve some careful unpicking of the interplay over time of stressed, confused people in media, humanitarian and government agencies.
A new report was released yesterday, ‘Suspect Communities’, comparing how UK media and government have framed Irish and Muslim communities since the 1970s. The authors find that the ideas underpinning counter-terrorism measures and the way politicians, policymakers and the media discuss who might be responsible for bombings have not changed over four decades. The key finding is that ambiguity surrounding who is an ‘extremist’ or a ‘terrorist’ has led to hostile responses in everyday life – at work, in shops, on the street – from members of the public who think they are under threat from Irish-sounding or Muslim-looking people whom they associate with that threat. Hence, the report implies that government and media language is impacting on the everyday lives of communities judged suspect and everyone else who must live with them. In a debate in Parliament yesterday, the solution put forward by many was greater sensitivity of language by elites and more dialogue between the stigmatized, the elites, and the majority society.
While useful, the debate needs to go further. The crux with such reports is their method. This research team first analysed thousands of media texts and government documents, and found these to consistently frame these communities as suspect (and as communities, not individuals). They then did focus groups with members of those suspect communities to hear about living under suspicion. What the team did not do is try to explain why journalists or policymakers would consistently produce stigmatizing material. The consistency of the stigmatization suggests it has nothing to do with any individuals, but is a function of the institutional practices and professional imperatives of the fields of journalism and security policy. Most journalists don’t want to be racist. They think that by allowing a ‘moderate’ and ‘militant’ Muslim to debate they are providing balance – journalists don’t usually understand that they are reducing threatening and non-threatening minorities to equivalents in the eye of the non-Muslim audience. And policymakers know full well that homogenizing a community to tell it to ‘stop harbouring terrorists’ is not going to please everyone, but they really don’t want another bomb going off and will try any means to stop it. These are the pressures they face, and criticizing their language choices isn’t going to remove those pressures. So, if we are to move towards societies in which entire groups are not routinely lumped together as dangerous and disloyal, we need to begin to unravel these institutional and professional logics. A truly critical project would address these power relations and daily trade-offs instead of simply decrying the consequences.
This is an important topic. The Suspect Communities report supports a longstanding research finding (UK here, US here) that those who feel stigmatized tend either to retreat from public spaces (‘keep your head down’, ‘keep your mouth shut’) or become angry and try to resist slurs by turning them on their heads (reclaiming ‘queer’ in the 1970s, jihadi chic in the 2000s). Either way, the result is fear and alienation, which reduces trust on all ‘sides’ and makes reconciling interests and grievances through democratic institutions much more difficult.
(Written with Alister Miskimmon) Following the death of Osama bin Laden, political pressure is mounting for an early scaling down of the British troop presence in Afghanistan ahead of David Cameron’s deadline of 2014 for the end of Britain’s combat mission. With this in mind the British defence establishment is trying to understand its role in Afghanistan since 2001. Much of this soul-searching has focused on trying to explain why British forces have not been able to pacify sections of the Afghan population. Their explanation is that they have not been able to project the right storyline to Afghans. They feel that they are being out-communicated by the Taliban, losing out to a more effective strategic narrative. This is presented as one reason Britain and NATO have failed to win hearts and minds. An example of such thinking was witnessed in Westminster this week in a session of the House of Commons Defence Select Committee. General Sir Nicholas Houghton, Vice Chief of the Defence Staff, identified a critical moment as Britain’s efforts at “poppy eradication at the time of the deployment”. “In the minds of some local Helmandis, and within the narrative of the Taliban,” he said, this created the “idea that these [British] forces are coming here to eradicate your poppy and take your living away.” Ultimately, “that worked against us in terms of strategic narrative.” The incredulity of our most senior military officers that they could not convince Afghans in Helmand of their good intentions suggests that they think of communication as an easy solution; as if finding the right strategic narrative would solve their operational problems.
Such a stance exposes the lack of clear goals in the first place. Failure to convince Afghans stems more from a lack of clear British strategy than from the ability of Taliban forces to present a more convincing counter-narrative.
In our fast moving media ecology, projecting a coherent message is a challenge. However, there are some instances when governments are able to deliver a clear narrative. For example, the killing of Osama bin Laden was so clear it did not need to be explained – least of all to the United States’ citizens seen celebrating on the streets of American cities after the President announced the mission. President Obama did not even engage in the ensuing debate about the legal status of such an action. He let his actions speak for themselves.
Once war has begun, strategic narratives are about keeping domestic audiences on side, not about convincing those whom you are invading. When hostilities begin it is too late to convince them. Trying to tell a reassuring or uplifting story to Afghans that is contradicted by what they see and hear on the ground only opens up space for Britain to be accused of hypocrisy – a narrative with a long precedent in Central Asia and the Middle East.
Unidentified ‘British security officials’ are telling journalists there is a possibility that sections of the Irish Republican Army (IRA) could attack next Friday’s royal wedding in London. At an event I attended this week, Patrick Mercer OBE, Conservative MP for Newark and member of the All-Party Parliamentary Group on Transatlantic and International Security, warned that the three security threats facing Britain are Al-Qaeda inspired terrorism, violence ‘attached’ to student protests, and ‘Irish terrorists’ attacking the royal wedding. Mercer questioned the wisdom of holding a royal wedding so close to Easter, a time with historic significance for Irish republicans. The Easter Rising insurrection against British rule in Ireland began on 24 April 1916. The wedding date is also close to the 30th anniversary of the death of republican prisoner Bobby Sands, who died on hunger strike on 5 May 1981. Don’t we understand ‘how Irish terrorists think’, asked Mercer. Yet, talking informally to journalists in London, I discovered many didn’t want to raise the matter because it might appear to strike a negative note and alienate readers at a time many view as one of national celebration.
If there is a threat of violent attacks on the wedding – and it is unlikely security services would make details public even if there were evidence that there was a threat – what would be an effective way to communicate it? Where does the balance lie between informing and scaremongering? Government and journalists will face the same dilemma at the Olympics in a year’s time so it will be interesting to follow how it plays out in the next week.
Nobody has come close to explaining how strategic narratives work in international relations, despite the term being bandied about. Monroe Price wrote a great article in the Huffington Post yesterday that moves the debate forward. As I have already written, strategic narratives are state-led projections of a sequence of events and identities, a tool through which political leaders try to give meaning to past, present and future in a way that justifies what they want to do. Getting others at home or abroad to accept or align with your narrative is a way to influence their behaviour. But like soft power, we have not yet demonstrated how strategic narratives work. We are documenting how great powers project narratives about the direction of the international system and their identities within that. We see the investments in public diplomacy and norm-promotion. We have not yet demonstrated that these projections have altered the behaviour of other states or publics. Does the Arab Spring show these narratives at work? Many leaders in the West and protestors taking part in the Arab Spring promoted a narrative about the spread of freedom, often conflating this with the hope and vigour of youth and emancipatory potential of social media. Of course this narrative may be bogus, as Jean-Marie Guéhenno argues in yesterday’s New York Times. However, the key point Price makes is that narratives set expectations, regardless of their veracity. Narratives defined what Arab leaders were expected to do: step aside! We can see the power of narratives by seeing what happens to those who defy them. Mubarak and Saif Gaddafi both gave speeches where they were expected to align with the narrative. The narrative set the context and expectation for how they should behave. But they did the opposite of what was expected. Price writes:
From a perspective of “strategic narratives,” Mubarak and young Gaddafi were speaking as players in an episode, set by key actors, international and domestic, who had the expectation that their wishes as to the playing out of the drama would be fulfilled. Their speeches did not match the sufficiently accepted script, in the case of Mubarak, or the incomplete outlines of one, as in the case of young Gaddafi.
Who has successfully promoted an overarching narrative? Obama, Cameron, Sarkozy? Where did the ‘Arab Spring’ narrative come from? How does an overarching narrative play out in each country? What room does it leave for individual governments and public to create their own destinies? In the next year, building up to a debate at the International Studies Association (ISA) convention in San Diego in April, we will be exploring this.
The question of human over-population of our planet seems to resurface every few decades, driven by fears that there are too many people to feed, clothe and shelter, or that the sheer volume of human beings working, travelling and polluting is causing environmental damage. But the persuasiveness of such claims is weakened empirically and normatively. In terms of facts, it does not help the over-population claimants that every time the population question is raised, humanity seems to deal with the problem. People do find food, clothing and shelter. And in terms of values, the notion of limiting or reducing the number of human beings appears a slippery slope to calls for coercion and perhaps eugenics in the name of ‘the greater good’. But in 2010 the question is being asked again.
At Royal Holloway last night, Professor Diana Coole presented early analyses from her new three-year project, Too many bodies? The politics and ethics of the world population question. She is interested in why the question is re-emerging now and why it is in developed countries that calls are loudest for something to be done, according to her analysis of media and policy documents. The size of the world population seems to have causal links to the development of climate change, water and food security, managing waste, and preserving diversity. The Royal Society’s working group People on the Planet raises this explicitly, as did the Stern Report – though neither recommended any proposals to intervene in human population numbers. As Coole argued, the tools we have for managing demography – fertility, mortality and migration – are all political minefields. Governments quietly manage birthrates through tax and welfare regimes and campaigns on family planning, but few policymakers in liberal democracies would explicitly institute a one-child or two-child policy for families.
It is interesting that Coole, a critical theorist in the continental tradition, should be asking why the population question remains a taboo. Materiality, vital matter, the non-human and post-human futures have all been on the critical theory agenda recently, in IR and more broadly. People are not the only things that matter. This scholarly focus parallels public-political claims for ‘sustainability’, in which the maintenance of ecosystems is considered more pressing than the continuation of humanity, and certainly more pressing than economic growth. Might it be that a new strategic narrative will be formed and brought to bear on policy, a ‘smaller, better humanity’ narrative? Population projection statistics are ambiguous and can easily be used to support Malthusian stories. And Coole’s project may unpick the factual and normative discourses that silence talk of the population question, so that the better-smaller narrative – if that is what is being formulated – can be heard.
(Cross-posted from https://newpolcom.rhul.ac.uk/npcu-blog/)
Lundry quickly runs down the importance of infographics and data visualizations in the political realm. Bottom line: people are hard-wired to learn through visualization, and infographics can be very powerful tools in political battles over ideas and policy:
It amazes me that we haven’t seen a faster uptake of data visualization among professional politicians, especially considering the sheer number of political operatives, consultants, and strategic communication firms. All it takes is about five minutes watching C-SPAN to realize that these folks are due for a major upgrade in the infographics department.
I also love Lundry’s updating of a famous H.G. Wells quote:
Visual Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write.
Personally, I think you need both visual and statistical in there, but in general I agree wholeheartedly with the sentiment.
(You simply can’t offload supplies from ships without dock cranes. You can’t land planes full of relief shipments and inflatable hospitals without a functional control tower. To save lives, search and rescue crews must get their equipment from tarmac to disaster zone efficiently. Helicopters need landing zones not buried in rubble. And most importantly, the military folks with the choppers need to be able to communicate with the civilian aid agencies who have the supplies.)
A question for human security specialists may be: will some level of international governance over basic infrastructure be necessary to resolve coordination problems like this in the future? There is a lot of talk in the MDGs about development aid for food, vaccinations and school supplies, but what about the construction of roads, ports and control towers that can withstand natural disasters? This would seem to be a prerequisite for effective civilian-military response in such scenarios. An international community that can trace nuclear materials or close an ozone hole could establish and implement such standards if it chose – half the problem is lack of political imagination.
Second, my bet is that the US military is thinking hard about what its prominent (yet inevitably sluggish) role in this disaster means for its maritime force posture. Climate disasters like this may become more prevalent, and their humanitarian fallout presents security risks when they occur near US borders; meanwhile, the US is positioning itself as a global humanitarian hegemon bent on rebuilding nations ravaged by state failure and disaster. All this has important implications for naval readiness as well as strategic communication. Galrahn at Information Dissemination made a cogent set of points in this regard:
There have been 3 Admirals on C-SPAN in the last 6 months, and only once was it on an issue related to the sea – that was the BMD change. Every other time you see an Admiral on C-SPAN it is Mullen or the topic is prisoners at Guantanamo Bay. The media is focused on Haiti, and the symbol of American power is going to be the largest thing everyone can see – USS Carl Vinson (CVN 70). Be visible, take pictures from the air that include the carrier, and turn USS Carl Vinson (CVN 70) into a symbol of hope. The Navy doesn’t have a single Admiral actually in a Navy post today (which means Stavridis and Mullen don’t count) who is recognizable by the average American, but every American knows what a Nimitz class aircraft carrier looks like – as does the rest of the world. Showcase the ship, because it is a symbol and symbolism matters in soft power. The whole world is watching.