Thursday, January 13, 2011

How not to do media analysis

By Robert Shone on Wednesday, January 12th, 2011 - 2,807 words.

Two analyses of media reporting on Iraq were published recently. One was a simple quantitative comparison of the coverage received by two different death counts in the Wikileaks story (by the website Medialens). The other was a more wide-ranging study, Pockets of resistance: British news media, war and theory in the 2003 invasion of Iraq, by Piers Robinson, Peter Goddard, Katy Parry, Craig Murray and Philip Taylor.

For reasons which should become clear, it’s worth comparing the two very different approaches behind these analyses.

First, Medialens counted references to “Iraq Body Count” (IBC) and “Lancet” in media coverage of Wikileaks (between October 23 and November 4, 2010), and found that IBC received much more coverage than the Lancet studies of Iraqi deaths. Although it’s not directly stated by Medialens, this analysis seems to be presented in support of their assertion that any mention of the Lancet studies is “a rare feat for our media”. Medialens also asserts, in the same article, that IBC is a “one-stop shop” for journalists, and that the media is “ignoring much higher death counts altogether”.
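For context, the kind of count Medialens performed is about the simplest form of “media analysis” there is – a keyword tally over a date range. A minimal sketch of the idea in Python (the corpus, function name and data below are purely illustrative, not Medialens’s actual method or data):

```python
from datetime import date

def count_mentions(articles, term, start, end):
    """Count articles within [start, end] whose text mentions `term`.

    `articles` is a hypothetical corpus of (publication_date, text)
    tuples -- illustrative only, not Medialens's actual data.
    """
    return sum(
        1
        for pub_date, text in articles
        if start <= pub_date <= end and term.lower() in text.lower()
    )

# Made-up example data:
corpus = [
    (date(2010, 10, 24), "Assange cited Iraq Body Count's analysis of the logs."),
    (date(2010, 10, 25), "The Lancet study estimated far higher mortality."),
    (date(2010, 10, 26), "Iraq Body Count worked with Wikileaks on the war logs."),
]

window = (date(2010, 10, 23), date(2010, 11, 4))
for term in ("Iraq Body Count", "Lancet"):
    print(term, count_mentions(corpus, term, *window))
```

A raw tally like this says nothing about why one term appears more often than another – which, as the facts below show, is exactly the problem with the inference Medialens drew from it.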

Unfortunately, Medialens failed to inform readers of the following facts:

1. IBC worked with Wikileaks on the Iraq war logs. (The Lancet study authors didn’t).
2. Julian Assange cited IBC’s analysis in many interviews. (He didn’t cite any comparable analysis by the Lancet authors, since there wasn’t any).
3. IBC’s John Sloboda was on the panel at the Wikileaks press conference – Assange directed questions to Professor Sloboda. (The Lancet authors were not present).

In other words, one would expect IBC to receive coverage in the Wikileaks story over the period for which Medialens searched. Why would Medialens omit these facts which explain the IBC coverage? The editors of Medialens (David Edwards and David Cromwell) also failed to mention that the relatively small amount of data recorded by epidemiological surveys (eg the Lancet studies) would be of little use in analysing the Wikileaks material, and that this explains the lack of involvement by epidemiologists (eg the Lancet study authors).

With the addition of these omitted facts, Medialens’s “media analysis” takes on a different meaning. What seemed like a “damning indictment” of the media (and, indirectly, of IBC – if we accepted Medialens’s odd logic) turns out to be nothing of the sort, and the analysis itself is revealed as redundant (and pointless). It’s almost as if Medialens had “revealed” that media coverage of Wikileaks mentioned “Assange” more than, say, “Chomsky”, and that this proved that Chomsky was “marginalised”.

Not long after this ill-conceived attempt at media analysis, Medialens decided to criticise a broader – and rather more credible – media study (the above-mentioned Pockets of Resistance) in a lengthy (over 6,000 words) two-part ‘Media Alert’ titled ‘What Happened To Academia?’. As is often the case with Medialens, their long critique tends to reveal more about their own (mostly ideologically-based) approach than that of the study they criticise. The following passages are particularly deserving of comment:

We have long been fascinated by the silencing of academe. How does it work in an ostensibly free society? What are the mechanisms that bring the honest and outspoken to heel? (Medialens, December 14, 2010)

This sets the tone for what follows. Academe is being “silenced” – a blanket generalisation, wrapped in a conspiratorial metaphor (I’ve previously commented on Medialens’s fondness for the “silence” metaphor). Is it not possible that, far from being “silenced”, “Academe” simply hasn’t produced work which fits Medialens’s worldview? (Or, to quote George Monbiot’s criticism of Medialens, it hasn’t produced work which conforms to their “narrow and particular doctrine”.) Or perhaps Medialens isn’t looking in the right places? I see a lot of new, innovative academic research with implications for media analysis, but it tends to be in non-traditional fields (eg the mapping of political beliefs in neurological, cognitive and linguistic studies).

Medialens then moves its focus from academics to journalists: “The truth is that even the best mainstream commentators are not allowed to direct serious criticism at their own media, at their own advertisers, at the interests that control their media.” (Medialens, December 14, 2010)

Another “truth”, this time a sweeping generalisation built on “not allowed”. As we’ll see, Medialens tends to put such “truths” before inconvenient facts, exceptions to the “rule”, etc. The language used by Robinson et al is somewhat different – it’s not about asserting “truths”, but about reporting findings and making qualified statements based on those findings. For example, the finding that Channel 4 conformed “largely” to an “independent model” of reporting over Iraq. Medialens responded to this as follows:

Ironically, Channel 4 News, which you claim “conformed largely to the independent model” of reporting (p.173), in fact led the way in the media dismissal of the 2004 Lancet report. On October 29, 2004, Channel 4’s science correspondent, Tom Clarke, was one of the first journalists to pass on government smears as obvious fact. (Medialens, December 14, 2010)

In fact, it’s not “ironic” that there are examples which run counter to a finding which doesn’t rule out counter-examples. However, it is ironic that the counter-example chosen by Medialens turns out, upon close scrutiny, to look nothing like a counter-example. We can easily check Tom Clarke’s Channel 4 remarks, as Medialens provided a transcript at the time. Far from passing on “smears”, Clarke states clearly that the government “dismisses” the Lancet report – his own comments about the study’s “main weakness” make sense in light of recent scientific research which casts doubt on the reliability of importing epidemiological methods into conflict studies (eg the Human Security Report 2009/2010). In hindsight, we know that the Iraq epidemiological studies (ILCS, Lancet 2004, Lancet 2006, IFHS/WHO) have produced vastly differing estimates, suggesting that there are indeed serious reliability problems (a fact that’s not altered by the government’s apparent desire to dismiss – or “smear” – the Lancet 2004 estimate at the time). Clarke’s comment that the Falluja data “distorted” the study was correct (in the sense that it was an extreme statistical outlier – it’s why the Lancet authors excluded Falluja data from many of their presented findings). In their original alert, Medialens misread or misrepresented Clarke on this point.

So, in response to the Robinson et al findings on Channel 4 coverage, Medialens provides an unconvincing counter-example. There are probably better counter-examples – but to use them, one by one, to refute this kind of study is a little like objecting to the findings of the British Crime Survey by arguing, “I know they’re wrong because a friend of my aunty was burgled twice last month, and my brother-in-law was robbed last June”. The point that’s apparently missed by Medialens is that some sort of quantitative evaluation of Channel 4 coverage would be needed to counter the claims of Robinson et al. Medialens has never produced the type of analysis needed – their efforts at quantification are limited to some very basic, questionable searches (like the misguided IBC/Wikileaks example above).
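To sketch what such an evaluation involves: studies like Robinson et al’s hand-code a sample of reports against defined categories and then reason from proportions, not from single cases. A rough illustration of that logic (the category labels and counts below are invented, and the normal-approximation interval is just one conventional choice):

```python
from collections import Counter
from math import sqrt

# Hypothetical coded sample: each report hand-coded into one of three
# invented categories (labels and counts are made up for illustration).
coded_reports = ["supportive"] * 30 + ["negotiated"] * 45 + ["oppositional"] * 25

n = len(coded_reports)
for category, k in sorted(Counter(coded_reports).items()):
    p = k / n
    margin = 1.96 * sqrt(p * (1 - p) / n)  # 95% margin of error
    print(f"{category}: {p:.0%} ± {margin:.0%} (n={n})")
```

Against findings stated this way, a single counter-example can only nudge a proportion slightly; it can’t overturn the finding – which is the point of the burglary analogy above.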

In part two of their critique, Medialens begin with the following statement:

In our reply to Piers Robinson, below, we try to show how ‘objective scholarship’, like ‘objective journalism’, all too often filters out what really matters. Moreover, as in journalism, the scholar’s obsession with objectivity tends to promote the interests of power. Why? Because mainstream academics and journalists are deeply and unconsciously biased. (Medialens, December 15, 2010)

Another blanket generalisation from Medialens, another putative “truth”: mainstream academics are “deeply and unconsciously biased”! This raises an important question: is it not possible for intelligent, informed adults to debate serious issues without resorting to the redundant banality of the retort, “Well, you’re biased!” – which is what Medialens’s claim seems to boil down to? It doesn’t help that they further assert that “mainstream academics” (all, most, or just some?) are “unconscious” of their bias. The last time I came across such absurd, sweeping generalisations about “mainstream academics” was in the work of David Icke.

It’s not all about generalisations and deeper truths for Medialens, however. At certain points in their critique they offer more specific suggestions: “The point we are making is that there were a small number of key facts, issues and sources that had the potential to derail the government case for war.” For example, they think Robinson et al should have taken account of media references (or lack thereof) to Scott Ritter’s claims over WMD, since Ritter was a “key source”, and “not just another source”. (Ritter, a former chief UN weapons inspector, had written that Iraq’s biological/chemical weapons would have long since become “harmless goo”).

Ever since I read a long 2002 Guardian piece by Ritter (an extract from his book), I’ve thought his claims warranted widespread, prominent coverage. In an August 2002 interview, Noam Chomsky commented that “Scott Ritter’s testimony on the topic [of WMDs] is compelling, and I know of no serious refutation of it”.

Later in the same interview, however, Chomsky remarked:

It should be added that there are circumstances under which Saddam might use WMD, assuming he has the capacity. If Iraq is invaded with the clear intention of capturing or more likely killing him, he would have every incentive to go for broke, since he’d have nothing to lose. But it is hard to imagine other circumstances. (Noam Chomsky, August 2002)

It seems odd, in hindsight, that Chomsky painted this scary picture of Saddam “going for broke” with WMDs. But it didn’t seem odd at the time. Compelling though Ritter’s testimony was, no intelligent antiwar campaigner would base their case against war on the gamble that Ritter was right (low-risk though that gamble may have seemed, given Ritter’s credentials). Rather, there was a much stronger case against war which required less of a gamble: that the burden was on the warmongering parties to demonstrate strong evidence pointing to an imminent WMD attack from Saddam (no such evidence was forthcoming, of course).

This sense that waging war required an infinitely stronger justification than the “case” provided by the US/UK authorities was, in my opinion, rarely communicated in media coverage – and this was a matter of framing. In fact, pro-war framing put the burden of proof on the weapons inspectors to demonstrate absence of WMD capability – and to demonstrate it quickly. Unfortunately, I think many media reports conveyed this perspective (while appealing to the sense of urgency/fear already induced by tabloids). Would it have hurt this pro-war framing for Hans Blix to say: “You’re not giving us enough time to demonstrate that Scott Ritter is right, so you’ll just have to accept what he says without corroboration from us”?

From that perspective, the biggest problem with most media coverage, to my mind, was not the lack of reporting of Ritter’s claims on WMD, but the frequent adoption of subtle (and not-so-subtle) pro-war framing. Given that framing, I doubt that Ritter’s testimony, even if it had been more prominently reported, would have stood much chance of “derailing” (as Medialens put it) the government “case” for war.

Still, Medialens deserves some credit for drawing attention to Ritter’s claims (as the Guardian had earlier done; Peter Beaumont’s favourable Observer review of Ritter’s book, War on Iraq, also predated Medialens’s coverage).

We devoted our lives to studying media reporting of the pre-invasion and invasion periods in the first half of 2003. The patterns and limits of media reporting, the unspoken rules, were so clear to us – they could hardly have been more obvious. (Medialens, December 15, 2010)

It’s a pity that, in “devoting their lives” to this study, the Medialens editors never managed to produce a single substantial quantitative analysis of media coverage. In fact, the series of ‘alerts’ they produced can hardly be considered “analysis” at all – they’re obviously polemics. Steven Poole (author of Unspeak) wrote, in a Guardian non-fiction review, that Medialens has a “counterproductive tendency to bathe everything in childishly apocalyptic polemic”. Poole then added:

[Edwards and Cromwell] also affect to know what is going on “unconsciously” in journalists’ minds, and seem unaware that their own preferred descriptions of events are often just as rhetorically framed as the versions of the “psychopathic corporate media” (on which they nonetheless rely for factual reference). (Steven Poole, Guardian, October 3, 2009)

If you’re claiming to reveal media “patterns”, “limits” and “unspoken rules” (never mind “truths”), an analytical approach is needed. But Medialens’s rhetorical, ideologically-based approach starts with certain “rules” or “knowns” (eg derived from the Herman/Chomsky Propaganda Model) and then selects cases which show these to be “true”, whilst paying much less attention to (or at worst ignoring or denying) the cases which run counter to these “truths”. It’s a long way from the analytical/empirical approach.

A case in point is the “truth” promoted by Medialens on Iraq mortality reporting – that lower death counts are favoured because they are low (eg, Medialens’s claim that IBC’s figures are used “because they are very low”). In order to promote this “truth”, Medialens has to effectively airbrush a large amount of reality out of the picture. For example, the reality that the 2006 Lancet estimate of 601,000 violent deaths received far more media coverage than the much lower WHO estimate of 151,000 violent deaths.

The Lancet and WHO surveys were directly comparable (unlike Lancet and IBC) – both were peer-reviewed epidemiological studies, and they covered the same period. The WHO survey had a larger sample, better documentation and arguably superior quality control – several prominent experts have made clear their preference for it over the Lancet study (eg UN epidemiologist Paul Spiegel, and renowned demographer Beth Osborne Daponte). So, a media which favoured “lower” death counts while “ignoring much higher death counts altogether” (to quote Medialens) had the perfect, credible epidemiological study to counter (or “dismiss” or “smear”) the Lancet estimate.

But since reality is somewhat different from Medialens’s “truth”, things didn’t work out that way. Whereas the 2006 Lancet study enjoyed headline coverage on BBC1 News and BBC2 Newsnight on the day it was published, the WHO study didn’t get a single mention on the main BBC programmes on the day of its publication, and has rarely been mentioned since.

How do the Medialens editors respond to this piece of reality which is so relevant to their media criticism? By not mentioning it. While they have cited the Lancet estimates repeatedly in their prolific writings on Iraq, they’ve been virtually “silent” over the WHO estimate (I count only one direct mention in the whole of their published output – plus one other mention in a quoted email from a third party).

So much for inconvenient aspects of reality. But for Medialens to attract followers, their “truths” must presumably mesh with reality in some “resonant” ways – and this is obviously the case. Large parts of the media clearly do have a lot to answer for over Iraq. At times the reporting was shockingly inept or “subservient to Power” (my mind goes back to the BBC’s “coverage” of France’s intention to veto the UN resolution).

There were important exceptions to this “rule”, however, as Piers Robinson et al document. John Pilger, too, has written that two national newspapers (the Independent and the Mirror) were “anti-war” (as was the Guardian, to a lesser extent, Pilger claims). This is echoed by Robinson’s findings:

According to Robinson, the Telegraph, Times and Mail were “generally supportive, with most of the coverage falling in line with the coalition PR campaign”; whereas the Independent, the Guardian and the Mirror “were quite remarkable for the degree of criticism that they engaged in, even during the invasion phase, which according to the academic orthodoxy is quite a departure from the way a lot of communications scholars understand the media”. (Journalism.co.uk, September 24, 2010)

This isn’t what Medialens wants to hear. In Medialens’s worldview, the Guardian and Independent are not allies, they are “complicit in war crimes”. Medialens’s editors claim that this is something they have “documented repeatedly”. But this isn’t the case. What they’ve documented is highly selective – a tiny (relative to the entirety) subset of examples to illustrate their claims. Sometimes the examples are good, sometimes feeble (as in the above Iraq mortality cases). The examples are taken mostly from the newspapers’ opinion pages – there’s also a selection of email dialogues with journalists (two prominent journalists have told me that Medialens excluded the parts of their dialogues which tended to contradict Medialens’s “truth”). Again, an analytical approach is lacking – there’s an unwillingness or inability to account for the extent of real-life examples which run counter to their “truths”, their blanket generalisations.

One of the bizarre things I witnessed while following Medialens in the run-up to the Iraq invasion was the almost daily posting (to Medialens’s message board) of links to antiwar comment pieces and cartoons in the Independent and Guardian – the response to these from Medialens’s subscribers was typically: “excellent article!”, “brilliant”, “spot on!”, etc. These direct reactions to a significant aspect of media reality were rarely allowed to interfere with the Medialens-promoted consensus – the “deeper truth” that these two newspapers were “complicit” in the move towards war.
