The other day I was re-reading Ed Moed's blog post about how much media coverage depends on what other events happen to be occurring at the same time. His point, in part, was that events get pushed out of the media by other events, and that what we think of as news actually depends greatly on what other news is also being covered.
And this got me to thinking about how public relations measurement typically measures media outputs with media content analysis, but -- to my knowledge -- does not typically take the news environment into account.
My point is that Article X might result in ten zillion impressions, but if it was published on a day dominated by other big news (an extreme example would be 9/11/01), then those impressions just aren't worth the same as they'd be on a slow news day.
We all recognize that the news hole shrinks when something else big is hogging all the coverage. But what we're talking about here is not whether or not there's room for an article to get published, but whether or not there is room in the reader's head for another article. Or conversely, if there is no other news in a reader's head, does an article somehow gain extra importance?
I asked Katie Paine about this and she said she recognized the phenomenon from experience. As to how to deal with it, measurement-wise, she emphasized the importance of human experience behind data analysis: "That's actually why [KDPaine & Partners'] human intervention and hybrid approach is so important. The automated systems just give you numbers, we give you an explanation."
Does anyone out there adjust or modify their media analysis results to take into account the news environment? (I mean formally, with some kind of multiplier or variable that corrects raw impressions data for the impact of other news activity on the reader.)
Some might argue that correcting for the news environment doesn't really belong in media analysis, and that it is part of some later stage of public relations' impact on the public's mind. If we put that in terms of outputs, outtakes, and outcomes, then what we are talking about here happens somewhere between outputs and outtakes: the news environment sits between them, and it may enhance or inhibit the effect of impressions.
To my knowledge, measurement does not formally take this "news environment" phenomenon into account. How does the news environment affect PR's effectiveness? How does PR affect a person differently according to what other news they are experiencing?
In "Measurement's Empty Head: Measurement ignores the most complex part of PR" I wrote something that's pertinent here:
Contemporary PR measurement treats the mind of the media consumer like it's some sort of an empty box to be filled: The media dumps in impressions and we measure the outtakes and outcomes. But human beings aren't just empty heads, and they don't consume media in a simple and rational fashion... Measuring impressions is a whole lot easier than measuring what happens to those impressions once they're inside a person's head... What happens there is probably the most complex part of how PR happens, yet state-of-the-art PR measurement doesn't take it into account.
--Bill Paarlberg
Bill Paarlberg's September 7 comment that media measurement needs to take into account the media environment - in particular the amount of other news on the day, which affects likely reader attention and impact - is a good one. Katie Paine has partly answered his question.
This is why good analysis must take account of context as well as the text. And that, in turn, is why media analysis must involve humans as well as automated machine coding and data crunching. Machines can read and categorise text fairly well. But they cannot 'see' context.
I have written in papers about four reasons why humans must be involved in research:
1. Contextualising – computers can read text, but cannot ‘see’ context (i.e. what is outside the text and data, referred to as exogenous information in analysis, and often important to interpretation);
2. ‘Pretextualising’ – my word for bringing to the analysis pre-existing in-depth knowledge of an industry or field using specialist analysts;
3. Writing recommendations; and
4. Talking to clients to explain and interpret research (up to and including consulting).
Data is important, but it has to be interpreted. Otherwise it's just a bunch of numbers. That's why PR practitioners have to get beyond believing that automated 'black box' tools will do all the work for them.
Posted by: Jim Macnamara | September 20, 2007 at 03:55 PM