Concerning "The Wire" and Measurement: How does politics affect public relations measurement?
So I just finished watching the fourth season of The Wire. (The Wire is an HBO TV series about Baltimore. I watch it on DVD, and it's got to be about the best TV ever. Right up there with The Sopranos and Deadwood.)
One of the themes of the fourth season was "juking the stats," the way the police and education departments distort their results to make themselves appear more effective than they actually are. For instance, the cops would purposely not investigate homicides in order to keep the official city homicide rate low. And the schools would classify kids who were reading as much as two years below their actual grade level as "proficient readers."
The big point here is that measurement, no matter how solid the data, will not be effective if the people doing it have a strong incentive to misrepresent the results. There must be a lot of public relations measurement programs that never get reported, or get distorted in the way they are reported, because the results are inconvenient or embarrassing or otherwise unwanted. So what good is it to have the best measurement tools and the most accurate measurement program when the results are destined by office politics to be ignored or distorted?
Here at The Measurement Standard we spend a lot of time and effort trying to convince people that measurement can improve the effectiveness of their public relations, and showing them better and more accurate ways to collect and analyze data. But we never talk about the politics behind measurement as an important aspect of the process. (Well, Katie and I do mention "getting buy-in from everyone involved" in her book, but only very briefly. As if all you have to do to get around awkward politics is to hand people a signoff sheet or something.)
So my purpose here is to propose that maybe we have been ignoring a very important part of the process. Maybe the office politics behind measurement can be just as important as actually getting the right data. (I notice I am not the first to bring this up. See Victor Tan Chen at In The Fray.)
So how can we get a handle on this? Who's doing research on this? How can we get an idea of how many measurement programs have been scuttled, or are languishing on a shelf somewhere, because the results were not quite what someone, or everyone, had hoped? -- Bill Paarlberg, Editor, The Measurement Standard