Today's NYTimes has an article about the huge disparities in U.S. immigration courts, with some surprising data on how widely asylum decisions vary between court locations and between judges within the same court. In a system that is meant to apply uniform judgments across cases, there is some serious variability in decision-making: "...Colombians had an 88 percent chance of winning asylum from one judge in the Miami immigration court and a 5 percent chance from another judge in the same court." And: "...someone who has fled China in fear of persecution and asks for asylum in immigration court in Orlando, Fla., has an excellent — 76 percent — chance of success, while the same refugee would have a 7 percent chance in Atlanta."
We mention this here as a warning to those who do PR measurement and depend on human judgments for their data or analysis. Suppose, for instance, you were doing media content analysis and your human coders showed as much variability as those judges? Good luck getting useful results there.
Of course, everyone doing media analysis guards against that sort of bias by doing intercoder reliability assessment, right? Right?
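For readers who haven't run one, an intercoder reliability check can be as simple as having two coders label the same sample of items and computing a chance-corrected agreement statistic such as Cohen's kappa. Here's a minimal sketch in Python; the coders, items, and sentiment labels are hypothetical:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled the same.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of six news items by two coders.
a = ["pos", "pos", "neg", "neu", "pos", "neg"]
b = ["pos", "neg", "neg", "neu", "pos", "neg"]
print(round(cohens_kappa(a, b), 2))  # → 0.74
```

A kappa near 1 means the coders agree far beyond chance; a value near 0 means their "analysis" is little better than coin-flipping — the media-measurement equivalent of the asylum numbers above.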
Wrong, apparently. In fact, CARMA claims that: "CARMA Asia Pacific is the only commercial media analysis firm that carries out intercoder reliability assessment."
Doesn't that make you wonder about the reliability of your data? --WTP