What we didn’t learn in school, but did learn from our mistakes.
It's that back-to-school time of year again, so we thought we'd focus on learning—as in learning from your mistakes. Or in this case, learning from our mistakes.
Here at KDPaine & Partners we have a long-standing tradition: Every month we award $50 to the person who makes the biggest mistake in public relations or social media measurement. “Biggest” being defined as the one we learned the most from. Having run my own companies for 23 years, I can tell you we’ve made some doozies along the way. And I'm happy to say we have learned from them.
(Here is a previous Measurement Standard article about public relations measurement mistakes and how to learn from them: How To Really Mess Up Your PR Measurement Program: Ten fatal PR research mistakes and how to avoid them.)
So, we figured, why keep all this great learning to ourselves? Let’s tell the world and, with luck, none of us will make the same mistakes twice. (Around here, making the same mistake twice is a surefire way to win the “Mistake From Which We Learned the Least” award: kitchen clean-up duty.)
In no particular order:
1. No change is “minor.”
I can’t tell you how many people have asked me to make a “minor” change to the publication list, the search terms, or the methodology. In truth, when you’re doing research, no change is ever really “minor.” Every change you make has an impact on consistency, continuity, and how you present results. Say you want to recategorize a product area from “software” to “solutions.” While it is relatively easy to go back into a database and change that wording, it means that all your charts and tables need to be redone, not just in PowerPoint but in the database itself. Adding another competitor is even worse: then you really do need to go back and reread everything you’ve already read, this time looking for that additional competitor.
2. Maintaining a consistent universe is imperative.
So when you find that perfect blog post in a blog you hadn’t been monitoring? Well, it’s a lot more complicated than just adding it to your database. First of all, it’s outside the defined universe of the study. So once you change the universe, you either need to go back to the beginning and read all previous posts from any additional media outlets, or you need to make sure there’s a callout on every chart explaining the date at which you changed the criteria. Additionally, if it’s a competitive analysis, you need to go back and search any additional publication for news of the competition.
3. “It’s no big deal, we really only get a few hundred clips...”
That may have been true in the past, but probably not if you are launching a new product, changing your marketplace, or changing how you do business. All of these things will increase the amount of coverage you are liable to get. So just because you historically got 500 clips a month doesn’t mean you’ll get 500 every month going forward. In fact, many a client has thought they get 500 clips, only to find that when they “just add social media” the number jumps to 5,000 or more.
4. “Can we just add a little bit of social media?”
In our experience, once you begin to include social media in your measurement program, you can expect the volume to increase ten-fold. So no matter what, this is not a minor increase. It will increase your costs, your volume, and the complexity of your report.
5. “Now that we’ve made that change, can you just update the last report?”
Yes, we can, but it will cost you as much as it did to create the first one. Redoing any sort of research report takes just as much time the second and third time as it does the first. Everything still has to be checked and rechecked, and all the notes, comments, headlines, and conclusions will change if the criteria and the charts change. So there is no such thing as “just updating.”
6. “I want to add a few more key spokespeople...”
No problem, if you are only looking at your own clips. But if this is a competitive analysis and you are looking to get a sense of whether you are getting your fair share of quotes, if you add ten new spokespeople, I also have to find their counterparts at the competition and add them in as well. And, by the way, the same is true for “I want to add some local coverage...”
7. “I know you only random sampled the data, but I want the total number of mentions and impressions...”
Lesson #1 in statistical analysis: Random sampling works because it gives you a statistically valid sample of the whole. That means percentages are valid, but the “total” numbers cover only a fraction of the entire universe of your clips. So if you get 10,000 mentions and you take a random sample of 20% of them, that’s 2,000 items. The total number of mentions analyzed is 2,000, even though we started with a total of 10,000. Which means that metrics like OTS and total volume of mentions do not apply.
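To see why the percentages survive sampling while the totals don’t, here’s a minimal Python sketch with made-up mention data (the 30% positive rate and the tagging scheme are purely illustrative):

```python
import random

random.seed(42)  # reproducible for the example

# Hypothetical universe of 10,000 mentions; roughly 30% are positive.
universe = [{"positive": random.random() < 0.30} for _ in range(10_000)]

# Draw a 20% random sample: 2,000 items.
sample = random.sample(universe, k=2_000)

# Percentages from the sample are valid estimates for the whole universe...
pct_positive = 100 * sum(m["positive"] for m in sample) / len(sample)
print(f"Estimated positive share: {pct_positive:.1f}%")

# ...but any "total" describes only the 2,000 items actually analyzed,
# so sample counts can't be reported as total mentions or OTS.
print(f"Mentions analyzed: {len(sample):,} of {len(universe):,}")
```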
8. Not asking enough questions.
Too many people think that creating a survey is just a matter of plugging a few questions into Zoomerang and sending it out to clients. Wrong. There is a right way and a wrong way to ask questions, and you need to test every question extensively to make sure it will get you the answers you need. The other element you need is variables: make sure you can parse your data by all the variables necessary to make good decisions.
For example, we recently did a nationwide survey for a major non-profit. The initial results were puzzling, because they revealed that overall awareness had declined, even though the organization had tripled its media exposure in the past year. That kind of result makes data geeks like me nuts, so we delved into the data to figure out why. Fortunately, we had asked enough qualifying questions to determine that the results varied tremendously by geographic region and by familiarity with the cause. Awareness among those familiar with the cause jumped 20%. But for those who didn’t care, there was no bump at all. Not entirely surprising, but it shows just how misleading national figures can be. So you need to make sure you ask enough questions to generate enough variables to determine cause and effect.
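The practical payoff: if the qualifying questions made it into the data set, segmenting is trivial at analysis time. Here’s a minimal sketch, assuming the responses sit in a pandas DataFrame; the column names and the handful of rows are invented for illustration:

```python
import pandas as pd

# Hypothetical survey responses; columns are illustrative only.
responses = pd.DataFrame({
    "region":   ["Northeast", "South", "Northeast", "Midwest", "South", "Midwest"],
    "familiar": [True, False, True, False, True, True],
    "aware":    [True, False, True, False, True, False],
})

# The single national number hides the story...
print(f"Overall awareness: {responses['aware'].mean():.0%}")

# ...so slice by the qualifying variables you remembered to ask about.
print(responses.groupby(["region", "familiar"])["aware"].mean())
```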
9. Insufficient testing and/or unclear directions.
Here at KDPaine & Partners, every project we start has a clear – and tested – methodology. But that doesn’t mean that a new researcher, coder, or analyst can’t misread or misunderstand the data. So we make sure that after we collect and analyze the first 50 data items (blog posts, tweets, or survey responses) we review all the data to make sure we are getting exactly what we need.
10. Inconsistent date ranges.
One of the most exciting elements of any measurement project is being able to correlate various types of data to observe impacts. By running parallel data streams from Google Analytics against media mentions, you can learn all kinds of things about what moves the needle. But that only works if you have sufficient, parallel data points. So if your media mentions are collected and reported weekly, but you are only getting monthly data from your web analytics team, it doesn’t work. You need to ask the analytics team to feed you data weekly so you have enough data points to get meaningful correlations.
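As a sketch of what “sufficient, parallel data points” buys you, here’s a hypothetical Python example using pandas (all numbers invented) correlating weekly mentions with weekly site visits:

```python
import pandas as pd

# Hypothetical weekly series; in practice these would come from your clip
# database and a Google Analytics export at the same weekly grain.
weeks = pd.date_range("2011-01-03", periods=12, freq="W-MON")
mentions = pd.Series([40, 55, 38, 70, 65, 52, 90, 85, 60, 75, 95, 80],
                     index=weeks)
visits = pd.Series([900, 1100, 950, 1400, 1350, 1000, 1800, 1700,
                    1200, 1500, 1900, 1650], index=weeks)

# With matched weekly data points, a simple correlation is meaningful.
print(f"Weekly correlation: {mentions.corr(visits):.2f}")

# If analytics arrived only monthly, these 12 weeks would collapse to
# about 3 points -- far too few to say what moves the needle.
print(pd.concat({"mentions": mentions, "visits": visits}, axis=1)
      .resample("MS").sum())
```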
11. Trying to attach impression numbers (OTS) to social media.
My advice: Don’t even try. The problem with social media is that there isn’t any standard data set, like SRDS or the old publication audit companies, to verify the number of eyeballs that actually see whatever you are putting out there. The best available option is a service like Compete, which uses a panel system to determine a rough “circulation” number. The catch is that while Compete is great at determining impressions for major outlets like The Huffington Post, it can’t help when your mention appears on what traditional media would call an “inside page,” written by Sexy Susie, who describes herself as a counselor, dating coach, trainer, and relationship blogger. Compete can’t tell you how many people see her post because she writes on a subdomain of a subdomain. I have three active blogs myself, but in those panel systems each one defaults to TypePad, the service that hosts them.
12. Measuring what’s easy rather than what matters.
The first law of metrics is to measure only what you can change. So if the CEO’s spouse is your official blogger, chances are no amount of data will stop him or her from publishing. Likewise, you become what you measure. If you measure activity, you do more activity. If you measure outcomes, you figure out how to get better outcomes. If you measure “likes” on Facebook or followers on Twitter, you will no doubt spend most of your time collecting “likes” on Facebook, and get some great numbers. However, if there is no evidence that Twitter or Facebook influences sales, market share, or anything else your boss is counting, then you’re wasting a heck of a lot of time.
You need to understand what matters to the business and how you contribute to that effort. Then measure that, rather than implementing some shiny new object that purports to measure success but is in fact just a proxy for activity or hits or whatever other meaningless number is out there.
Katie Delahaye Paine is the CEO of KDPaine & Partners, a company that delivers custom research to measure brand image, public relationships, and engagement. Write her to talk about designing a measurement program for your company or organization.