This is one of a series of posts by Forrest Anderson on how to improve your surveys. Visit his Reputation, Research, Relationships and Messages blog to read them all.
by Forrest Anderson — In my earlier blog post, 11 Tips for Doing More Successful Online Surveys, the third tip was "Design the Research Correctly." By this I mean: Don't do an online survey if some other kind of research is more appropriate.
Online surveys are a great research tool, but they are only one tool. Alternative tools are:
###
Forrest W. Anderson is a 30-plus-year veteran consultant in developing message architecture and managing corporate relationships and reputations. He works with organizations that are going through a change in strategic direction and that are concerned about what will happen if they mismanage their relationships with their key stakeholders. Forrest is a member of the IPR's Measurement Commission.
The Measurement Standard is a publication of Salience Insight. Salience Insight is the media measurement division of News Group International – a global provider of business intelligence and media resource services. Salience is a fresh, new global brand which incorporates the former UK-based Report International and US-based KDPaine & Partners, acquired in 2012.
Angela Jeffrey's Salient Insights
A few months back, I shared my sad tale of woe regarding missing the boat on social media measurement. I also shared how I remedied the situation through an intensive nine-month study of books, articles, white papers, conferences, and more. In the end, I distilled what I’d learned into a practical and comprehensive “how-to-do” guide that was published by the Institute for Public Relations: “Social Media Measurement: a Step by Step Approach …using the AMEC Valid Metrics Framework.”
If you too are a social media measurement dinosaur, you may want to download the paper right now! Or, if you’d like a more gradual approach, join me as I reprise the paper in an ongoing series for this column.
As a TMS reader, you already know many of the efforts that have been made to establish standards and guidelines for both traditional and social media measurement. These include:
Any of these links will take you a long way towards understanding what constitutes good measurement, and toward developing a solid measurement program. But over the coming months, I will focus on two measurement tools that should simplify everything:
1. The AMEC Valid Metrics Framework
I am a big fan of the original Valid Metrics Framework, and also of its newer adaptation for social media measurement by Richard Bagnall, Board Director for Gorkana, and Don Bartholomew, Senior Vice President Digital and Social Media Research for Ketchum.
The original Framework actualized the Barcelona Principles with eight different measurement matrices, each providing an array of metrics ideas for assessing the three phases of PR (PR Activity, Intermediary Effects, and Target Audience Effects) through the communications funnel. The matrices address different kinds of PR programs, including product/brand, reputation, crisis, non-profit, issues, education, and more. I will be sharing both the original version (since most programs involve both social and traditional media) and the newer Bartholomew/Bagnall social media model within this series.
2. The Eight-Step Social Media Measurement Process
This is a process I developed to make using the AMEC Valid Metrics Frameworks simpler, and to build on earlier work by Katie Paine. Not only does this step-by-step process utilize the Framework, it also incorporates insights, comments, and tools suggested by other industry luminaries to smooth your journey.
Here is my Eight-Step Social Media Measurement Process:
I will leave off here for today, and pick up with “Step One: Identify Organizational and departmental goals" in the next installment. And don’t forget: if you have any measurement needs or questions, I am just an email away at Angela.Jeffrey@SalienceInsight.com.
###
Angela Jeffrey is the new Managing Director U.S. for Salience Insight. A recognized measurement evangelist, thought leader, writer, and speaker for PR measurement and evaluation, Ms. Jeffrey created PRtrak™, one of the first analysis tools to cover print, broadcast, and internet coverage. Most recently, she founded and managed MeasurementMatch.com, a high-level consultancy matching PR clients with measurement providers. She is a long-time member of the IPR Measurement Commission and participates actively with AMEC North American Chapter.
Bot visits are 62% of your web traffic, and half of that is malicious. This unsettling news comes from Incapsula's recent analysis of 1.5 billion bot visits to 20,000 websites in 249 countries. They provide a handy infographic breaking down the unfriendly bot traffic into scrapers (5%), hacking tools (4.5%), spammers (0.5%), and other impersonators (20.5%). Visit the Incapsula Blog to see the entire infographic (to the left is only a detail) and read their report.
The silver lining: Most bot visits don't execute Google Analytics' JavaScript tracking code, so most of this non-human inundation is not swamping your GA traffic reports. However, some bot traffic does show up there, and you'd be wise to learn to identify it. See the Blast Blog, Dave Buesing, and, of course, Avinash Kaushik himself. --Bill Paarlberg, editor (Thanks to Business Insider's Chart of the Day for the tip.)
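The basic idea of spotting likely bot hits can be sketched with a simple user-agent check. This is only an illustration: the log entries and marker list below are invented, and real bot identification (like Incapsula's) relies on far richer signatures and behavioral signals.

```python
# Illustrative sketch: flag likely bot hits by user-agent substring.
# The marker list and sample user agents are hypothetical.

BOT_MARKERS = ("bot", "crawler", "spider", "scraper", "curl")

def is_probable_bot(user_agent: str) -> bool:
    """Return True if the user-agent string contains a known bot marker."""
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

hits = [
    "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "python-requests/2.0 scraper",
]

human_hits = [ua for ua in hits if not is_probable_bot(ua)]
print(f"{len(human_hits)} of {len(hits)} hits look human")
```

A substring check like this catches only self-identifying bots; the malicious impersonators in Incapsula's report deliberately mimic human browsers and evade it.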
“NO! Layers. Onions have layers. Ogres have layers. Onions have layers. You get it? We both have layers.” – Shrek (2001)
by Michael White, Keene Communications
Stop declaring that your agency’s social media campaign is the most revolutionary piece of communications work since the Internet was invented. Social media has already happened and there's something much bigger on the way. If you think digital developments over the last ten years have been radical, prepare yourself for the next three years.
To explain, I need to tell you why Shrek, onions, and the Internet have so much in common: All three have layers. In its simplest form, the Internet can be divided into the human-readable content and the scripted technical structure that builds pages and links them together. These layers co-exist to form the World Wide Web. It is this very framework that allows our web browsers to display content in seconds, delivered by servers dotted all around the world.
In the public relations (PR) industry this system has become the foundation behind our social media campaigns and interactive marketing websites. When PR professionals attend fancy social media events to boast about their latest and greatest digital campaigns, remember this: None of their hard work would be possible without the Internet onion.
The Problem for PR: The Expanding Internet
We, the communications industry, are digital, and agency work is increasingly about hyperlinks rather than physical press clippings. And as the Internet grows ever larger, we all face a huge challenge: Just how are agencies meant to deliver for clients when we must monitor the entire—and expanding—digital universe?
The answer lies in software. Offering any accurate media monitoring service today is impossible without its assistance. Some degree of automated media detection is needed, and beyond that we need systems that can analyse web pages for meaning. This is especially true for online reputation management services, where it’s necessary to review a great many articles to understand different themes, who’s involved, and emerging story patterns. Online reputation management today is big data.
The Solution Is On the Way: The Semantic Web, Readable By Software
At Keene Communications we believe the solution for the "reputation management monitoring problem" the industry currently faces will arrive in the next upgrade of the Internet. This will be what the inventor of the World Wide Web, Sir Tim Berners-Lee, alluded to in 2001 as the Semantic Web. This new layer of the Internet onion extends its traditional boundaries to a web of data beyond the control of individual applications. The Semantic Web comprises specific web-scripted markup tags which make the meaning of the content of web pages readable by software. This is a significant upgrade to the traditional structural tags, which currently only inform browsers of the physical layout of web pages. The actual metatags behind the Semantic Web have been around since the late 1990s, and are known as the Resource Description Framework (RDF).
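The triple model behind RDF can be pictured with a toy sketch: content is described as subject-predicate-object statements, and software links pages through shared meaning rather than hyperlinks. The data below is invented; real RDF uses full URIs and serializations such as RDF/XML or Turtle.

```python
# Minimal illustration of the RDF idea: content described as
# (subject, predicate, object) triples. All identifiers are invented.

triples = [
    ("page:article-1", "mentions", "org:AcmeCorp"),
    ("page:article-1", "topic", "topic:reputation"),
    ("page:article-2", "mentions", "org:AcmeCorp"),
    ("page:article-2", "topic", "topic:product-launch"),
    ("page:article-3", "topic", "topic:reputation"),
]

def pages_about(entity: str) -> list:
    """Find every page whose triples point at the given entity."""
    return sorted({s for s, p, o in triples if o == entity})

# Link pages through shared meaning, not hyperlinks:
print(pages_about("org:AcmeCorp"))       # pages mentioning AcmeCorp
print(pages_about("topic:reputation"))   # pages on the reputation topic
```

The point of the sketch: once meaning is expressed as data, "find everything about X" becomes a query over triples rather than a keyword search over text.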
The Semantic Web and Semantic Analysis
When it comes to the Semantic Web, it’s important to make a distinction:
Semantic analysis software will drive our ability to monitor and comprehend the web’s big data going forward. This software will understand the semantic meaning of web pages, link similar pages together through RDF (rather than hyperlinks), and enable content analysis. This means much, much more than just matching keywords on social networks; it means navigating the Internet through the content that is published.
This isn’t science fiction: examples of the Semantic Web are cropping up all over the place. Just look at how search goliath Google operates. When a user enters a search term, an exact lexical match is a poor basis for the query, because different words can carry the same meaning. If results were delivered simply by matching the words in “cheap gardening spades shop,” Google would return millions of extraneous results.
The search engine version of semantic analysis examines the patterns of words across indexed web pages (among other methods). In layman’s terms, the process assumes that similar words are used in related contexts, so meaning can be inferred from the relationships between words. This form of analysis will be refined even further if and when Tim Berners-Lee’s vision of the Semantic Web comes to pass. In practice, it is the responsibility of developers across the world to insert the correct semantic tags into web pages, not just metadata for search engine optimisation.
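A minimal sketch of that "patterns of words" idea, assuming a simple bag-of-words model with invented documents (real search engines use far richer techniques):

```python
# Toy sketch of word-pattern matching: score document similarity by
# cosine similarity of raw word counts. The documents are invented.

import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

query = "cheap gardening spades shop"
docs = [
    "affordable garden spades and trowels for sale",
    "cheap spades shop discount gardening tools",
    "quarterly earnings report for a software company",
]
best = max(docs, key=lambda d: cosine_similarity(query, d))
print(best)
```

Even this crude version ranks the on-topic document first and scores the unrelated one at zero, which is the intuition behind the far more sophisticated models search engines actually use.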
Sometimes semantic analysis even makes its way into smaller mainstream tools. At London’s eCommerce Expo earlier this year I learnt how a basic semantic engine underpins the popular home cooking app Whisk, allowing it to crawl recipe websites and deliver machine readable content to supermarkets.
Keene Communications is researching semantic analysis in response to the rapidly growing need to monitor increasing amounts of reputation data. Each week we get closer to building a fully automated semantic analysis system, and we anticipate good news on this in late 2014. We are developing a news delivery system which will ultimately be able to automatically categorise different stories. This will allow us to accurately track our clients’ reputations even as the web continues to expand.
For the PR industry the Semantic Web offers countless opportunities to enhance and redefine our offerings for clients. Its capabilities will not only be a giant leap forward when searching for viable methods to measure reputation online, but will also help re-imagine stakeholder analysis.
The Semantic Web is a brave and ever-closer new world. I predict that in the next three years it will overshadow the hype we still see around social media.
And what if my prediction doesn’t come true? I’ll sit down, watch "Shrek" and eat a raw onion.
###
Michael White is a consultant at Keene Communications. He devises and executes digital and social media campaigns. Email him at michael[at]keenecomms[dot]com or tweet him at @keenecomms.
Here's a quick 3-step refresher course on how to use Google Analytics for PR, thanks to an exchange of blog posts earlier this year by two experts on the topic: Justin Cutroni (he wrote the O'Reilly book on GA) and Andrew Bruce Smith (he runs a GA training course for CIPR).
Step 1: From the Basics to Your Very First Dashboard
Start by reading Mr. Cutroni's post Google Analytics for PR. He shows you how to use GA to answer four basic questions about your website:
He also includes some explanation of real-time monitoring and how to set up a custom dashboard with advanced segments. You want an automated email of your dashboard from Google every day? He's got a link to that, too.
Step 2: You Got a Problem with Google Analytics?
Next read Mr. Smith's post, (prompted by the one above) and note his discussion of some of the challenges for PR people using GA:
You might recognize yourself in there, or some difficulties that you have faced.
Step 3: Upgrade to Serious Dashboard Chops
Now go back to Mr. Cutroni's blog for another post, "A PR Dashboard for Google Analytics." He's built a GA dashboard with typical PR metrics that you can install for your site with three clicks. (Really, I just did it for The Measurement Standard.) He takes you through each section of the dashboard and shows you how to use and customize it.
###
Bill Paarlberg, Editor of The Measurement Standard, has been writing about public relations measurement for 20 years. He is editor of the award-winning "Measuring the Networked Nonprofit" by Beth Kanter and Katie Paine, and editor of two other books on measurement by Katie Paine, "Measure What Matters" and "Measuring Public Relationships." Visit Bill Paarlberg's page on LinkedIn.
You've seen articles like this recent one at Ragan's PR Daily: 10 Top Tools for PR Measurement. Those ten are probably great tools, but I doubt if they are actually the best tools for you to use right now for that project on your desk.
I wish I could tell you the #1 public relations and social media measurement tool for you. But I can't. It's just too complicated.
The thing is, PR and social media measurement programs vary a lot. And there are many, many different tools available. Does that mean it's difficult to pick the best one for your project? Is it really a tough thing to decide which measurement tool is the right tool for the job?
Sounds like a great idea for a future article. For now, let's ask a couple of measurement heavy hitters how they decide:
Angela Jeffrey says, “The best advice I can give for choosing measurement tools is: Don’t make ‘free’ your number one criterion. You can spend all your time screwing around with the latest this and that, and never get the job done. Do a deep dive into researching what is available; make the case for some budget for the tool that brings the most functions into a single place; and buy it!”
Katie Paine says, "I've looked at a lot of tools, but I typically go with my all-around favorite: Excel. Within Excel you can create a database of your items, score them, create a Pivot Table and then do just about any analysis that you need. In the data toolset, you can run basic correlations, T-tests, r, and a bunch of other stats, against web analytic data and/or Facebook insights or Hootsuite data. If I was on a desert island and only had one tool, my choice would be Excel."
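For readers who prefer code, the same Pearson correlation that Excel's CORREL function computes can be sketched in a few lines of Python. The weekly figures below are invented for illustration.

```python
# Pure-Python version of the kind of correlation Katie Paine describes
# running in Excel: Pearson's r between weekly media mentions and
# website visits. All figures are invented.

import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

mentions = [12, 18, 25, 30, 42]      # weekly media mentions
visits = [400, 520, 700, 810, 1100]  # weekly site visits

print(round(pearson_r(mentions, visits), 3))
```

An r near 1.0 says the two series move together; as always, correlation on its own says nothing about whether the mentions caused the visits.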
Stay tuned for more on this topic. -- Bill Paarlberg, editor (Thanks to joe-ks.com for the image.)
###
Seth Grimes’ recent post, “Never Trust Sentiment Accuracy Claims,” was probably designed to set off a lively exchange about social media analytics. It worked. His challenge to the validity of benchmarking accuracy against human analysts, or accuracy as a goal altogether, raises some of my favorite points.
(Full disclosure: I work for SAS, where we use both human and automated analysis. Our SAS® Social Media Analytics employs automated sentiment analysis.)
I love Seth’s core question: Is “as good as human” a valid objective? Yes, and no, I think.
After all, are we looking for “better” than humans, or more consistent? Is positive defined as containing positive keywords associated with a brand, or is it, rather, something we like? If it’s the latter, you need humans – but they’d better be well trained. If it’s the former, consistency wins.
We see these issues pop up all the time. Let’s start with who is good at what.
Humans, compared to computer chips, are better at extrapolating. For example, SAS offers a category of solutions we call “customer intelligence.” (Problem is, few others call it that.) I can teach human coders and computers alike to look for terms, e.g., CRM, marketing automation, marketing optimization, customer segmentation, loyalty program, and so on. But real people often talk around marketing terms without using the actual terms. So we teach our human coders the concepts behind these terms, and then they can spot other ways of describing marketing activities managed via technology. Computers can’t do that. While they can make some assumptions based on similarity, they still need the terms.
Computers, on the other hand, don’t make mistakes (assuming you’ve programmed them correctly). They don’t get distracted or change their minds. More importantly, they have no opinions. Example: When Bank Systems & Technology picks up our press release about SAS’ new security analytics platform, I’m thrilled. But is it positive? Not according to the computer, which looks for sentiment indicators. Simply repeating the availability of the product is neutral, a statement of fact. Computers can resist the urge to take intent into account when scoring. So, points for consistency over comprehension, right?
Well, it depends.
It’s the question that will never die: What do you want to measure? In this case, is it expressions of 3rd-party opinion or successful efforts to plant key messages?
And what happens when you encounter a word that screams “negative” for some but not for others? Most standard sentiment taxonomies rightfully consider “fraud” to be a negative term. But SAS offers a product aimed at detecting and preventing fraud, so for us it’s neutral. As are terms like “killer” (as in “killer app”). For one of our customers, “f***ing amazing” is what they strive for. No plug-and-play vendor can customize sentiment to accommodate individual needs.
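That customization point can be sketched as a toy lexicon-based scorer with per-organization overrides. The lexicon and headline below are invented, and real sentiment engines are far more elaborate than word counting.

```python
# Toy lexicon-based sentiment scorer with per-organization overrides,
# illustrating why "fraud" is negative by default but neutral for a
# vendor of fraud-detection software. Lexicon entries are invented.

DEFAULT_LEXICON = {"amazing": 1, "great": 1, "fraud": -1, "terrible": -1}

def score(text, overrides=None):
    """Sum word scores from the default lexicon, with optional overrides."""
    lexicon = {**DEFAULT_LEXICON, **(overrides or {})}
    return sum(lexicon.get(word, 0) for word in text.lower().split())

headline = "new software catches fraud with amazing accuracy"

print(score(headline))                 # default lexicon: nets to zero
print(score(headline, {"fraud": 0}))   # fraud-vendor override: positive
```

The same headline flips from neutral to positive once the client's own meaning of "fraud" is taken into account, which is exactly why transparency about how sentiment is derived matters.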
So a lot comes down to defining your terms – “positive” in this case or, maybe “influence” in another (that’s for another column). But even more comes down to transparency and flexibility. Be skeptical. If your vendors can’t explain to you how they derive sentiment, step cautiously. If they say “natural language processing” but talk only keywords, send up a flare. And if they make lofty claims of accuracy and expertise before they understand your needs and the context, change the locks.
###
Diane Lennox is PR Services Manager for SAS, the leader in business analytics software and services. A 30-year veteran in marketing communications and writing for all media, she has spent the past six years supporting SAS' internal PR agency by managing the Global PR Resource Center (internal), acting as international liaison with dozens of country PR managers, guiding PR measurement and monitoring, overseeing communications and media training, supporting the blogging and social media program and providing SEO guidance. She does not do windows.
(Thanks to Across the Litoverse for the image.)
The Top 5 Unrealistic Expectations For Starting A Measurement Program
Expectation #1:
You expect to have a “measurement number” that will justify your budget and answer all questions.
Reality #1:
First of all, measurement isn’t a single number. Your CFO doesn’t go into the Board of Directors meeting with one number. So why would Communications — made up of so many different components — think it can demonstrate results with one number?
Secondly, measurement is a long-term process. It will take you time to identify changes and trends. Your manufacturing VP doesn’t report on the productivity for today, he or she reports on changes in productivity month-to-month, quarter-to-quarter, and year-to-year. That’s what you’ll be doing, too.
Expectation #2:
You’ve just gone through an incredibly complicated screening process to find the right vendor, and now you expect everything will be simple and easy.
Reality #2:
Like any research project, there’s a lot of data involved. And, as we know, data is never simple, no matter what your program or what tool you choose. You expect to be able to sign the purchase order and have your metrics magically appear. It doesn’t work that way. You are going to have to put in lots more time if you want to get useful data.
Expectation #3.
You’ve spent all this money, so you expect your data will be 100% perfect from day one.
Reality #3:
Expect glitches in the data, especially to start. Computers, like children, take a while to learn the rules. For the first few months you are going to be sorting out search terms and media outlet specifics, and you and your computer(s) are going to make some mistakes.
At first you will probably define search terms that are far too broad, and you will be inundated with mentions that have nothing to do with your organization. This is what is called a learning opportunity. For instance, in the world of social media, you’ll be amazed at what people think your name is. Just ask the folks at CarMax (the car-selling company) and CarMex (the lip balm). Or SAS, the analytics software company (not the airline or the Semester at Sea folks). Then you narrow your search, and the next thing you know the product management types will be screaming because you missed something.
It’s probably going to take at least three months to get it right. You can shorten this cycle if you choose to work with a very clearly defined list of media outlets, specified by exact URLs. That means not, for instance, asking for the “Business Week blog”: you will discover that there are actually something like 170 of them. Which one, exactly, did you have in mind?
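The narrowing step can be sketched as a context-aware search pattern. The regex and sample mentions below are invented for illustration; any real monitoring query would need far more tuning.

```python
# Toy sketch of narrowing an ambiguous search term: require the brand
# name to appear near car-related context words, so "CarMax" mentions
# aren't swamped by "Carmex" lip-balm confusion. Patterns are invented.

import re

CONTEXT = r"(car|auto|vehicle|dealership|used)"
PATTERN = re.compile(rf"\bcarmax\b.{{0,60}}\b{CONTEXT}\b"
                     rf"|\b{CONTEXT}\b.{{0,60}}\bcarmax\b", re.IGNORECASE)

mentions = [
    "I bought a used car at CarMax last weekend",
    "my lips were chapped so I grabbed some carmax",  # likely lip-balm typo
    "CarMax reported strong vehicle sales this quarter",
]

relevant = [m for m in mentions if PATTERN.search(m)]
print(len(relevant))
```

Requiring context words trades recall for precision: the lip-balm typo drops out, but a genuine mention with no car-related word nearby would be missed too, which is the screaming-product-manager scenario above.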
Expectation #4.
You expect results right away.
Reality #4:
Expect that it will take time to get your ducks in a row.
Unless you know how to do a Vulcan mind meld, you will need to spend time with your measurement vendor’s staff to very clearly define the parameters of your program. You will need to define subjects, messages, campaigns, and any other details you will be tracking. If you give your vendor the wrong messages, incorrect competitors, or out-of-date product names, then they will not know it until you discover it and tell them. If it’s on your website, they will assume it to be current. And if it’s not on your website and you haven’t told them about it, how will they know it exists?
Expectation #5.
You expect your first report to explain everything.
Reality #5:
Your first report will be confusing. No matter how clear the program seems to be, your first report is going to throw you for a loop. There will be charts that use language you may not understand. There will be charts that look weird. Until you see it up there on the screen, you probably won’t understand what you’ve ordered. That is the nature of data. And remember, some of your early data will probably not be what you expected.
People will question your data and the methodology. Like any expectant mother, you will need to push (back) hard. It will hurt. Odds are, you are going to have to work at it to get what you want from your measurement. Change is painful, but in the end you’ll have a beautiful product. Congratulations.
###
(Thanks for the images to CatholicMom.com, BabyAnimalz.com, DesktopNexus, and People/Dreamworks.)
Katie Delahaye Paine is CEO of KDPaine & Partners, a company that delivers custom research to measure brand image, public relationships, and engagement. Katie Paine is a dynamic and experienced speaker on public relations and social media measurement. Click here for the schedule of Katie’s upcoming speaking engagements.
Hey, all you social media sentiment analysis and NLP geeks out there, here's a cool thing on its way: Truth checking software that flags written copy for possible falsehoods. Read "Bull beware: Truth goggles sniff out suspicious sentences in news" at Nieman Journalism Lab to find out more.
MIT grad student Dan Schultz is writing "...software that flags suspicious claims in news articles and helps readers determine their truthiness... His software is not designed to determine lies from truth on its own... [it's] designed to detect words and phrases that show up in PolitiFact’s database, relying on PolitiFact’s researchers for the truth-telling."
Schultz says, “The ultimate goal is to enable intelligent conversations about contentious issues.” True that. --WTP
###
Here’s another rave review for Measure What Matters, Katie Paine’s new book on public relations measurement and social media measurement. Reviewer Bob LaDrew over at For Immediate Release has a great many very nice things to say about Measure What Matters, including:
[Measure What Matters is] the best written, best argued book on social media measurement that I have read... A tremendously good book… it’s a treasure. AN ABSOLUTE DOOZY OF A READ. Makes great points… incredibly accessible and incredibly useful. I can’t recommend it strongly enough… The book is a joy to read for a language nerd.
Listen to the whole podcast here. To read excerpts from the book, visit the Measure What Matters blog. And you can click right here to order Measure What Matters right now at Amazon.
###
-- Bill Paarlberg, editor, Measure What Matters and The Measurement Standard
Jim Macnamara’s Measuring Up
Public relations measurement and social media measurement tools and services are a bit like beauty creams. They make bold claims to bring us admiration, perfect relationships, and even eternal youth and fame, but they are largely pseudo-scientific mumbo-jumbo and fail to deliver what they promise.
It seems that every day a new measurement tool or service arrives in the marketplace with much fanfare and a promise that it will provide the ultimate metric, one that will assure us of the enduring gratitude of management and a competitive edge over our peers. Often these products and services are described with new buzzwords, and some are hidden in a “black box” of software or the internal systems of service providers – a little bit like “anti-oxidant cream with hypotrichloride, X-enzyme, and diocryptin.”
A feature of ethical research is transparency and disclosure of the methodology used. This principle does clash with commercial requirements to protect IP. But researchers should at least divulge the broad outline and key steps of their methodology. If they don’t, they’re probably applying beauty cream to your communication campaigns.
Apart from the term “buzz” itself – which is an ill-defined and imprecise notion – some of the key buzzwords deployed in measurement are sentiment, engagement, impact, and influence. These are fine – if what is being measured is what it purports to be. However, a review of PR measurement marketing materials shows that many of these terms are used inappropriately and measure the wrong thing, or misrepresent what they actually measure.
Getting sentimental about sentiment
For instance, sentiment is defined in Webster’s Online Dictionary as “thought prompted by passion or feeling; a state of mind; feeling toward some person or thing; disposition prompting action or expression.” In simple terms, sentiment is an emotional or cognitive response in people’s heads. Therefore, it cannot be measured from a page of text using content analysis – and certainly not by automated machine measurement. Yet, frequently we are told that certain software systems and content analysis services measure sentiment.
“It’s bollocks,” as the British say.
Sentiment is human feelings – quite a different thing from the tone of media content. We have to measure what we say we are measuring, and then use the right method.
Celebrating engagement
Another key term that is used and abused is “engagement.” The difficulty with engagement is that it covers a wide range of responses, from the most basic human motor functions through a number of psychological levels such as involvement and elaboration, to various behaviours. These also span a wide range from simple actions such as clicking a mouse to becoming an evangelist for a product or cause.
Some measurement systems and services count click-throughs as engagement. In a very minor way, people clicking on a page or image is engagement. But they may hate it when they get there. Other measurement approaches count any form of response as engagement – such as commenting, rating, or entering contact details. But if the comment made or the rating given is negative, or contact details are given in order to complain, this is hardly a desirable form of engagement.
Serious social researchers know that qualitative information is very important; quantitative data tells only part of the story. They also know that higher levels of engagement matter more than counts of how many people clicked on your site, went to have a coffee, and forgot about it. Higher levels of engagement that tell us much more than clicks, views, or even downloads include positive comments, ratings and reviews, recommendations, requests for more information, subscriptions, trials, and so on. Measurement should clearly identify what type of engagement is evaluated and focus higher up the cognitive and behavioural engagement scale.
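One way to picture "focusing higher up the scale" is a weighted engagement index. The action weights and counts below are invented and would need validation against real organizational goals before anyone relied on them.

```python
# Toy illustration of weighting engagement by depth rather than
# counting every response equally. Weights and counts are invented.

ENGAGEMENT_WEIGHTS = {
    "click": 1,
    "comment": 5,
    "positive_review": 8,
    "recommendation": 10,
}

def engagement_index(counts):
    """Weighted sum of engagement actions, favoring deeper engagement."""
    return sum(ENGAGEMENT_WEIGHTS[action] * n for action, n in counts.items())

shallow = {"click": 1000}  # lots of clicks, nothing more
deep = {"click": 200, "comment": 80, "positive_review": 30,
        "recommendation": 50}

print(engagement_index(shallow))
print(engagement_index(deep))
```

Under these invented weights the smaller but deeper audience outscores the click-only one, which is the point Macnamara is making: what you count matters more than how much you count.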
The big Is of measurement – Impact and Influence
Impact and influence are in the Miss World league of measurement – the ultimate prize to claim in the PR beauty stakes. So they are among the hottest buzzwords around the measurement world. Like sentiment, impact and influence can be affective (i.e., emotional). However, importantly, impact and influence go beyond sentiment in that they can be cognitive or behavioural change caused among people – ideally the ones you want to impact and influence.
But most forms and elements of impact and influence exist in people’s minds. Even when behaviour occurs, proof of causation (whether your communication caused the behaviour or whether it was a result of something else) requires investigation of affective and/or cognitive processes. These processes, and therefore outcomes such as impact and influence, cannot be measured by counting, scaling, scoring media texts, or crunching Web analytics. Yet some measurement tools and services claim to demonstrate impact and/or influence by making large leaps in logic from basic quantitative data such as volumes of clicks, clusters on social network maps, or low-level audience engagement. While key hubs and nodes in networks and audience engagement can give useful indicators of potential influence, they are what they are.
As one social researcher bluntly says:
“If you wanna know what people are thinking and feeling, you gotta get off your butt and your computer and go out and ask them.”
This leads to one of the great ironies of PR measurement. Thousands of practitioners are looking for solutions, and hundreds of service and product providers are trying to develop the “next big thing.” Yet a suite of proven social science research methods has existed and been used productively for more than a century. Interviews, focus groups, surveys, and ethnography are well-established, reliable, and insightful research methods that are passed over by many practitioners in their techno-fetishism to find a new PR metric that measures everything (ideally with positive results).
That is not to say that innovation is not important. New developments in research such as netnography build on the foundations of ethnography (observation of behaviour) by adding online behavioural tracking. E-surveys make audience research easier, faster, and lower cost. Web 2.0 media offer many opportunities for listening, not just talking. Tracking and evaluating what people are thinking, feeling, saying, and doing is available for free every day if we listen. Web analytics do productively contribute to the stock of quantitative data for tracking audience reactions in the “blind spot” between distribution of outputs and outcomes. And content analysis, including some automated functions, helps us understand the vast amount of information that exists and continues to grow.
So computers and clever algorithms have an important role. But they are tools we use to help us understand human behaviour – and only that. No single metric or simplistic measure that purports to demonstrate sentiment, engagement, impact, or influence can explain human emotions, cognition, or behaviour.
Similar caution needs to be applied in examining other metrics in the PR measurement space such as media score, weighted media score, media index, and similar buzzwords. With many, you may as well multiply column inches of media coverage by your weight and divide by your age and call it your personal “PR power.”
I’ve tried lots of beauty creams in my time – and none of them worked.
###
Jim Macnamara, PhD, FPRIA, FAMI, CPM, FAMEC became Professor of Public Communication at the University of Technology Sydney in 2007 after a 30-year career working in journalism, public relations and media research, which culminated in selling the CARMA Asia Pacific franchise he founded to Media Monitors in 2006. He is the author of 12 books including The 21st Century Media (R)evolution: Emergent Communication Practices, published by Peter Lang, New York in 2010.
Should you analyze your conversations for sentiment? If so, should you use computers or humans to do the job?
by Katie Delahaye Paine
Whether they bill themselves as listening tools, measurement services, or media monitoring firms, there are now more than 150 companies pitching their social media measurement services to overloaded corporate communicators. Almost all offer sentiment analysis, the art or science of gleaning how people feel about your brand by reading what they have to say.
Should You Do Sentiment Analysis At All?
Yes, it’s the latest shiny new measurement toy, but sentiment analysis is not always possible, or even useful. Carefully consider these two questions before you decide:
1. Do people express any sentiment at all in discussing your brand?
You can't measure sentiment if it’s not there. For many sectors (marketing a B2B product, for instance) it may well be that the conversations out there are all factual discussions, with no sentiment to glean from them at all.
2. Do you have any direct interaction with customers?
Measuring sentiment is only useful if you can use your results. If you have no direct customer interaction, it will be very difficult to determine whether the expression of sentiment has any real impact on your business. Only if you are an online retailer, or in a field where people make reservations or register online, can you tie sentiment to customer behavior.
Should You Use Computers or Humans to Analyze Sentiment?
If and when you decide to go ahead with social media sentiment analysis, the biggest decision you have to make is whether you are going to use human or computer-automated analysis. Before you rush out and buy a sentiment analysis system, here are five questions that will help you decide if it’s computers or humans who will do the best job for you.
1. Do you receive more than 10,000 qualified mentions a month in social media?
That’s not counting spam, or copy generated by content farms, or mentions of a similar sounding brand (for instance, Carmax the car superstore vs. Carmex the lip balm). If your volume falls below this mark it may well be more expensive to program a computer than to use humans to accurately analyze your coverage. See the chart to the right based on our experience.
2. What level of accuracy is acceptable to your executive leadership?
Most automated sentiment analysis tools get sentiment right about 40-60% of the time. If that is good enough, you can use an automated system. If not, then to ensure a higher degree of accuracy you need, at a minimum, to have humans check random samples of your analysis, and re-analyze as necessary.
3. If you need a high degree of accuracy, do you have 20,000 qualified mentions?
Computer-coded accuracy increases with the number of mentions analyzed. It is easier to get a higher degree of accuracy if you have many mentions to work with. In our experience that number is about 20,000. See the chart to the right based on our experience.
4. What level of detail do you need from your sentiment analysis system?
If you need to track complex messaging, quotes, issues, positioning, or other esoteric details, computers will be complex to program and slow to get results from. Chances are you will need humans to get the job done with acceptable accuracy and reasonable speed.
5. Do you run numerous campaigns which will require different search terms, different message tracking, and different definitions of positive or neutral?
Again, computers can take weeks to reprogram, test, and fix. If you need fast turnaround on changes to your system, use a human.
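The rules of thumb above can be sketched as a simple decision helper. This is only an illustration: the function name and logic are my paraphrase of the five questions, and the thresholds (10,000 and 20,000 mentions, roughly 60% automated accuracy) are the article's own ballpark figures, not an official formula.

```python
def recommend_coding_method(monthly_mentions, required_accuracy,
                            needs_detail, frequent_campaign_changes):
    """Rough human-vs-automated sentiment coding decision, paraphrasing
    the five questions above. Thresholds are the article's rules of thumb."""
    if monthly_mentions < 10_000:
        return "human"       # automation setup likely costs more than hand-coding
    if needs_detail or frequent_campaign_changes:
        return "human"       # complex messaging or frequent reprogramming favors humans
    if required_accuracy > 0.60 and monthly_mentions < 20_000:
        return "human"       # not enough volume for automation to reach high accuracy
    return "automated"       # high volume, simple needs: automation is viable

print(recommend_coding_method(5_000, 0.50, False, False))    # low volume
print(recommend_coding_method(25_000, 0.80, False, False))   # high volume, no special needs
```

In practice the decision is rarely this mechanical, but walking your own numbers through the questions in this order is a reasonable first pass.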
###
(Thanks for the illustration: Personal Robot 02 by Franz Steiner. Portfolio here.)
Katie Delahaye Paine is CEO of KDPaine & Partners, a company that delivers custom research to measure brand image, public relationships, and engagement. Katie Paine is a dynamic and experienced speaker on public relations and social media measurement. Click here for the schedule of Katie’s upcoming speaking engagements.
You can't measure everything, so what do you measure?
One of the hardest parts of public relations measurement or social media measurement is to determine what you should measure. Organizations often communicate with several different customer or stakeholder groups, and with several different messages or goals in mind. And sometimes office politics is as important as the data. So how do you determine what really matters to your business or organization?
This checklist will help you do that. It guides you through preliminary information gathering, meetings, and survey research to help you understand what is most important to your company or organization, and what is most important to your audience(s). What is most important to measure will be what drives your audience(s) or customers to act. So work through the following to figure out what matters, and what the best way is to measure it.
The 4 “What Matters?” Questions You Need to Answer
Remember, the point of the following research, and the experience you gain in planning and carrying it out, is to help you to answer these four questions:
1. What is most important to your company or organization?
2. What is most important to your audience(s)?
3. What drives your audience(s) or customers to act?
4. What is the best way to measure it?
____________________________________________________________
And now a word from our sponsor: Learn more about how to measure what matters with Katie Delahaye Paine’s new book Measure What Matters.
“Fantastic primer on measuring social media from @kdpaine http://amzn.to/fFdEmj Fail to read at your own peril”
--Bob Garfield, co-host of NPR’s On The Media
Order Measure What Matters now at Amazon.com
____________________________________________________________
The How-to-Decide-What-to-Measure Checklist
1. Learn the Lay of the Land
__ Read up on your market, your marketplace, and really get to know the competition.
__ Check in with marketing, sales, business development, market research, and the customer intelligence group to get a clear and current list of market forces, as well as key customers and competitors.
__ Make a list of specific influencers including people, events, and issues.
__ Work through “10 Questions Every Communications Professional Must Be Able to Answer” and bring the ones you can't answer to the meeting below.
2. Get Everyone On the Same Page
__ Set up a meeting with everyone you will work with on your measurement project, as well as everyone you will report to.
__ Set the agenda for the meeting, including:
__ At the meeting, achieve consensus on the five points above.
__ Summarize the meeting in a document that includes the Key Performance Indicators that you will be reporting on and the Dashboard of charts or tables that you will need to present.
__ Get sign off on those KPIs and the Dashboard.
3. Prepare to Do Preliminary Research
(Not all of the following will necessarily pertain to your particular situation.)
__ Based on the KPIs, make a list of the data you will need to report on.
__ Select a web analytics and/or CRM tool.
__ Create one or more unique URLs and landing pages so you can directly tie web activity to results.
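For the unique-URL step above, one common approach is to append campaign tags (such as Google Analytics' UTM parameters) to each landing-page link, so every visit can be attributed to a specific outreach effort. A minimal sketch using only Python's standard library; the base URL and campaign names are hypothetical:

```python
from urllib.parse import urlencode

def tagged_url(base_url, source, medium, campaign):
    """Append standard UTM campaign parameters to a landing-page URL
    so web analytics can attribute visits to a specific PR effort."""
    params = urlencode({
        "utm_source": source,      # where the link is posted, e.g. "twitter"
        "utm_medium": medium,      # channel type, e.g. "social"
        "utm_campaign": campaign,  # your campaign name
    })
    return f"{base_url}?{params}"

print(tagged_url("https://example.com/landing", "twitter", "social", "spring_launch"))
# → https://example.com/landing?utm_source=twitter&utm_medium=social&utm_campaign=spring_launch
```

Generating these links programmatically (rather than by hand) keeps the tagging consistent, which matters when you later try to roll results up by campaign.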
__ Make a list of the engagement data you will need, including some of the following:
__ Talk to whoever within your organization manages the website and collects web data to determine what data you already have or can collect.
__ Decide if you need any additional tools.
__ Create an RFP for web data collection and analysis.
__ Select a survey tool.
__ Make a list of audiences you need to survey.
__ Determine if there is a list (of names) available in-house. Do you need to purchase a list?
__ Make a list of any perception data you will need, including:
__ Draft a list of questions to which you need answers.
__ Identify a professional expert, academic, or internal or external partner to create and test your survey questionnaire(s).
__ Provide your list of questions to the expert.
__ Review the proposed questionnaire(s).
__ Test the proposed questionnaire(s).
4. Do Preliminary Research
__ Field the survey.
__ Review the crosstabs to make sure you have the data you need.
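A crosstab is simply a count of responses broken out by two variables at once, such as audience segment against a survey answer. As an illustration with hypothetical survey records (the field names and data are invented for the example), a stdlib-only sketch:

```python
from collections import Counter

def crosstab(rows):
    """Count (audience, answer) pairs from survey records -- a minimal
    cross-tabulation over hypothetical awareness data."""
    return Counter((r["audience"], r["aware"]) for r in rows)

# Hypothetical responses: audience segment vs. brand awareness
responses = [
    {"audience": "customer", "aware": "yes"},
    {"audience": "customer", "aware": "yes"},
    {"audience": "customer", "aware": "no"},
    {"audience": "prospect", "aware": "no"},
]
table = crosstab(responses)
print(table[("customer", "yes")])  # → 2
print(table[("prospect", "no")])   # → 1
```

Reviewing the crosstabs at this stage is your chance to catch a missing breakout (say, awareness by audience) before the survey tool's reporting window closes.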
5. Analyze & Report Results
__ Analyze the results and draw conclusions.
__ Put all relevant data into a KPI table.
__ Look for significant failures: Where did a program not deliver?
__ Look for exceptional successes.
__ Drill down into the data to determine cause and effect.
__ Pull most relevant charts and data into a PowerPoint presentation.
__ Report results and make recommendations.
6. Decide What Matters
Use your results and the experience of this preliminary research to answer the 4 “What Matters?” questions in the introduction above.
###
Anne Holland’s Which Test Won received The Measurement Standard’s Best New Measurement Product of the Year Award for 2010. Here is a perfect example of why Which Test Won’s A/B testing is such a powerful measurement tool.
Would you have suspected that adding the phrase “Join 14,752 others and get free updates!” would have such a strong negative effect on signups? That seven-word social-proof addition to an email signup box produced a strong counterintuitive result that faked out many experienced marketers.
Read more at Which Test Won’s DIYThemes Blog Test.
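Before acting on an A/B result like this one, it is worth checking that the signup difference is larger than chance. A minimal two-proportion z-test sketch using only the standard library; the visitor and signup counts below are hypothetical, not Which Test Won's actual data:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates,
    using the pooled standard error. |z| > 1.96 is roughly significant
    at the 5% level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: control signup box vs. "social proof" variant
z = two_proportion_z(300, 5000, 240, 5000)
print(round(z, 2))  # → 2.65, i.e. significant at the 5% level
```

With real test data you would plug in each variant's visitors and conversions; if |z| stays below about 1.96, the "winner" may just be noise.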
--Bill Paarlberg, Editor, The Measurement Standard
The Measurement Standard is a publication of KDPaine & Partners, a company that delivers custom research to measure brand image, public relationships, and engagement.
Can’t See the Forest for the Tweets?
Other articles about influence in this issue of The Measurement Standard include:
- 15 Years Measuring Influence: Lessons Learned While Herding Chickens
- Your Guide to Influence: Tools, Techniques, and Recent Conversations
And, if you just want to learn how to determine the top 100 most important news sources for your business, here you go: “How to Determine Which Influencers Matter to Your Business.”
by Katie Delahaye Paine
With all the recent talk about measuring influence in this way or that, sometimes people aren’t seeing the forest for the tweets. No matter how you measure it, here are five things to keep in mind about Influence:
#1: Influence is more than social media.
Read Jon Berry and Ed Keller’s The Influentials and learn about how and what really influences behavior. It’s a wonderful data-driven analysis of who influences what.
#2: Influence is not reach.
The leader in this misnaming contest is Klout, which says it wants to be the Nielsen of new media. Essentially, what they are calling “True Reach” is their version of impressions—and just as worthless.
I applaud their ambition, but that assumes that their magic number will tell everyone how “important” a particular outlet is. The problem is that any given outlet may be important to me, but impotent in a different marketplace. And an outlet with anemic Klout scores may be incredibly influential in a niche marketplace. (Never mind the problems Klout has with bots that game the system; see “Klout Is Broken.”) For a list of recent posts discussing Klout, see Your Guide to Influence: Tools, Techniques, and Recent Conversations.
#3: Behind all influence is action.
Influence, according to dictionary.com, is “the capacity or power of persons or things to be a compelling force on or produce effects on the actions, behavior, opinions, etc., of others.”
Please note that the operative words are “produce effects.” In other words, Justin Bieber may have millions of fans on Twitter, but it’s a pretty good bet that he’s never going to produce any effect on my business. The point is that influence is not reach, it’s the ability to cause action: the power to produce an effect or an outcome.
So if you are a defense contractor and there are only 200 people on the planet that can legally buy your product, chances are it’s not the number of followers you have on Twitter that matters. What matters is that you are in some way, shape, or form reaching those 200 people and the 2000 or so people that influence them.
#4: Behind every influencer is a real live human.
Influence is not a list. It can’t be used like those old Bacon’s directories or MediaMap lists. Influence implies a personal, persuasive relationship between the individual and the audience being influenced.
No IR or AR professional would dream of relying on mass emails to explain a new strategy to a financial or industry analyst. Today’s influentials are no different – in fact they are frequently the same people you used to try to influence when they were in their old media jobs.
So why suddenly, just because technology enables it, do people think they can substitute electronic relationships for personal ones? I’m not saying you necessarily have to have person-to-person or voice contact, but at the very least you need to read what the person has written or posted, and understand what gets them excited.
If you rely on keywords alone, you will assume that, just because this blog is called The Measurement Standard, I’m going to be interested in your new, more accurate mechanical probing device. And you will get marked as spam and derided forever on the Bad Pitch Blog.
#5: Influence is not the Holy Grail. Not even the Golden Goose.
Influence is not some magical metric that will help you measure all your results. It is not going to get you a raise or a gold star or a place on The Measurement Standard’s Honor Roll of Measurement Mavens. If defined and used appropriately it can help narrow the amount of chatter that you need to be paying attention to and help you focus your outreach efforts. For more on this, read Don Bartholomew’s excellent post Measuring Influence in Social Media.
The most popular series of articles in The Measurement Standard’s 100-issue history has been Katie Paine’s Measurement Checklists. They are the quick-and-handy step-by-step way to plan and organize your next public relations measurement project.
Here are all eight Checklists in the series so far:
Please let us know how we can improve them.
--Bill Paarlberg, Editor, The Measurement Standard
Nominations for The Measurement Standard’s Best New Measurement Product of 2010 Award will close soon. We welcome your suggestions.
Have you used a new measurement tool or product this year that you find particularly effective or valuable? A book, an online analysis and reporting tool, a particular automated monitoring service? Email your nominations to Katie Paine.
Happy Holidays!
The Measurement Standard is now accepting nominations for its 2010 Best New Measurement Product of the Year Award.
The rules are simple: The product has to be used for public relations or social media measurement, be new on the market in 2010, and available for purchase by the public. It has to be actual, not theoretical, and have the references to prove it.
Products will be judged on the following criteria:
Send your nominations to The Editor, The Measurement Standard. Deadline for nominations is December 10th.
Check out Sam Crocker's post “Tools to Predict and Monitor Competitor Web Traffic” at The Daily SEO Blog. Includes reviews of: Alexa, Compete, comScore, Google Ad Planner, Google Insights, Google Trends for Websites, HitWise, Quantcast, and SEMrush.
(KDPaine & Partners uses Compete for most of its clients.)
--Bill Paarlberg, Editor, The Measurement Standard
Thanks to whirled musings for the illustration.
Proper measurement requires time and expertise, not just cheap technology.
This month’s Measurement Menace Award goes to all those companies who are selling the dream of cheap measurement. By implying that measurement requires neither know-how nor effort, these companies do our industry a great disservice.
The reason that auditors and specialists are hired to measure financial performance is that it requires expertise and industry knowledge to glean insight from the data. I don’t know a lot of CEOs who would trust a $9-a-month accounting package to run their companies. So why do PR people jump at new tools like Swix, Social Mention, Reputation Defender, and others that promise measurement for less than the price of lunch at McDonald’s?
The reality is that good measurement requires brains and expertise as well as data. And it costs money. If you want highly reliable and accurate reporting about anything—PR, social media, marketing, or your financials—you need to make an investment of both time and money. I’m not suggesting you have to break the bank, but realistically, you should be spending 10% of your budget to figure out whether the other 90% is working.
The cheap-data-equals-measurement illusion means that too often proper measurement is not a priority. Or just not understood. Too many managers relegate metrics to low-level staffers who produce pretty reports on a regular basis. Then, 48 hours before the quarterly or annual Leadership Team meeting, they panic because the reports aren't meaningful to the CEO.
So next time cheapnumbers.com offers you the latest in quick and easy measurement solutions, recognize it for the snake oil that it probably is. --KDP