Recently I wrote to Jim Grunig to clarify a reference to his two-way symmetrical model of public relations. I had seen several different references to it and was confused about which to cite. In his usual genial and comprehensive manner he took the time to write back with an extensive bibliography. So I thought I'd pass it along.
As many of you will recall from PR 101, the two-way symmetrical model is part of the Grunig four-model theory of public relations practice. Since its introduction in 1984 (see below) the four-model theory of how PR has, does, and should work has grown in acceptance to become the basis of the practice, measurement, and ethics of modern PR. It is a tribute to the robustness of this model that social media fits right in. (Depending on how social media is used, as Jim points out: See "Jim Grunig on social media, 'the latest fad in public relations.'")
If you are interested in learning more, a quick google will reveal numerous references. For an introduction, see Bill Sledzik's blog post "The ’4 Models’ of public relations practice: How far have you evolved?" It includes an interesting comment by Jim Grunig.
Here is the bibliography that Jim sent me:
The most recent material we have published on the two-way symmetrical model is Chapter 8, titled "Models of Public Relations," in this book:
Grunig, L. A., Grunig, J. E., & Dozier, D. M. (2002). Excellent public relations and effective organizations: A study of communication management in three countries. Mahwah, NJ: Lawrence Erlbaum Associates.
Previous to that chapter, you can find summaries of research on the models here:
Grunig, J. E. (2001). Two-way symmetrical public relations: Past, present, and future. In R. L. Heath (Ed.), Handbook of public relations (pp. 11-30). Thousand Oaks, CA: Sage.
and earlier in:
Grunig, J. E., & Grunig, L. A. (1992). Models of public relations and communication. In J. E. Grunig (Ed.), Excellence in public relations and communication management (pp. 285-326). Hillsdale, NJ: Lawrence Erlbaum Associates.
and even earlier in:
Grunig, J. E., & Grunig, L. A. (1989). Toward a theory of the public relations behavior of organizations: Review of a program of research. Public Relations Research Annual, 1, 27-66.
The first article describing research on the models of public relations was:
Grunig, J. E. (1984). Organizations, environments, and models of public relations. Public Relations Research & Education, 1(1), 6-29.
although the first mention of the models of public relations and the symmetrical model was in:
Grunig, J. E. & Hunt, T. (1984). Managing public relations. New York: Holt, Rinehart & Winston.
Thanks very much, Jim!
--Bill Paarlberg, Editor, The Measurement Standard
This is the latest Checklist in our series of tools for public relations measurement and social media measurement. You may also be interested in Katie Delahaye Paine's Social Media Measurement Checklist, and Katie Delahaye Paine’s Product Launch Measurement Checklist.
1. ___ Get consensus on the big picture issues:
a. ___ Ask yourself if you really want the truth, or are you just trying to justify your existence? Are you, and your boss, really interested in reality, or is this just an exercise in budget justification?
b. ___ List the audiences that will see and use the data.
c. ___ List the objectives for the research.
d. ___ Make sure those objectives are in line with corporate and divisional objectives.
2. ___ Inventory existing research:
a. ___ Find out who is already doing what for research in your organization. If it is survey research, is it reusable? Is there leverage in keeping questions consistent?
b. ___ Find out if your market research department has a reliable track record with particular vendors. Do they have established accuracy standards that you can adopt?
3. ___ Do your background homework:
a. ___ Review Dr. Walter K. Lindenmann’s “Guidelines and Standards for Measuring the Effectiveness of PR Programs and Activities” available at the IPR website.
b. ___ Review The CASRO Code of Standards and Ethics for Survey Research.
c. ___ Review The Measurement Guidelines from IAB.
4. ___ Determine the universe upon which you are doing research:
a. ___ Will you investigate a defined media set, or “everything?” (You won’t ever get everything, so realistically, you’ll get about 85%.)
b. ___ Determine if you have a defined universe that matches your target audiences. Will it require sampling?
c. ___ Test to make sure you are getting a representative sample.
d. ___ List the variables that will be included. Get agreement from your boss and your boss’s boss on those variables.
5. ___ Determine who will do the work:
a. ___ If in house, then:
i. ___ Write up your methodology.
ii. ___ Test your methodology.
iii. ___ Refine your methodology until you achieve a minimum intercoder reliability score of 88%. Read about intercoder reliability scores at All Academic Research.
iv. ___ Decide if sampling error limits will be shown (if they can be computed).
v. ___ Determine how projectable the research findings will be to the total universe or population under study.
vi. ___ Analyze your results, using correlations wherever possible.
b. ___ If you’re outsourcing research:
i. ___ Determine who will actually be supervising and/or carrying out the project.
ii. ___ Investigate their backgrounds and experience levels.
iii. ___ Determine who will actually be doing the field work. If the assignment includes media content analysis, who actually will be reading the clips or viewing and/or listening to the broadcast video/audio tapes? If the assignments involve focus groups, who will be moderating the sessions? If the study involves conducting interviews, who will be doing those and how will they be trained, briefed, and monitored?
iv. ___ Confirm that quality control mechanisms have been built into the study to ensure that all "readers," "moderators," and "interviewers" adhere to the research design and study parameters.
v. ___ Review the written set of instructions and guidelines for the "readers," the "moderators," and the "interviewers."
vi. ___ If the data are weighted, insist upon examining the basis for those weights (no black boxes allowed).
vii. ___ Determine if sampling error limits will be shown (if they can be computed).
viii. ___ Determine how projectable the research findings will be to the total universe or population under study.
6. ___ Review the results:
a. ___ Do a “does this make sense?” test. For instance, if you are fourth in the marketplace and the results place you at number one, ask why. If a competitor has a major product launch but its share of conversation declines, what’s up with that?
b. ___ Ask “so what” three times on every chart. Data is only meaningful if it tells you what to do next. Figure out the “So whats?” and the “What are the next steps?”
c. ___ Dig into the negatives first -- what doesn’t make you look good is much more educational than good news that you expect.
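Items 5.a.iv and 5.b.vii above ask whether sampling error limits can be computed. For a simple random sample they can, with one line of arithmetic. Here is a minimal Python sketch; the 95% z-value, the 0.5 worst-case proportion, and the 400-person sample are illustrative assumptions, not numbers from the checklist:

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a
    simple random sample. proportion=0.5 is the most conservative
    (widest) case, the usual default when planning a survey."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A 400-person sample gives roughly a +/- 5-point margin at 95% confidence.
print(round(margin_of_error(400) * 100, 1))  # 4.9
```

Note the square root: to cut the margin in half you need four times the sample, which is why "just survey more people" gets expensive fast.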
by Katie Delahaye Paine
Having recently attended a number of measurement presentations and a variety of conferences, I’m now convinced that most marketers and communications professionals are cheerfully going through life with blinders on. Those blinders are made out of a flimsy gauze of questionable accuracy, incomplete variables, and general apathy. Today’s marketers have taken “fuzzy math” to an all new level.
The most egregious example of today's inaccurate public relations and social media measurement is the use of free automated sentiment analysis. The vast majority of sentiment analysis tools get it right about 45% of the time, which means that if you use those “measurement” tools, your results are at least half wrong. And if this were accounting, you'd be in jail. (In the interest of transparency and full disclosure: I work with SAS, which has a sentiment analysis tool that is 90+% accurate and is tested against human coders.)
No one seems to mind this sloppy work because it’s “just PR” or “just marketing.” Well, I'm here to tell you it's your job and our industry, and our credibility is on the line. The only way we in PR and communications can be credible is to at least attempt to base our decisions on reliable, complete, and accurate data. Which is why I created Katie Delahaye Paine's Accuracy Checklist for Public Relations Measurement and Social Media Measurement. Go get your free copy right now.
There are four areas where I think most of the industry gets it wrong:
1. Spiders Aren't Smart Enough to Pick Your Content
Back in the old days, I’d have a team of people physically looking at publications and selecting only those articles that matched the client's criteria. In other words, the content was actually about the company and/or the product and had some bearing on a customer’s purchasing decisions. Today's electronic searches are a big help, but we still need human reviewers to check up on things.
Unfortunately, most spiders today just aren’t very smart. They aren't smart enough, for instance, to determine that an article that talks about a tax bill to which “small business objects” has nothing to do with the database company Business Objects. And they can’t tell the difference between a spike in coverage because of good PR for “Visa, a sponsor of the Olympics” and “I need a visa to go to the Olympics.”
In some cases up to 90% of what we collect with an electronic search can be irrelevant. You need a very sophisticated Boolean search string to even get close to accurate results, and those still need to be checked by humans. Or else you end up with “I met a really sassy intelligent chick in the Business School” when you search for “SAS business intelligence.”
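The kind of human-checked Boolean screening described above can be roughed out in code. This is a hedged sketch, not any vendor's actual query syntax; the context words are invented for illustration. The idea is to keep a mention only when the brand term and a product-context word co-occur, which knocks out false positives like the "sassy intelligent chick" example:

```python
import re

# Context words suggesting a mention is about the software company,
# not an incidental match on the letters "SAS." Purely illustrative.
CONTEXT = re.compile(r"\b(business intelligence|analytics|software|data)\b", re.I)
BRAND = re.compile(r"\bSAS\b")

def is_relevant(text):
    """Keep a mention only if the brand term and a context word co-occur."""
    return bool(BRAND.search(text)) and bool(CONTEXT.search(text))

mentions = [
    "SAS business intelligence tools lead the analytics market.",
    "I met a really sassy intelligent chick in the Business School.",
]
print([is_relevant(m) for m in mentions])  # [True, False]
```

Even a filter like this still needs human spot-checks, for exactly the reasons above: it has no idea what the words mean, only whether they co-occur.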
2. Commercial Services Omit Results
Then there’s the issue of omission. The average content provider picks up just a fraction of actual Tweets and an even smaller selection of Facebook threads. If they say they can do better, do your own search on search.twitter.com or just compare with your average Google search. In 5 out of 6 systems we tested, Google and Twitter outperformed the commercial services.
3. Accuracy of Content Analysis
After you’ve screened out all the crap and have a solid database of mentions, you then need a way to accurately analyze that content. As I said above, the solution for everyone today seems to be automated sentiment analysis. There’s a good reason it is so popular: Wouldn’t it be wonderful to simply hit a few buttons to determine what customers actually thought about your brand? Well, dream on. Most sentiment analysis doesn’t even come close.
First of all, most sentiment analysis systems get it right about 50% of the time, and you get what you pay for. A cheap system will get it wrong even more often. You need a sophisticated system supplemented with human coders to get anywhere close to accurate results.
Secondly, no amount of automated sentiment analysis can tell you what people think. You either need to ask them their thoughts, or hook them up to a sophisticated brain scanner that will ferret out the information. What sentiment analysis does is report back to you the words associated with your brand, and how people are discussing your product or services.
Lots of times computers can misinterpret those words. So if I say I found a wicked cool restaurant, the computer has no way of knowing that I’m from New England and that’s a compliment. Worse still if I mentioned that I saw the play Wicked after eating at that wicked cool dining spot, it would perhaps suggest burning the restaurant and all its occupants at the stake. Most computers don’t understand the irony and sarcasm of today’s conversations.
So what’s an acceptable level of accuracy? If you can get computers to agree with human coders 80% of the time, you’re doing really well.
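That 80% figure is a simple percent-agreement check: have the machine and a human code the same sample of mentions, then count the matches. A minimal sketch, with made-up sentiment labels:

```python
def percent_agreement(machine_labels, human_labels):
    """Share of items where the automated coder matches the human coder."""
    matches = sum(m == h for m, h in zip(machine_labels, human_labels))
    return matches / len(human_labels)

# Ten invented codings of the same ten mentions.
machine = ["pos", "neg", "pos", "neu", "neg", "pos", "neu", "neg", "pos", "pos"]
human   = ["pos", "neg", "neu", "neu", "neg", "pos", "neu", "pos", "pos", "pos"]
print(percent_agreement(machine, human))  # 0.8
```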
4. Incomplete Assessment of Variables
The biggest blinders of all are the assumptions we all make of what “causes” something to happen. So you put a whole lot of effort and energy into a program and you expect web traffic, or registrations, or whatever to increase. And many times it does. But not always. And most of the time you don’t know why because you’ve left out some key variable in your analysis.
Take for instance some work my company, KDPaine & Partners, did for a major national charity. After they did a fabulous PR job and saw overall exposure triple, we surveyed the national audience and found zero increase in awareness. Some would conclude that the entire PR program was a colossal failure. Except that the target audience wasn’t “everyone in America,” it was people with a connection to the military. And when we narrowed our analysis to that target audience, awareness and relationship scores went up, as did likelihood to contribute and volunteer.
We had enough foresight to include a question about military affiliation in the national survey. But if we hadn’t, we’d never have known that the program was successful only among those groups who were actively being targeted.
AT&T and Bruce Jeffries-Fox have done a great study on the importance of the interaction of variables, finding that PR and certain key messages actually impact sales and loyalty far more than previously thought.
Frequently it’s the presence (or absence) of a key message that has the greatest impact on consumer behavior. But if you’re not tracking your key messages, you have no way of knowing which message is driving behavior.
And just as frequently, it is the presence of conversations about the competition that drives behavior concerning the organization you are interested in. Again, if you’re not tracking the competition, you’ve left out a key variable that you will need if you want your research to be accurate.
The Paine of Measurement
You'd Think We Would Have Learned!
Now that the truth has come out about the bank failures and mortgage market collapse, you’d think that people would be learning that bad data leads to bad conclusions, which lead to disasters. Think about it. The signs were all there, albeit a bit obfuscated by the enthusiasm of the moment. There’s a reason those loans were called “sub-prime”: most of them lacked solid finances behind them. So they were bundled up into pretty packages where maybe they wouldn’t be noticed. That didn’t take away the bad data; it just tried to hide it with more volume.
Which is exactly what marketers have been doing for years. When automated sentiment tools first came out we deplored their inaccuracy but fell for the line: “It’s good enough for marketing purposes.” So what if over half of the data was wrong? It didn’t matter: “We’ll make it up in volume.”
So they added millions of irrelevant data points to marketing mix models and ROI calculations. But it was cheap and easy. And, oh yeah, earned media didn’t matter that much anyway. Or so they thought until Procter & Gamble's marketing mix models showed that it did -- and millions of marketing dollars started shifting out of advertising and into PR.
And then web analytics came along, and the number of numbers grew and the volume got even bigger. Which made it even easier to ignore significant data points. Those in charge were so excited about the big numbers they were seeing that they ignored significant variables (like the impact of a PR campaign, or a Twitstorm, or a viral video). They assumed all those factors were ‘insignificant’ in the big scheme of things because clearly all sales were driven by online ads, as measured by hits.
The reality is that they weren’t, and they aren’t. Now that marketers can determine whether people actually bought something or behaved in a way that companies wanted them to, it’s clear that “Hits” really does stand for How Idiots Track Success.
The bottom line here, folks, is that accurate data is available. Yes, it might be a bit more expensive, but what’s the price of inaccuracy? The cost of good data is minor compared to the price of lost credibility and bad decisions. Just ask Bear Stearns.
HubSpot has put together a list of "36 Awesome Social Media Blogs Everyone Should Read," and The Measurement Standard just barely squeaked in at #35. We are pleased and proud to be chosen. Logrolling linkfest aside, there are lots and lots of top notch people and thoughts on the list so click on over and catch up on all things social media.
It is possible, however, that HubSpot may have confused us with Katie Paine's PR Measurement Blog. Her blog gets about twice the traffic we do here. Katie is the measurement guru who is CEO of the measurement company KDPaine & Partners. She will be happy to measure any sort of social media or public relations program you throw at her. Katie is the publisher of The Measurement Standard, and also writes most of the articles that appear here. --Bill Paarlberg, Editor
Jenny Schade's Making It Count
This 3-step process guarantees success on every measurement initiative.
The client on the other end of the phone was on fire with enthusiasm during our first phone call.
“We’ve developed visual icons that represent each of the eight positioning directions we’re considering. So we need to tell the consumers that first they need to select the positioning direction and after that, they’ll select a sub-brand name. Or do you think we should start with the sub-brand names?”
“Time out!” I interrupted. “Before we get any farther here, let’s talk about the most important question. What are you trying to determine with these focus groups?”
“Oh yeah,” he replied sheepishly. “I suppose we should address that.” He took a deep breath. “Well, we’ve developed this new product and everyone’s all excited about it, but we’re struggling with the best way to describe it…”
Right there is one of the most important discussions to have about the entire project. It always begins with some variation of that critical question: “What are you trying to do here?” Accurate measurement begins right at the start of every initiative – before you have begun to do any research.
Accuracy in research involves ensuring that you are measuring what you need to examine in order to answer your questions. That involves taking the time at the beginning of the project to get agreement on the reason for the research. With that agreement firmly in place, you are much more likely to be successful.
So how do you set yourself up for success through accuracy on every measurement initiative? Follow Jenny Schade’s Three Step Process for Accurate Measurement and you will be poised for a home run every time:
1. Establish objectives
Whether you’re an internal or external research consultant, it’s absolutely critical to get a comprehensive understanding of the big picture behind what the client wants and needs to accomplish. Why are they here? Why now? Why are they talking to you?
Here’s a question I often ask clients when setting objectives: “What do you want to walk away from this research knowing?” This really gets to the heart of a research project.
Following are sample objectives from a recent study we conducted for a consumer packaged goods company:
Notice that the objectives never mention how we will approach the research study. There’s no mention of doing focus groups or how many focus groups we will do. That comes later, when we discuss methodology.
Establishing objectives for a research study is about understanding what your internal or external client needs to learn in order to do their jobs. Period.
2. Set clear measures of success
Measures of success are different from objectives. While objectives establish what you are seeking to accomplish, measures of success generate agreement on how all involved will know your objectives have been achieved. In other words, how will you measure your measurement?
In my experience, this step is most often overlooked and it’s an expensive oversight. Internal or external research consultants who haven’t established how to know if they’ve met their objectives will be at the mercy of popularity contests when budget cuts roll around.
Here’s our suggestion for beginning the process of setting measures of success in any situation: When embarking upon a project, ask your boss or your client or whoever is ultimately engaging your services, “How will we know we’ve been successful?” Note the emphasis on “we.”
In order to design effective measurement criteria, it’s critical that everyone involved in the initiative – whether actually executing the work or signing the check that pays for it – has a clear idea of what success looks like. The implementer may have a very different idea of victory than the person ultimately responsible for the initiative. It’s important to achieve consensus before getting underway.
Following up on our previous example when we discussed establishing objectives, here are the measures of success we agreed upon with our client for the cottage cheese project:
3. Methodology (or “There’s more than one way to skin a cat!”)
Are you surprised that methodology is last on my list? In my experience, it’s quite rare for there to be only one way to do effective market research. The number of focus groups can vary by location or respondent composition. Surveys can differ in sample size and approach (e.g., online or hard copy). That’s why the most valuable service of a good internal or external consultant isn’t methodology but developing a comprehensive understanding of the client’s situation. What’s going on? Why do they need you?
I’ve actually found that clients who hear of a new methodology will approach me about using it, because of my deep understanding of their business needs and our record of success. If you clearly analyze your client’s situation, proceeding with a research initiative will be far less dependent upon budget or the particular methodology you have recommended.
In the cottage cheese example we’ve been discussing, we started out discussing an online Awareness, Trial, and Segmentation survey with a client who emphasized having a very small research budget. We ended up doing two focus groups, followed by an expanded version of the study we had originally proposed. Our client told me he chose to work with us because “You really get us.” That level of insight is what led to the expanded engagement.
So take time out to establish an accurate understanding of your needs right at the start of every initiative -- before you get into the details. That level of accuracy will pay off in every way!
Jenny Schade is president of JRS Consulting, Inc., a firm that helps organizations build leading brands and efficiently attract and motivate employees and customers. Subscribe to the free JRS newsletter.
© JRS Consulting, Inc. 2010
Can This Reputation Be Saved?
Short answer: Not a chance.
Before you even ask, no I’m not going to hold forth on the prospects for BP’s reputation. It seems to go up and down depending on the weather, the oil slick, and who’s talking into the microphone at the time.
On the other hand Goldman Sachs’ reputation is so permanently tarnished that I doubt if it is salvageable in my lifetime. (Although, I have no doubt that by the time my grand-nephew, who is six, enters his “investment years,” Goldman Sachs will once again be trusted.)
The problem, at least for the next couple of decades, is that no amount of damage control or relationship building will eradicate the association of the Goldman Sachs brand with our current economic meltdown. Thanks to their unapologetic performance at congressional hearings, the SEC criminal investigation, and non-stop media attention, they are, in the mind of the public, at least partly responsible for the recession and the subsequent job losses and mortgage foreclosures.
Goldman Sachs has amassed all three necessary ingredients for permanent reputation damage:
Only after their stock fell by 23% and shareholders got antsy did they launch a PR charm offensive. And it has fallen on very deaf ears. It is a classic case of too little, too late. No amount of spin can reverse the effects of decades of bad behavior. --KDP
Geek of the Week goes to the folks who attended my sessions at the recent eMetrics San Jose. Years ago, the people at eMetrics were all essentially web metrics geeks, focusing on what people did after visiting their company’s websites. And mostly ignoring all the other stuff going on in marketing and PR.
Naturally, I challenged them on why they didn’t take into account social and earned media when doing their calculations. Three years ago, nary a one had even thought about it.
This year, however, attendees flocked to our sessions on measuring social media, and many of them were in fact already incorporating social media conversations into their calculations. So, to all of geekdom who gathered at eMetrics in San Jose last week, welcome to our world! And congratulations for making all of marketing measurement more accurate. --KDP
Communication Agencies Association New Zealand
First let me say that there are lots of really good measurement folks in New Zealand, my former partner Mary McNamara being one of them. But this travesty makes you wonder about the overall state of marketing and PR in that country.
I'm horrified. And, if you read the comments on the blog post in the link above, you'll see that I’m not the only one. Their position seems to be: Never mind if it isn’t accurate, it’s “something,” and therefore should be adopted.
This is like saying that because everyone has some rice in their cupboard, all people need to eat is rice -- even if the rice is rotten, and regardless of the nutritional benefits or long-term consequences of that declaration.
Let's hope that smarter voices prevail in this battle. --KDP
Vico Software is a small (less than $10 million), 2.5-year-old B2B construction software company. When I first met Holly Allison, while I was moderating a panel at SMCBoston, she apologized because she had only prepared for our panel on the trip out to Framingham. She proceeded to give an amazing presentation, and I can’t imagine what would have happened had she had more time to prepare!
My first question to the panel was, “What are you currently doing to measure your social media efforts?” Her response was (I’ve paraphrased): “Our initial thought was that maybe we could start a community to help do tech support for our product. Then we realized that we could drive traffic to our website. So our most recent KPIs show that 16% of visitors to our website come from our social media activities. Of those, 27% turn into a qualified lead, 15% download something, and 8% turn into customers. We calculate ROI by looking at revenue minus my salary times hours spent on social media.”
For once I was speechless – I’m not sure I’ve ever seen a better measurement plan!
Just to elaborate, they use Salesforce to keep track of leads, HubSpot to track web activity, and, better yet, have such solid numbers that they can actually project how much they will need to increase traffic, based on sales forecasts and revenue projections. They’re even tracking leads from YouTube and LinkedIn. According to Holly: “There are over 108 groups dedicated to construction on LinkedIn. We participate in 39 on a daily basis, and we know what our click-through rate is on a per-group basis.”
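Holly's funnel lends itself to a back-of-the-envelope model. In this sketch the traffic, deal size, and cost figures are invented placeholders; only the conversion rates (16%, 27%, 8%) are the ones she quoted, and I'm reading her ROI formula as the conventional (revenue - cost) / cost ratio, which is an interpretation on my part:

```python
# Back-of-the-envelope version of the funnel Holly described.
monthly_visits = 10_000                 # placeholder total site traffic
social_visits = monthly_visits * 0.16   # 16% of visitors arrive via social media
qualified_leads = social_visits * 0.27  # 27% of those become qualified leads
customers = social_visits * 0.08        # 8% of social visitors become customers (assumed base)

revenue_per_customer = 5_000            # placeholder average deal size
social_revenue = customers * revenue_per_customer
social_cost = 4_000                     # placeholder: salary cost of hours spent on social media
roi = (social_revenue - social_cost) / social_cost

print(int(qualified_leads), int(customers), round(roi, 1))  # 432 128 159.0
```

The point of a model like this isn't the made-up numbers; it's that once the rates are tracked, sales forecasts translate directly into required traffic, which is exactly the projection Vico is doing.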
Way to go! And congrats on the Measurement Maven of the Month Award. --KDP
by Daphne Gray-Grant
What your hissy fit is trying to tell you.
Have you ever encountered a four-year-old who hasn't eaten enough? It's not pretty. It almost always involves a tantrum with screaming and tears -- and maybe even kicking and punching. But offer some apple juice, fishy crackers and a cheese string and, voila, the problem is usually solved.
Writing hissy fits aren't nearly as dramatic, but they are painful in their own unique way. They may involve throwing pencils across the desk, slamming drawers, staring endlessly at a blank screen and entertaining fantasies of another career -- something fun, like, say, forensic accounting or garbage collecting.
Of course, a quick glug of apple juice won't solve this kind of problem. That's because when it comes to writing, the food that matters is metaphorical.
Writing is a mostly inscrutable process that occurs inside our heads long before it's transferred to the page or screen via our fingers. In order to write well, we need ideas. And where do those ideas come from? Well, I can tell you they don't come from reading annual reports, strategic plans and marketing brochures.
To write easily, fluently and interestingly, we need to be "filled up" with thoughts and images. These come from going to movies, reading novels, taking walks in the park, talking with friends, listening to music. In short, we all need a "well" to draw upon; we cannot write if we're sucked entirely dry.
Remember: each and every piece of writing you create (note that word “create”!) leads to a deficit in your brain. Before your mental bank account goes into the “overdrawn” position, make certain you have a list of the fun things you enjoy. Then, be sure to replenish yourself with whatever "sustains" you -- whether it's reading a novel, going to a hockey match or playing a game of Twister with your kids.
This is not self-indulgent or a waste of time -- it's a necessity. You simply cannot work all the time. But you especially can’t work all the time as a writer. Otherwise you'll be like the four-year-old, kicking and screaming on the floor. And it's awfully hard to get much writing done from that position.
A former daily newspaper editor, Daphne Gray-Grant is a writing and editing coach and the author of 8 1⁄2 Steps to Writing Faster, Better. She offers a weekly newsletter on her website Publication Coach. It's brief. It's smart. And it's free.
School educators note the importance of the ‘three Rs,’ which they list as “reading, ’riting, and ’rithmetic” -- spellings intentionally mangled to indicate that many students, and even graduates, are not very good at most of them.
This is a useful prompt to reflect on what might comprise the three Rs of public relations and communication measurement. The ‘bean counters’ with finance and accounting backgrounds will immediately nominate ‘ROI’ for alphabetical canonization. Trouble is, they mean DROI -- dollars returned on investment -- or, even more narrowly, dollars returned on dollars invested (DRODI). Bean counters can’t spell, or understand PR. Their only unit of measurement is dollars, so they do not understand other forms of cost and other types of returns. Five thousand protestors pounding at the gates and three TV networks in the foyer do not register on the Richter scale of the ROIs.
Politicians mostly have one R word uppermost in their minds that guides their communication and measurement – ‘re-election’. Newspaper editors and marketing departments are nervously focused on ‘readership’, hoping to avoid another big R word, receivership. HR people say communication and measurement are all about ‘retention’. Those who promote environmentalism cleverly created their own three Rs – ‘reduce, reuse, recycle’ – as specific goals and metrics which identify success.
Immediately, these few examples illustrate that different people want different results from communication.
The Research Rs
Researchers who spend their life measuring things also have a number of pet R words such as ‘rigor’, ‘reliability’ and ‘replicability’. Rigor and reliability, in particular, are emphasized by researchers, but these are mistakenly seen by some practitioners as the same as statistical accuracy. Two common misconceptions are that research is all about statistics and that research is accurate. Very little if any research is accurate. Even the most mathematically precise research usually has a 10 per cent error rate or more. And some methods of research do not use statistics or strive for accuracy at all.
Rigor and reliability refer to following systematic procedures, and these procedures vary between quantitative and qualitative research methodologies. In quantitative research, collected data are analysed statistically, as the purpose is to identify averages (i.e. means), medians, and modes, as well as total numbers in various categories. In qualitative research, by contrast, the objective is intentionally to identify and study the breadth and diversity of responses – not to narrow them down to averages – and to explore emotional, perceptual, and attitudinal responses that cannot be fully expressed in numbers. No one method is preferable in all circumstances and no one approach is superior. Most researchers see benefits in both quantitative and qualitative research, and each answers different types of questions.
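For readers who want to see the quantitative side in concrete terms, here is a minimal sketch using Python's standard `statistics` module. The survey ratings are invented for illustration; they are not data from any study mentioned here.

```python
# Minimal sketch: the summary figures quantitative research identifies,
# computed over an invented set of survey ratings on a 1-5 scale.
from statistics import mean, median, mode

ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

print(mean(ratings))    # the average: 3.8
print(median(ratings))  # the middle value of the sorted ratings: 4.0
print(mode(ratings))    # the most frequent rating: 4
```

Note how the three figures can disagree: the average sits below the median and mode here because one low rating drags it down, which is exactly the kind of narrowing-to-a-number that qualitative research deliberately avoids.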
While there is no 100 per cent accuracy in research, reliability must be one of the three Rs of measurement. In quantitative research, reliability is calculated statistically, based on sample size and calculations such as standard deviation, while in qualitative research reliability is the overall result of following valid systematic procedures. In measurement such as content analysis, for example, reliable methods require coding guidelines, multiple coders, and intercoder reliability assessment, in which a sub-sample of content is coded by more than one coder and their coding compared. Wide variation is addressed by rejecting the data as unreliable, rebriefing the coders, and retraining them if necessary.
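An intercoder reliability check of the kind described above can be sketched in a few lines. This is an illustrative example only: the two coders' category judgments are invented, and the comparison uses simple percent agreement plus Cohen's kappa, a standard statistic that corrects agreement for chance.

```python
# Sketch of an intercoder reliability check: two coders independently code
# the same sub-sample of media items (categories invented for illustration).
from collections import Counter

coder_a = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos"]
coder_b = ["pos", "neg", "neu", "neu", "pos", "pos", "neu", "pos"]

n = len(coder_a)

# Raw percent agreement: how often the two coders assigned the same category.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement expected from each coder's category proportions alone.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)

# Cohen's kappa: agreement above chance, scaled to the maximum possible.
kappa = (observed - expected) / (1 - expected)
print(f"agreement: {observed:.0%}, kappa: {kappa:.2f}")
# prints "agreement: 75%, kappa: 0.60"
```

If kappa fell well below conventional thresholds, the response would be exactly what the passage describes: reject the data, rebrief the coders, and retrain if necessary.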
Practitioners need to recognize that if even trained researchers following systematic procedures end up with variations and error rates, then measurement methods that do not employ established research methods and procedures will certainly have far wider margins of error and variation. Hence, another of the three Rs of measurement is to use established research procedures, methods, and scales – not made-up scales and concocted metrics.
However, there is another R word that is even more important than reliability and research. The most important criterion of any measurement is that it must be relevant. When measuring a function with relations in its name – such as public relations, stakeholder relations, community relations, shareholder relations, media relations, and so on – it should be fairly obvious that the focus of measurement should be the state of those relations (i.e. relationships). Measuring organisational relationships in terms of dollars paid or received is as insensitive and inappropriate as measuring other relationships, such as one’s marriage, love life, or family connections, in monetary terms.
Similarly, if the objective of a communication program is to create awareness of something, awareness is the relevant variable that should be measured. If a behavioural outcome is the objective of communication, the incidence of the specific behaviour sought is what should be measured. This may include inquiries generated, registrations received (e.g. for an event or a subscription), trialling, purchasing a product or service, or a range of other behaviours such as starting a fitness program, giving up smoking, driving more attentively, and so on.
This key R word – that measurement must be ‘relevant’ to particular functions, particular programs, and particular objectives – is why a single PR metric, or even a single measurement standard, is not only unlikely to be achievable but would also be counter-productive and restrictive if ever realized. There is, however, a range of reliable research methods available that meet a wide range of needs – what I call a measurement toolbox.
So the three Rs of measurement, and the key to public relations gaining the professional respect it has long sought, are, in reverse order: Relevant, Reliable Research.
Jim Macnamara PhD, FPRIA, FAMI, CPM, FAMEC became Professor of Public Communication at the University of Technology Sydney in 2007 after a 30-year career working in journalism, public relations and media research which culminated in selling the CARMA Asia Pacific franchise which he founded to Media Monitors in 2006. He worked as Group Research Director with Media Monitors - CARMA Asia Pacific following the sale and continues as a Consultant with the Group.
I noticed today on Facebook that the Church of Jesus Christ of Latter-Day Saints is challenging people to view a YouTube video about the Book of Mormon, to try to boost it high enough in YouTube's rankings that it will be featured on YouTube's homepage, and thereby increase the exposure of the video and the Mormons' message.
Click here to visit the Facebook page Book of Mormon/YouTube Challenge. And if that doesn't work I've copied in the interesting part below.
What's fascinating is the explicit use of social media metrics in the Church's promotion of this challenge. See "Tips On How To Achieve This" in the excerpt below.
-- Bill Paarlberg, Editor, The Measurement Standard
(Please note: Lyman Kirkland, Manager, Social Media, The Church of Jesus Christ of Latter-day Saints, informs me that, "While many Mormons participated in this campaign, the Church did not organize it and had no involvement in it. Members of the Church who organized it and participated in it were representing themselves and not the Church, even though the video itself was produced by the Church.")
The Book of Mormon/YouTube Challenge
Monday, May 3, 2010 at 12:00am
Tuesday, May 4, 2010 at 12:00am
Please be sure to take this time to remind everyone of the Event date on Monday, May 3. We suggest updating your Facebook status to read, "Please remind everyone of this event by sharing it on Facebook, your blog, and with your email contacts http://www.facebook.com/#!/event.php?eid=108682145836290&index=1 "
EN ESPAÑOL - EN FRANÇAIS – EN PORTUGUÊS - NEDERLANDS
On May 3, 2010 (perhaps as part of your Family Home Evening program) if all reachable members of the Church of Jesus Christ of Latter-Day Saints--and any non members interested-- would follow the link http://www.youtube.com/watch?v=CkKblIMfmjI and watch the YouTube video of Jeffrey R. Holland bearing testimony of the truthfulness of the Book of Mormon, we could potentially achieve promoting that video to the YouTube homepage, based on volume of views.
TIPS ON HOW TO ACHIEVE THIS:
1 increased views - multiples views by one viewer count
2 comments - the more comments, the more valuable Youtube sees your video. Comments almost carry as much weight as views.
3 favorites - everyone needs to "favorite" the video.
4 thumbs up - **please take special care to do this as a group of individuals have banned together to "thumbs down" and this can negatively affect the popularity of the video.
5 subscriptions - daily subscriptions have a major impact on popularity. This also has longterm impact on the future videos coming to the channel because they will get emails from Youtube letting them know when MormonMessages posts new stuff.
6 reshare - post the video as a link on your profiles bearing testimony, and also make comments after friends' link so that it shows up in the news feeds, post to blogs and twitter. The more activity surrounding the video the more attention it will receive.
The repercussions of this could be great. YouTube reports:
YouTube Stats (US)
(comScore MediaMetrix April 2009)
"The YouTube Homepage is the highest-profile placement on the site... eleven million unique visitors a day in the US [and] 89.7 Million unique monthly visitors."
The exposure that the Book of Mormon could receive in one day is astronomical. Please keep in mind though, while it is ideal that this video be promoted to the homepage, it is the spirit felt from the message wherein the success lies.
As a member of the Church of Jesus Christ of Latter-Day Saints blessed with the knowledge contained in the Book of Mormon I seek to share with the world what I know to be true; what I know to bring happiness and hope in the times of travail; what I know to be the word of God.
For this challenge May 3 is calculated anytime in the 24 hour time period between 12 am to 11:59:59 pm Eastern Standard Time (for those participating from different time zones), but viewing before and after the event is helpful as well. Please invite both member and non-member alike to feel of the Spirit this message carries, mark your calendars and gather round in your families, wards and stakes and join me, May 3.
All my best,
*** As the group has more than 5000 members I am not able to send out a reminder email for the event so keep an eye on your Notifications on the right hand side of your Facebook Homepage reminding you of the Event. Also post the link to your profile, send out reminder emails and share, share, share!
Mathematics is so, so exciting in how it can explain the world, yet often so, so boring to read about or study. Those of us who work in public relations measurement or social media measurement have got to use statistics and math, but mostly don't feel much like actually messing about in the guts of any equations. (See "Don Stacks and the Loneliness of the Measurement Mathematician.")
Well it's our lucky day: Check out Steven Strogatz's Group Think in today's NYTimes. It's a nice introduction to Group Theory, "...one of the most versatile parts of math, [that] bridges the arts and sciences." And you'll learn more than you thought there ever was to know about turning your mattress. -- Bill Paarlberg, Editor, The Measurement Standard
That illustration is lifted right from the NYTimes article.