Here at The Measurement Standard, we've always tried to bring together public relations research and baseball; see "Baseball and Measurement: Now Who's On First? Have new ways of measuring player performance changed the way baseball is played?" and "Can This Reputation Be Saved? Major League Baseball's Juicy Scandal: You can't buff a turd."
Well, this morning the New York Times has an excellent piece concerning both baseball and statistics that, to fans of either field, could hardly be more exciting: "A Journey to Baseball’s Alternate Universe." It describes a mathematical simulation of the entire history of baseball, carried out to discover how likely Joe DiMaggio’s 56-game hitting streak really is. --Bill Paarlberg
IPRRC 2008 Wrap-Up
Boot camp for your mind.
by Katie Delahaye Paine
As always, IPRRC in Miami was boot camp for your mind. If you can survive three days of stimulating conversation, debates until dawn washed down with quantities of wine, and 100+ presentations to comprehend, you can probably survive anything. I still haven't fully recovered, but at least I've sobered up enough to sum up a few observations. In another month or so, all the papers will be published on the IPRRC website. In the meantime, here's a wrap-up.
IPRRC vs. South by Southwest Digital
First of all, at the same time the PR crowd was meeting in Miami, the social media crowd was gathering at South by Southwest Digital in Austin, Texas. Thanks to the magic of Twitter, there was even some cross-pollination of ideas between the two groups. But what was ironic was that whereas the folks in Austin were subjected to traditional people-on-stage-with-PowerPoint, those crazy PR people in Miami were gathering around tables arguing with the presenters. Here's what a typical table looked like:
In the end the effect was the same; only the tools were different. Attendees in Austin initially complained to each other via Twitter, and then finally shouted down the interviewer during the keynote with Facebook CEO Mark Zuckerberg. In contrast, Miami attendees gathered around each presenter at separate tables, constantly questioning and challenging them. In both cases, it was the dialogue among attendees that proved most valuable.
But that was about all the two conferences had in common. The IPRRC had distressingly few presentations about social media; less than ten percent of the 85 total papers discussed the impact of blogs or bloggers. What is frustrating is that it is precisely in the arena of social media that real solid research is the most desperately needed. The vast majority of discussions focused on testing of existing theories of crisis, relationships, and organizational structures.
The area that garnered the most attention was crisis communications. There were numerous papers on the topic, a few of which we summarize elsewhere; see "Three New Measurement Studies Provide Crisis Control Tips: Research presented at the IPRRC has practical results." The large number of crises in the past year -- pet food poisoning, mine disasters, toy recalls -- provided solid fodder for study. What was missing, of course, was the one answer that practitioners always want: "What is the recipe for perfect crisis handling?" The answer the IPRRC researchers provided was: "Well, it depends." There are plenty of good and bad ways to handle a crisis, and you still need to pick the one that best suits your particular marketplace.
PR and Business Education
Another significant theme of the conference was how PR education is organized. There were some great conversations about whether PR should be taught in business schools, or business taught in PR schools. The answer is clearly both. PR people need to be taught to think like business people and become part of the overall value proposition in a company, but at the same time business people need to better understand the role that PR plays in corporate health, welfare and reputation.
For anyone wanting to understand PR in other cultures, there was plenty of information, with presentations on PR in Turkey, Slovenia, Brazil, and Japan, as well as a comparison of PR in Western and Eastern Europe.
Ben-Piet Venter's paper on making PR a support function, similar to IT, made a lot of sense. And of course there was the usual conversation about how to "get into the dominant coalition" and get them to listen to you. The answer seemed to be: Stick it out long enough for a crisis to happen so they'll see how indispensable you really are.
Most of all I was left with lingering excitement about the next generation of PR folks, now coming up through the ranks. These are recent graduates who understand the power of social media and the fruitlessness of the old command and control structure. People who inherently look for data on which to base decisions, and rely less on "It's the way it's always been done," and more on "This is the decision that the data supports." For them the route to the dominant coalition and the proverbial "seat at the table" will be quick and direct, because they'll always have the data to support their decisions.
(Okay, so I'm hopelessly prejudiced. I just got my beautiful new HP laptop pre-loaded with Vista and I hate it. It crashes about five times an hour, and my experience so far with Vista guarantees that my next laptop will be a Mac.)
That having been said, we are naming Microsoft our Measurement Menace of the Month for their "Engagement Mapping" product. Not that we don't believe in measuring engagement (we really like that whole concept; see this recent article), but Microsoft has thrown its weight into the whole discussion without providing any transparency or information as to how they "map engagement," or any details at all as to what is behind this black box. --KDP
The Measurement Maven of the Month:
John List, Economics Professor at the University of Chicago
When three or more people forward you the same article saying, "You have to read this!" -- it makes you pay attention. That's what happened with the New York Times Magazine piece about John List and his research on why people give to philanthropic organizations. And, since KDPaine & Partners is doing quite a lot of work in that area, we were thrilled with his main premise: Non-profits need to be making decisions based on data, not on long-standing beliefs that may or may not still hold water.
Professor List brought together results from several research projects to demonstrate some interesting, if unexpected, conclusions. One of them was that, for stimulating donations in fundraising campaigns, challenge grant seed money works better than matching grants.
Of course we'd like to factor in some PR metrics into his research to figure out what impact headlines have on charitable giving. We know PR works with the ASPCA (see this article from last fall) but we'd love to see it factored into a broader range of philanthropic efforts. Nonetheless, he's this month's Measurement Maven for the rigorous approach he brings to the subject. --KDP
Here at KDPaine & Partners we've been studying social media and visitor engagement with social media sites, because we think it might very well have something to say about the eventual actions of the social media visitors, like whether or not they recommend or buy a company's products. But the HyveUp blog has posed an interesting question: Does online activity actually predict offline behavior? The excellent point there is that maybe people behave differently online:
"Our online life is often used as a frustration outlet... Sometimes, it just feels good to be somebody else online, or to support the candidate that it is taboo to support in your small town. Do stuff you'd never do in real life. The online world resembles a chimerical projection of our social fantasies."
This is going to take a bit of study to resolve, and my hunch is that sometimes online activity does predict offline behavior, and sometimes it doesn't. Here's some data that bears on the question, YouTube views and comments compared to voting behavior:
At least in this case, online activity can and does closely correlate with offline behavior. --Bill Paarlberg
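The views-versus-votes comparison behind that conclusion can be sketched as a simple correlation computation. The numbers below are hypothetical stand-ins for illustration only, not the actual YouTube or primary data, and the function is a plain textbook Pearson coefficient:

```python
import math

# Hypothetical stand-in numbers -- NOT the actual YouTube or primary data.
views = [1_200_000, 450_000, 300_000, 90_000]   # video views per candidate
votes = [380_000, 150_000, 110_000, 40_000]     # primary votes per candidate

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(views, votes), 3))
```

A coefficient near 1.0 would support the "online predicts offline" claim for a given data set; values near zero would support the HyveUp blog's skepticism.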
Social Media Measurement
Engagement in social media: Web stats, visitor behavior, and relationship theory.
by Katie Delahaye Paine
This article is condensed from a paper submitted to the 11th Annual International Public Relations Research Conference.
"If we can put a man in orbit, why can't we determine the effectiveness of our communications? The reason is simple and perhaps, therefore, a little old-fashioned: people, human beings with a wide range of choice. Unpredictable, cantankerous, capricious, motivated by innumerable conflicting interests, and conflicting desires."
--Ralph D. Paine, Publisher, Fortune Magazine, in a 1960 speech to the St. Louis Ad Club
Modern technology has come up with many good ways to measure what human beings read, watch and see, but comparatively few ways to measure -- as my father said half a century ago -- those "unpredictable, cantankerous, capricious... conflicting interests and conflicting desires." The recent rise in the influence of social media has turned the entire communications paradigm upside down. Counting column inches and eyeballs is irrelevant when a single YouTube video enjoys a larger audience than Monday Night Football, the average consumer is bombarded with 5000 messages a day, and 90% of CEOs say they are dissatisfied with how their CMOs measure results.
The basic problem is that we have years of research that says if you "expose" 1 million consumers to a message (or buy 20 GRPs) you will sell X cases of shampoo, soda, or soap. We have no data that says if 1 million people download your YouTube video, you'll sell any shampoo at all. What we now want to know is how social media affects users' behavior.
Engagement: The Relationship Between the User and the Brand
Like most other buzzwords, "engagement" has come a long way from its original meaning of "an agreement to marry." Essentially, it started with the notion that a website or a blog was "engaging" enough to get a reader to begin to develop a relationship with the brand.
As more and more advertisers and media types realized that hits really do stand for "How Idiots Track Success," and that even unique page views were suspect (given the enormous variation in such statistics), people began to speak of measuring engagement -- not just how "sticky" the site was, but the extent to which it enhanced the relationship between the user and the brand. Advertisers now want to measure a site's ability to create an experience that earns a visitor's loyalty and, with luck, his or her business. As a result, "engagement" now means everything from the number of times a visitor returns to the site to the time spent online.
Another way to think of engagement is as the fourth step in a five-step process that the individual user goes through:
Engagement According to Scoble
Popular blogger Robert Scoble (2006) has suggested that engagement is a valid measure of user interaction with, and the authority of, Internet-based social media channels. That is, engagement is a way to determine whether you are really having a dialog or just yelling ever more loudly. His premise is that by measuring activity on a blog or social media website as a sign of engagement, you can predict users' behavior. In other words, if they come back to a corporate blog over and over again, they'll eventually buy. If it's a YouTube video and they watch, rate, or comment on it, they are more likely to pass it on to their friends and maybe even take some other action as a result.
Brian Haven of Forrester Research picked up on Scoble's premise and proposed measuring engagement based on a variety of tangible and intangible factors, including links, trackbacks, comments, and the frequency, sentiment, and tonality of comments. He defines engagement as the level of involvement, interaction, intimacy, and influence an individual has with a brand over time:
Web Analytics expert Eric Peterson, author of Web Analytics Demystified, Web Site Measurement Hacks, and The Big Book of Key Performance Indicators, has proposed alternative measures of engagement based on Web metrics. Peterson suggests that if you want to measure engagement you need to measure stats like the following:
The problem with Peterson's metrics is that, for most organizations, that data is available only for their own site, not for competing sites, so there is no way to benchmark how "engaged" visitors are with one's own site vs. the competition.
While both Peterson and Haven contribute important ideas to the engagement discussion, I suggest that measuring engagement necessitates following the actions and desires of the customer.
An Engagement Index? Not Yet.
There is no such thing, yet, as an engagement index, but there has been a lot of talk about the possibility. Both Scoble and Peterson suggest that their metrics could be reduced to a single index, but they don't say how.
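To make the missing "how" concrete, here is one way such a reduction could work: a weighted sum over whatever per-visitor metrics are available. Everything here is an illustrative assumption -- the metric names and weights are invented for this sketch and are not anything Scoble, Haven, or Peterson has published:

```python
# Hypothetical weights: how much each behavior "counts" toward engagement.
# These numbers are invented for illustration, not drawn from any source.
WEIGHTS = {
    "return_visits": 3.0,
    "comments": 5.0,
    "minutes_on_site": 0.5,
    "pages_per_visit": 1.0,
}

def engagement_index(metrics):
    """Reduce a dict of per-visitor metrics to one number via a weighted sum."""
    return sum(WEIGHTS[name] * value
               for name, value in metrics.items()
               if name in WEIGHTS)

casual = {"return_visits": 1, "comments": 0,
          "minutes_on_site": 2, "pages_per_visit": 1.5}
devoted = {"return_visits": 8, "comments": 3,
           "minutes_on_site": 12, "pages_per_visit": 4}

print(engagement_index(casual))   # prints 5.5
print(engagement_index(devoted))  # prints 49.0
```

The hard, unsolved part is of course choosing the weights; any single index buries those judgment calls inside its coefficients, which is exactly why no published index has won out.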
(Others have written excellent discussions of engagement, see, for instance, Steve Bridger's nfp 2.0 blog.)
Most of the discussion on the topic is centered on the necessity for advertisers to quantify the impact of their online advertising. Microsoft's new black-box "Engagement Mapping" is designed to make advertisers on Microsoft websites more comfortable with their data (see our Measurement Menace Award for this month). comScore's and Nielsen's efforts are designed to give more meaning to the numbers they provide advertisers.
Unfortunately, metrics that make advertisers happy are not necessarily very useful for other communications functions. As internal and external communications functions become more involved in social media, they too need a way to measure engagement, but numbers from Microsoft, comScore, and Nielsen are available only for large consumer sites, not corporate blogs. More problematic is that those numbers do not factor in the newer, more popular social networking sites like Facebook, YouTube, and Twitter.
Engagement is a Relationship
I suggest that "engagement" is just another way to say "relationship, but a minor one." So, to the metrics suggested by Scoble and Peterson, I suggest we add those from relationship theory. At some point, you just need to come right out and ask your audience:
In short, we suggest incorporating the concepts of the Grunig Relationship Instrument into the engagement measurement.
Unless you incorporate relationship measurement into the mix, you end up with just data rather than insight. While you can track behavior with increasing accuracy, all the web metrics in the world may not answer the fundamental question of "Why?" "Why did they stop coming to your site?" "Why are they spending less time there?" Or, more critically, "Why are they buying less?" Without a true understanding of the nature of the relationship, you won't be able to do anything to improve matters once you find out what the problems are.
Which leads me to some final, unanswered questions: What is the difference between engagement and relationship? In fact, do we really know that engagement is something distinct -- distinct from web stats and distinct from relationship theory? Suppose we do use Grunig's questions to measure engagement, how do we know we are measuring "engagement" rather than "relationships?" If you think you know, please let me know.
Three New Measurement Studies Provide Crisis Control Tips
Research presented at the IPRRC has practical results.
by Katie Delahaye Paine
We've been saying forever to anyone who'll listen that PR measurement is not just about demonstrating ROI or proving value, but is really about having data on which to make better decisions. Never was that more evident than in the plethora of papers on crisis communications presented at this year's IPRRC in Miami. I personally listened to a dozen papers on the topic, and there were another dozen or so that I didn't even get to.
IPRRC researchers studied the impact on crisis communications of everything from involvement to intimacy. A lot of the findings fell into the "duh" category (as in, "we professionals probably knew that already"), but it's always useful to have solid data to back things up. Here are three studies with some results that will come in handy.
1. The More They Know, the Angrier They Get
Yoonhyeung Choi and Ying-Hsuan Lin of Michigan State presented a paper on the Mattel product recall that has interesting implications for any consumer company under fire that is trying to manage customer expectations. Choi and Lin compared what moms and mommy-bloggers had to say about the toy recall to how the daily newspapers reported it. As it turns out, the four major newspapers studied blamed the Chinese manufacturers more than twice as often as the consumers did. The consumers were twice as likely to point their collective fingers at Mattel as at China.
As the crisis unfolded, and recall followed upon recall, the media's portrayal of China was consistently negative, and the reputation of Chinese manufacturers declined over time. Among consumers, however, anger was more frequently directed at Mattel and its sister company Fisher-Price. What this tells us is that even though consumers may initially get their information from the media, their long-term opinions are formed less by the media and more by people like themselves. The bad news is that your customers are likely to ignore any mitigating factors the media reports on; the good news is that if you have highly engaged consumers, they may also ignore the pounding you're getting in the media.
Another conclusion from Choi and Lin is that engaged consumers are more likely to dig deeper than the media into a crisis, looking to go beyond the headlines to find the real issues at hand. And the more they know, the angrier they get. "Consumers with high involvement are more likely to scrutinize and elaborate crisis information and generate more counter arguments as they process the crisis information, as covered in newspapers."
The important conclusion the authors draw is that monitoring media in a crisis is no longer enough. You need a clear understanding of how your stakeholders are responding to the media, particularly as consumer generated media continues to increase in awareness and credibility. More importantly, success should be measured by the speed with which one's headlines go away. Essentially what Choi & Lin found was that the more media exposure a crisis got, the angrier the consumers got at the company. Lesson learned: Get the story out of the headlines and into the back pages as fast as you possibly can.
2. Good Relationships Mean Fewer Bad Rumors
An equally intriguing paper was presented by Hun-Jim Kang of Pennsylvania State, with co-authors Karina Garcia Ruono and, once again, Ying-Hsuan Lin of Michigan State. This study examined the manner in which rumors are spread, specifically the impact of relationships on the rumor mill. The authors started with the premise that there are two kinds of rumors: "dread" rumors, which foretell something bad about to happen, and "wish" rumors, which foresee something good happening. The study compared the speed with which dread vs. wish rumors circulate, as well as the credibility of the two kinds of rumors.
Their hypothesis was that the spread of rumors is closely tied to the health of the organization-public relationships (OPR) behind the entity under discussion. OPR was defined according to the standard Grunig terms such as trust, control mutuality, satisfaction, and commitment, with each participant rating statements on a 7-point scale. The study was conducted on the campus of Michigan State among 109 undergraduates with a mean age of 22, so it may not be relevant if your target audience is senior citizens. Nonetheless, the findings were fascinating.
Not surprisingly, the propensity to spread dread rumors was significantly higher than the propensity to spread wish rumors, and the credibility of dread rumors was also higher. But the most telling information was on the impact of relationships: The health of your relationships has a great deal to do with how likely people are to spread nasty rumors about you. People with low relationship satisfaction were more likely to spread dread rumors than people with high relationship satisfaction. Additionally, people with good relationship scores were more likely to check the validity of rumors before passing them on. The clear lesson: if you want to squelch rumors, keep your relationships healthy and strong.
3. If You Are Innocent, Act Indignant
Then there were the really surprising and interesting findings of someone (I'm sorry, I've forgotten your name) who studied communications in the military. He concluded that when you are in crisis and you are in the right -- i.e., your organization has not done anything wrong -- the public position most likely to generate a positive response from your target audience is an aggressive, proactive one. So if you need ammo to keep those lawyers quiet, you've got it. Again, something we always knew, but it's nice to have it verified.
Check out the article in today's New York Times about the StrawberryFrog-conceived Scion website where you can design a custom Toyota Scion logo. Here's one I just made:
Then go there and design your own. Even though those of us in social media measurement haven't yet quite defined "engagement," there must be some serious amount of it happening with this website. So how would you measure it? Count the number of custom logos on Scions? Or maybe on any random sample of cars? A survey on pride of ownership? Resale value?
(The website has some rather bizarre Terms of Use: the site is only "open" to legal residents of the U.S. over the age of eighteen. Good luck enforcing that. Reverse psychology or lawyers run amok?) --WTP
The Paine of Measurement
How online measures of engagement have predicted recent primary results.
Ever wondered what the effectiveness of political lawn signs is? Supposedly, every lawn sign represents six votes for the candidate. Or maybe ten votes, depending on what you read. And there's a theory in political circles that if you can get someone to put out a lawn sign, then that person is committed enough to not just vote for you, but also to encourage his or her friends to vote for you as well. So, each additional lawn sign means more than just one more vote, it means more of something even more valuable and a lot more difficult to pin down: more loyalty or commitment or what we in communications call engagement.
To my knowledge, no one has ever done a scientific study of how lawn sign displays influence voting habits. But my completely unscientific study of New Hampshire lawns this fall more or less predicted the outcome of our First in the Nation primary: Everywhere you went there were lots of Obama and Ron Paul signs, and both did much better than the polls predicted.
Now let's transfer this scenario into the world of social media. Can online measures of engagement predict votes? I argue that they can and have done so recently:
And in the end, Obama did better in the primaries than the early polls suggested. The primary results have proven that Obama has a stronger than expected following, as hinted at by the strong online engagement we found.
The point here about engagement is bigger than just politics. How and why is engagement a stronger or different measure than just impressions? If, by joining a group, rating a video, or following someone on Twitter, you are actually thinking or behaving differently than if you just viewed an ad or a message, then measuring these signs of engagement is critical to every marketer. In order to hang on to advertising dollars, media companies will need to provide this data. And the good news is that the data is there; they just need to release it.
And finally, I can't help but see engagement as a kind of bridge between measuring outputs and measuring relationships. (Most of you are aware of my recently published book Measuring Public Relationships, learn more here.) If you measure an output like impressions, you only know what has happened to an audience. But if you measure engagement, you are measuring what is done by an audience as the result of their relationship with your output. How does measuring engagement fit in with measuring relationships? That's a good question, let me know if you have the answer.
Here's wishing you large measures of success,
Social Media Measurement
Or... What's the ROI of my living room?
by Katie Delahaye Paine
As everyone who has talked to me recently knows, I'm a serious social media evangelista. And as I travel around I'm constantly confronted with business people who say: "Social media? I don't get it! Who has time? Why should I bother?"
My simple answer has always been that blogging (or Twitter, or Facebook) is a way to engage in a conversation with your customers or your employees.
(And if, Mr. CEO, you do not want to engage your customers or your employees, you deserve to be fired. If not shot.)
But I've been told that that attitude isn't particularly helpful when you're talking to people who think Facebook is still "just a college thing," and who think "twittering" must be something dirty.
So here's my new explanation (a bit long-winded, but, please, bear with me here):
When I built my house, it was designed to be the capital of social capital, with a huge living room, kitchen, and dining room so that lots and lots of people could gather there. And that's just what happened:
So are you getting my point? Social Media is just a great big version of my living room. Any social network -- Twitter, Facebook or MyRagan -- starts off as one big noisy place. But soon, people of like minds and like interests start to find each other and sometimes they spin off and form their own separate groups.
So What's the ROI of my Living Room?
Which gets us to measurement. The hot topic right now -- and what everyone wants to know -- is: How do you measure the ROI of the effort you put into these groups?
To answer that, let's go back to my living room. The bride and groom's goal was to save money. The Library's was to build their mailing list. The politician wanted votes and the musician wanted to sell CDs. If they wanted to measure the ROI of their events in my living room, they'd compare their investment in effort to the particular return that was important to them.
And What's My ROI for Social Media?
I, on the other hand, use social media, and (often) my living room, to satisfy my thirst for knowledge and intellectual stimulation. Take Twitter: Do I appreciate the fact that traffic on both my blog and my website has picked up since I started Twittering? Absolutely. But mostly I Twitter because it makes me smarter. It allows me to follow interesting people that I wouldn't ever meet in my living room, and who would never read my blog.
I mess around in Facebook because it's a great way to share stuff with my friends and colleagues and it's a huge time saver when you're trying to pull together an event.
I blog because I like having conversations with people – about my business, my services, and the world as a whole.
And that ends my longwinded explanation of why you or anyone might want to bother with social media. It depends on you and your goals. And as for measurement, it's the same as for any marketing effort; first be clear about what you're trying to achieve, then go about measuring it.
Making It Count
Note to Salesgenie.com's CEO: A little investment in research generates a huge return.
Vinod Gupta, the InfoUSA CEO who owns Salesgenie.com, learned this lesson the hard way when he had to stop airing his television commercial after its debut during Super Bowl XLII because of protests about its culturally insensitive presentation of animated pandas with Chinese accents. It was an expensive lesson: The cost of airtime alone for a 30-second spot during the 2008 Super Bowl was $2.7 million. View the ad here at YouTube:
Gupta, who wrote and produced the ad himself, told USA Today (February 11, 2008) that next year, he'll test his ads with consumer focus groups. This year, he said, he only ran the ad by some friends. "None said it was offensive," he said.
A Major Retailer Discovers that Ideas are Beautiful, but Reality Is Something Else
Vinod Gupta isn't the only top executive to become enamored with his own creativity. The Salesgenie.com ad debacle reminded me of another marketing campaign developed by a retail chain leadership team. Fortunately, those executives had the good sense to test the materials before going public. I'll never forget the shock experienced by the campaign's creators when they saw the reactions of their intended audiences.
By the time I was retained, a seven-figure budget had already gone into the creation of advertising and internal communications materials that featured real employees. It was my job to test the communications with focus groups to determine if they were relevant and would produce the desired reactions and results.
The stores in the materials looked wonderful. Inviting. Immaculate. Smiling employees welcomed you. Salespeople exhibited pride in their work. Several mentioned their impressive employee benefits: health insurance, 401K plans and paid vacations. Others spoke of their aspirations to become store managers. These proud, joyful team members encouraged the public to shop at their stores and consider working with them.
I showed the employees in the first focus group the materials and asked the following questions:
"What's the main idea behind what you see?"
"Would you notice this ad if you were looking through a magazine?"
"How does this information make you feel about working there?"
Immediately I knew we had a major problem. The employees looked angry, confused and resentful. Sample responses included:
The employees filed out. As I prepared for the second group, I popped into the adjoining room where a team of marketing and ad agency executives were watching through a one-way mirror. The creative director had a light film of sweat glistening on his forehead. No one looked at me. They were engaged in an intense discussion that absorbed all of their attention.
I showed the materials to the next focus group, which consisted of seven customers. I asked:
"Any immediate thoughts or feelings?"
"Is the ad clear?"
"How does this make you feel about shopping there?"
The customers were equally incredulous:
Again I ducked into the next room to confer with my clients. Now the creative director was openly mopping his forehead, and the marketing director had gone ashen. He whispered to his assistant, "Haven't these benefits gone national?"
The assistant whispered back, "Guess not."
Where the Reality Hits the Road
I said to the marketing team, "For now, the only thing we can do is to proceed with these focus groups and instruct the participants to react to the ads as if they are true. Our goal today isn't to poll them on their actual experiences in your stores; it's to get their responses to this campaign."
The team agreed that this was the best use of our time; however, it was clear that there would be some serious accountability checking back at home office.
I proceeded with the same questions to the remaining 15 focus groups. Then my team and I summarized our findings in a written report that we presented to the client two weeks later.
Upon reviewing the report, the client called me and said, "I must admit, this hurts. We spent a lot of money on those materials, but I'd rather know this now than after rolling out a national, multi-million dollar campaign."
As painful as the findings were, they served a valuable purpose.
The overriding benefit of testing your marketing communications is ensuring that your communications are meaningful to key constituents. In other words, the resources that you devote to research now will pay off in spades when you move forward with communications programs and materials that hit the mark. By ensuring that your marketing is compelling, you can stop pursuing -- and start attracting -- your intended audiences.
Schade is president of JRS Consulting, Inc., a firm that helps organizations build leading brands and efficiently attract and motivate employees and customers. Subscribe to the free JRS newsletter on
© JRS Consulting, Inc. 2008
by Katie Delahaye Paine
1. Set up Google Analytics on your blog to find out how many repeat visitors you have. How many pages per visit do they check out? How many come back more than three times a week? How many return and spend more than a second or two on the site?
2. Post a Vizu poll on your blog and see how many people respond.
3. Go to xinure and enter the URL of your choice to find out how well it is doing in search engines, links, social bookmarks and a whole bunch of other stats.
4. With many of the leading blog providers like TypePad, check your stats to find out how many people have subscribed, and how many visits per day you're receiving.
5. What's the Conversation Index (the ratio of postings to comments)? In the blogosphere any comment is a good comment because it shows that people are engaged enough in what you are saying to take the time to respond.
6. If you have a presence on Facebook, how many people have joined your group?
7. Ask a question on Facebook and see how many people respond.
8. If you're on Twitter, how many followers do you have? More importantly, how many responses do you get when you ask them a question?
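The Conversation Index mentioned in the list above is the one item that reduces to a formula. Here is a minimal sketch, using the definition given in the list (postings divided by comments) with made-up sample counts; some writers invert the ratio, so check which convention your tool uses:

```python
def conversation_index(posts, comments):
    """Posts divided by comments, per the definition in the list above.

    A lower value means readers are talking back more; in the
    blogosphere, any comment is a sign of engagement.
    """
    if comments == 0:
        return float("inf")  # nobody is commenting at all
    return posts / comments

# Hypothetical blog: 40 posts drew 120 comments -- three comments per post.
print(conversation_index(40, 120))
```

Tracked month over month, a falling index would suggest your readers are becoming more engaged in the conversation; a rising one, less.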
Many public relations professionals find that media analysis or survey research is sufficient to measure their programs. The more adventurous bring in external data and more esoteric measures of success, like donations, number of new memberships, or even lives saved. In politics, KDPaine & Partners has recently used YouTube video counts to predict primary results, and our Ms. Paine has ruminated on the possibility of using the number of political lawn signs as an election predictor. Not until now, however, has anyone used dreams to measure political progress.
In this week's New Yorker, Ben McGrath writes about Sheila Heti and her new blog "I Dream of Hillary... I Dream of Barack." This blog, a repository of reader-submitted dreams about the candidates, can be interpreted as a rough poll of interest in those candidates. As Mr. McGrath says, "...what if the recurrences of Presidential candidates in people's dreams were meaningful in the aggregate?" As of the writing of the article, "...Obama's edge in the over-all dream count... was roughly equivalent to his lead in the latest Gallup poll."
Ms. Heti's blog has been so successful that she plans to open an "I Dream of McCain" section, so in the future we will have bipartisan dream data to go on. --Bill Paarlberg, Editor, The Measurement Standard