Could cooperation in Congress or peace in the Middle East be next? At today's meeting of the IPR Commission on Measurement and Evaluation, we approved a set of standards for traditional media measurement. The IPR has the full paper up on its website, but the gist of the standards is:
Item for media analysis -- includes any of the following: an article in print media (e.g. New York Times), an article in the online version of print media (e.g. NYTimes.com), an article in an online publication (huffingtonpost.com), a broadcast segment (TV or radio), a news item on the website of a broadcast channel or station, a blog post (e.g. WSJ health blog), an analyst report, etc. This document does not cover other forms of earned media such as comments on Facebook or Twitter.
Impressions - the number of people having the opportunity for exposure to a media story; also known as “opportunity to see” (OTS); usually refers to the total audited circulation of a publication, the verified audience reach of a broadcast vehicle, or the viewers of an online news story.
Mention – reference to a topic, company, product, spokesperson or issue which is the focus (or one of the focal points) of the media analysis. So, one article might mention a product, a spokesperson, a key issue, a company, etc., all of which are intended to be coded as part of the analysis. A single item may contain a single mention or 100 mentions each of which may be measured as part of the analysis.
Tone - measures how a target audience is likely to feel about the individual, company, product or topic after reading/viewing/listening to the item; typically defined as positive, neutral/balanced, or negative.[i]
Standard #1 – How to Calculate Impressions
- At the very least, professional media analysis offers transparency: any reporting clearly notes the basis for determining impressions, and there is consistency and agreement a) among the internal and external teams preparing these measures and b) about the sources used to calculate the numbers over time.
- For print media, impressions should be based on circulation figures such as those provided by the publication, or through resources such as the Audit Bureau of Circulation (ABC) or Nielsen. Multipliers are not recommended for calculating impressions.[ii]
- Some organizations use multipliers to account for the greater credibility of earned media coverage vs. paid media coverage. However, unless there is a client-specific study to prove the impact of earned vs. paid, the use of multipliers is not justified.
- Others apply multipliers when calculating impressions to represent the “pass-along” effect - when there are multiple readers of a magazine or newspaper thus rendering audited circulation “insufficient.” Without verified proof for specific media outlets, such as a study from the publisher, this approach is not recommended.
- If you must use multipliers because they have been used historically in your organization and/or are part of your performance targets or Key Performance Indicators (KPIs), they should be applied consistently using a predetermined factor with a documented methodology. When possible, a transition plan should be developed to eliminate the use of multipliers.
- For online media – impressions should be calculated by dividing the number of unique visitors per month by the number of days in the month to get the number of daily views. Impressions should be based on the unique URL or sub-domain for the item (e.g., www.yahoo.com vs. finance.yahoo.com). Unique visitors per month can be sourced through several services, such as Compete.com or Nielsen NetRatings.
- For broadcast – use the audience numbers provided by the broadcast monitoring service, which are usually sourced from Nielsen. Again, be consistent.
- For example, a monitoring report for a single clip typically includes the following: Time: 9:30am, Aired On: NBC, Show: Today (6/8), Estimated Audience Number: 5,358,181
- For wire services (AP, Bloomberg, Reuters, etc.) – no impressions are assigned to stories which are simply carried by wire services, only to the stories that they generate in other media. Stories attributed to wire sources can be aggregated separately if that information is useful.
NOTE: Impressions ≠ awareness. “Awareness” exists only in people’s minds and must be measured using other research tools. Impressions are indicative of the opportunity to see (OTS). Consider OTS as an alternative nomenclature to better clarify what “impressions” really mean – potential to see/read and a potential precursor to “awareness.”
Depending on the nature of your work, you may want to consider using “volume of articles” only and eliminating “impressions” completely. This is useful when top-tier media titles are the focus of the analysis: their impressions are often low compared to consumer media, but that is irrelevant since the publication reaches a highly targeted audience.
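The impressions rules in Standard #1 can be sketched as a simple per-channel calculation. The following is a minimal illustration; the clip records, figures, and field names are hypothetical (the broadcast audience number echoes the monitoring-report example above), not real audience data.

```python
def online_daily_impressions(unique_visitors_per_month: int, days_in_month: int) -> int:
    """Online impressions: monthly unique visitors divided by days in the month."""
    return round(unique_visitors_per_month / days_in_month)

# Hypothetical clip list; all figures are illustrative only
clips = [
    {"channel": "print", "circulation": 875_000},              # audited (e.g. ABC) figure, no multiplier
    {"channel": "online", "uniques_per_month": 9_300_000, "days": 30},
    {"channel": "broadcast", "estimated_audience": 5_358_181}, # from the monitoring report
    {"channel": "wire"},                                       # wire carriage itself gets no impressions
]

def impressions(clip: dict) -> int:
    if clip["channel"] == "print":
        return clip["circulation"]
    if clip["channel"] == "online":
        return online_daily_impressions(clip["uniques_per_month"], clip["days"])
    if clip["channel"] == "broadcast":
        return clip["estimated_audience"]
    return 0  # wire stories: count their pick-ups in other media instead

total = sum(impressions(c) for c in clips)  # 875,000 + 310,000 + 5,358,181 + 0
```

Note that, per the standard, no multiplier is applied anywhere: each channel reports its audited or service-provided number as-is.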
Standard #2 – Items for Analysis – What Counts as a Media “Hit”
The guidance in this area: a story counts only if it has passed through some form of “editorial filter,” i.e., a person has made a decision to run or not run the story. Editorial or journalistic imprimatur is considered one of PR’s unique positive differentiators as a form of validated information as opposed to simple “assertion.”
- Reprints or syndication of an article - count as distinct media hits because they appear in unique, individual media titles with different readership. For example, a PC World story on facial recognition appearing on May 29, 2012 was syndicated through IDG News and also featured on InfoWorld.com and PCAdvisor.co.uk. All three placements count as three separate articles, each of which generates its own OTS.
- Wire story pick-up – each media outlet running the story counts as a separate ‘hit’ because they have different readership. For example, an Associated Press story on New York City’s proposed ban on the sale of oversized sodas on May 31, 2012 appeared on USAToday.com, BusinessWeek.com and CBSNews.com to name only a few. Each of these placements counts as a “hit” for analysis.
- Same story updated on the wire or online media multiple times in one day – count only once in a 24-hour period; use the latest, most updated version for analysis. Do not count multiple versions appearing on the same day.
NOTE: News aggregation systems such as Factiva, LexisNexis, Google News, etc. may manage syndicated stories and wire service updates in different ways. For example, services like Factiva do not typically provide copies of syndicated or wire stories appearing in publications aside from the original. So, if it is important to you to include a copy of every appearance in your analysis, you may need to use a traditional clipping service. If you are just seeking a representative sample of coverage, aggregators or web searches should be adequate.
- Article appearing in both the online and print version of media – count both articles because the readership is different for each channel. Note that some news vehicles may combine circulation numbers.
- Bylined “thought leadership” features by company and brand executives – count as an item for analysis, assuming they represent planned communication, even if they offer no company- or brand-specific messaging other than what appears in the byline.
- Press release pick-ups generated from ‘controlled vehicles’, such as posting your story on PR Newswire, Business Newswire, and other commercial wire services - should not count as a ‘hit’ unless they appear in media sections which are generally accessible to the public. While these services are valuable and help their clients meet certain reporting requirements with mainstream media pick-up, they also distribute content which is automatically posted on websites and cannot be sourced independently via common search tools or the news outlet itself. In these cases the content does not qualify as “editorially validated”: the placements are not “earned” but paid for via the newswire service and, as such, do not qualify as news content. If for any reason you must count ‘controlled vehicles’ as hits, be prepared to provide separate numbers for original stories and controlled stories, although actual OTS for commercially distributed content will be very difficult to quantify.
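The “what counts as a hit” rules above amount to a de-duplication step: every distinct outlet counts, but same-day updates of the same story in the same outlet collapse to the latest version. A minimal sketch, using hypothetical records loosely based on the soda-ban example:

```python
from datetime import datetime

# Hypothetical appearances of one wire story across outlets (times are invented)
items = [
    {"outlet": "USAToday.com",     "story": "soda-ban", "time": datetime(2012, 5, 31, 9, 0)},
    {"outlet": "BusinessWeek.com", "story": "soda-ban", "time": datetime(2012, 5, 31, 9, 5)},
    # Same outlet, same story, updated later the same day -> only the latest counts
    {"outlet": "USAToday.com",     "story": "soda-ban", "time": datetime(2012, 5, 31, 15, 30)},
]

def qualifying_hits(items):
    """One hit per (outlet, story, calendar day); the latest version wins."""
    latest = {}
    for item in items:
        key = (item["outlet"], item["story"], item["time"].date())
        if key not in latest or item["time"] > latest[key]["time"]:
            latest[key] = item
    return list(latest.values())

hits = qualifying_hits(items)  # two hits: one per outlet, USAToday.com at 15:30
```

Each outlet still contributes its own hit (different readership); only intra-day updates within one outlet are collapsed.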
Standard #3 – How to Calculate Tone or Sentiment
- The consistency and transparency principles hold true here, too. Whatever process is defined and applied, the methodology must earn acceptance by the client from the beginning and must be consistently applied throughout any analysis.
- There are several approaches for judging the sentiment of media coverage, and many organizations have developed their own systems. One common practice is known as “latent analysis”: look at the entire article or mention and judge the item as a whole based on the overall tone. A second approach is called “manifest analysis”: it treats an item as a series of sentences or paragraphs, judges each one on its sentiment, and then adds up the total number of positives and negatives to get an overall score. A third approach avoids assessing tone based on the whole story and instead makes the evaluation on the basis of pre-determined positive and negative messages present in the article. There are pros and cons to each approach. The important point is to be consistent.
- If a scale is used (e.g., a 5-point scale: very positive, somewhat positive, neutral, somewhat negative, and very negative), it must be established and defined, with examples. We also recommend including a category for balanced coverage – i.e., both positive and negative sentiment occurring in the same story. Typical definitions are:
- Positive – An item leaves the reader more likely to support, recommend, and/or work or do business with the brand.
- Neutral – An item contains no sentiment at all; it just reports the facts. Even if the news is negative, an article can be neutral if it simply reports the facts, without any editorial commentary. In an unfavorable environment, neutral may be the best you can achieve. Base your coding on whether or not the clip makes people more or less likely to do business with your organization.
- Negative – An item leaves the reader less likely to support, recommend, and/or work or do business with the brand.
- Balanced – An item includes both positive and negative sentiment, and therefore the resulting overall tone and perception of the reader is balanced.
- Some approaches distinguish “journalistic” or “factual” coverage from “editorial” where the story takes a position or expresses an opinion through third-party citations, for example. In these cases, a mention may be both negative and factual, representing both the nature of the reporting (factual) as well as the audience’s perception after consuming the content (negative e.g. revenue decrease).
- You must define for what or whom you want to determine sentiment. You may be looking to understand tone regarding an industry or sector, or sentiment around a specific product or service, an individual, or an organization. A single article could mention all of these; therefore, it is necessary to define specifically what element(s) you are targeting for sentiment.
- You must define from whose perspective you are judging the sentiment. It could be the point of view of the general public; a specific stakeholder group such as investors, physicians, teachers, parents; etc.
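The “manifest analysis” approach described above (judge each sentence, then sum the positives and negatives) can be sketched in a few lines. This is an illustrative tally only; the codes and category mapping are hypothetical, and real systems define their scales in far more detail:

```python
# Hypothetical sentence-level codes from an analyst: +1 positive, 0 neutral, -1 negative
sentence_codes = [1, 0, 0, -1, 1, 1, 0]

def manifest_tone(codes):
    """Manifest analysis: tally sentence judgments, then map the result to a
    positive / negative / balanced / neutral category for the whole item."""
    pos = sum(c > 0 for c in codes)
    neg = sum(c < 0 for c in codes)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    # equal counts: balanced if sentiment is present, neutral if none at all
    return "balanced" if pos else "neutral"

tone = manifest_tone(sentence_codes)  # three positives vs one negative -> "positive"
```

Whatever mapping is chosen, the standard’s point stands: define it up front, with examples, and apply it consistently.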
Standard #4 – Options for Assessing Quality Measures
Measurement should not be purely quantitative, relying only on volume or impressions. Some measure of quality should be included when analyzing each item. We recommend using at least ONE of the following quality measures:
- Visuals – percent of items including a photo, chart, or logo, which makes the item more prominent for the reader
- Placement – percent of items with preferred placement, i.e., front page, first page of a section, website landing page, etc.
- Prominence – percent of items where your organization/program is mentioned in the headline, first paragraph, prominent side-bar, etc.
- Message Penetration – percent of items which include one or more key messages
- Track the number of pick-ups for each message
- Track the number of articles that have one message, two messages, etc.
Option: Message integrity – you may want to further analyze message pick up as full, partial, amplified, or incorrect/negative.
- Spokesperson – percent of items including a quote from your spokesperson(s)
- Third party – percent of items including quotes from third parties endorsing your organization or program
- Shared/sole mention (also referred to as dominance) – percent of items where your organization/program is the dominant subject of the item vs. mentions shared with competitors in the same space or a mere passing mention
Quality measures can be scored to allow comparisons among those being tracked. If some qualitative factors are more important than others, weighted values could be assigned to reflect this.
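The scoring and weighting idea in the last point can be sketched as follows. The measures mirror the list above, but the per-item flags and the weights themselves are entirely hypothetical; each organization would set its own:

```python
# Hypothetical flags for one item (1 = the quality measure is present)
item = {"visuals": 1, "placement": 0, "prominence": 1, "message": 1,
        "spokesperson": 0, "third_party": 0, "sole_mention": 1}

# Illustrative weights: this program values prominence and messaging most
weights = {"visuals": 1, "placement": 2, "prominence": 3, "message": 3,
           "spokesperson": 2, "third_party": 2, "sole_mention": 2}

def quality_score(item, weights):
    """Weighted quality score, normalized by the maximum possible score
    so items can be compared on a 0-to-1 index."""
    raw = sum(weights[k] * item[k] for k in weights)
    return raw / sum(weights.values())

score = quality_score(item, weights)  # (1 + 3 + 3 + 2) / 15
```

Normalizing by the maximum possible score keeps items comparable even if the set of measures or weights changes between programs, provided each program is scored consistently.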
Standard #5 – Advertising Value Equivalency (AVEs) Should Not be Used as a Measure of Media
AVE advocates contend that the value of a placement is equal to the cost of purchasing an equivalent amount of time or space and that a news story of a particular size or length has equal impact to an advertisement of the same size. At this time, there is no known factual basis for this assumption, as no research exists to confirm whether it is true.[iii] The Barcelona Principles, established by a consortium of PR industry trade groups in 2010, state that Advertising Value Equivalents (AVEs) do not measure the “value” or “return on investment” of public relations and do not inform future activity; they measure only the cost of media space (and even the “cost” of an advertisement may not deliver value or ROI for the advertiser). As such, AVEs are rejected as an evaluation concept for public relations.
To ensure that these, and other aspects of your analysis, are understood and consistently executed by analysts, several steps should be taken based on the rules of transparency and consistency:
- Document the methodology clearly in written instructions. Include details about proposed sources and search terms, systems for calculating impressions, qualifying “hits”, assessing sentiment, key messages, target media, key geographic markets, opinion leaders, and any other relevant instructions.
- Brief the analysts, together if possible. Test articles representing each level of sentiment and a variety of topics/key messages should be coded and reviewed by all. Ensure that these items are agreed upon with the decision-makers/clients before the real coding begins, though some tweaking may occur as the project evolves.
- To ensure inter-coder reliability on an ongoing project, it is best to use the same coders, methodology, and software (or, if outsourced, the same provider) on a regular basis.
- Encourage communication and dialog, so that questions can be discussed and resolved as they arise. It is recommended that a percentage of coded articles be quality-checked by a project manager.
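A simple first check of inter-coder reliability is percent agreement: the share of test articles that two analysts coded identically. The sketch below uses invented codes for ten hypothetical test articles; more rigorous, chance-corrected statistics such as Cohen's kappa exist but are not specified by this document:

```python
# Hypothetical tone codes from two analysts for the same ten test articles
coder_a = ["pos", "neu", "neg", "pos", "neu", "pos", "neg", "neu", "pos", "neg"]
coder_b = ["pos", "neu", "neg", "neu", "neu", "pos", "neg", "neu", "pos", "pos"]

def percent_agreement(a, b):
    """Fraction of items the two coders judged identically."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

agreement = percent_agreement(coder_a, coder_b)  # 8 of 10 articles match
```

A low agreement score on the test set signals that the coding instructions need clarification before real coding begins.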
[i] Stacks, D. (2006). Dictionary of Public Relations Measurement and Research. Institute for Public Relations. Revised January 1, 2006. http://www.instituteforpr.org/topics/dictionary-measurement-research/
[ii] Bartholomew, D. and Weiner, M. (2006). Dispelling the Myth of PR Multipliers and Other Inflationary Audience Measures. Institute for Public Relations. http://www.instituteforpr.org/wp-content/uploads/Dispelling_Myth_of_PR_Multiplier.pdf
[iii] Jeffries-Fox, B. (2003). Advertising Value Equivalency. Institute for Public Relations. http://www.instituteforpr.org/topics/advertising-value-equivalency/