(PLEASE NOTE: The following article is reprinted from the Nov 2004 issue of The Measurement Standard. It is reposted for its historical value.)
Ask The Experts
Human vs. Computer Content Analysis
Five measurement experts tell us what works and what doesn't.
"Good analysis is not about man vs. machine; good analysis requires man and machine. Machines elevate man's ability to do intelligent analysis."
--You Mon Tsang
Here at The Measurement Standard, we've had numerous inquiries about when it is appropriate to use manual media content analysis and when to use automated content analysis. Which parameters or criteria don't work with automated analysis, and which work better? At what point is it more cost-effective to go manual? What are the limitations of each?
We asked the experts...
From: Don Stacks,
Professor, University of Miami,
and author of A Primer of Public Relations Research
All content analyses should be done both ways. Even the best computerized CA programs need to be double-checked. I suggest that a random selection of messages be coded by hand -- by at least two coders -- and that Scott's pi intercoder reliability be computed and then compared to whatever reliability figure the computer program may provide (typically none is given). A careful comparison of results should provide some indication of reliability.
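For readers who want to run Stacks's check themselves, here is a minimal sketch of the Scott's pi computation in Python. The coder labels are hypothetical; the statistic compares observed agreement against the agreement expected by chance from the pooled category distribution of both coders.

```python
from collections import Counter

def scotts_pi(coder_a, coder_b):
    """Scott's pi intercoder reliability for two coders' category labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: sum of squared pooled category proportions.
    pooled = Counter(coder_a) + Counter(coder_b)
    p_e = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders rate ten clips pos/neu/neg.
a = ["pos", "neg", "neu", "pos", "pos", "neg", "neu", "pos", "neg", "neu"]
b = ["pos", "neg", "neu", "pos", "neu", "neg", "neu", "pos", "pos", "neu"]
print(round(scotts_pi(a, b), 3))  # 0.695 -- agreement well above chance
```

This hand-coded figure can then be set beside whatever the automated tool reports, as Stacks suggests.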
From: Jim Macnamara,
CEO of MASS Communications Group
Download Jim Macnamara's paper on Humans vs. Machines.
From Jim Macnamara's Content Analysis paper:
"...Neuendorf (2002) says that, "The notion of the completely automatic content analysis via computer is a chimera … The human contribution to content analysis is still paramount."
Most content analysts agree with this viewpoint based on professional experience. They point to four failings of computerized content analysis:
- Computerized coding of texts makes very arbitrary associations between words and phrases. While neurolinguistic software programming and Artificial Intelligence (AI) systems, in which computers are purported to 'learn' to interpret the way humans do, are developing, such programs remain unreliable for subtle, nuanced, and idiomatic interpretation, and their analysis is simplistic. Also, the underlying methodology of some automated systems is rudimentary and, in some cases, invalid. Some have been developed by IT specialists rather than researchers.
- Computer coding of media content results in what Neuendorf terms "black box measurement." This is inconsistent with the scientific method of research which requires that full information is disclosed on how results are obtained. Also, it limits replicability as other researchers cannot compare data and findings.
- When content analysis is conducted across multiple languages, such as for global or non-western media studies, the problems of machine coding become even more marked. Most automated coding systems work with English language text only and computer translations are unreliable except for the most basic applications.
- Automated retrieval of media articles online often substantially limits and invalidates samples, particularly in global studies. While a large number of North American media are available online, in markets such as Asia only a small percentage of media are online."
Read the whole paper here.
From: Bruce Jeffries-Fox,
Executive Vice President at Fox Associates
The more complex or abstract the concept, the more difficult it is to find. I always use the example of "innovation." It can manifest in an infinite number of ways, and, by definition, you can never capture them all by looking backwards!
Automated analysis also doesn't work well for things that are left out. Say you're trying to position yourself or one of your offerings in a particular way, and you get coverage, but it leaves out your key messages. Theoretically, there ought to be an automated way to detect this, but I've never seen it executed accurately.
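To make the difficulty concrete, here is a hypothetical sketch of the naive keyword-matching approach such a tool might take (the message names and keyword lists are invented for illustration). It also shows why the approach is brittle: a message conveyed in a paraphrase that avoids the listed keywords gets wrongly flagged as absent.

```python
# Hypothetical key messages and the keywords assumed to signal them.
KEY_MESSAGES = {
    "innovation": ["innovative", "breakthrough", "first-of-its-kind"],
    "reliability": ["reliable", "dependable", "uptime"],
}

def missing_messages(article_text):
    """Return the key messages with no matching keyword in the article."""
    text = article_text.lower()
    return [msg for msg, keywords in KEY_MESSAGES.items()
            if not any(kw in text for kw in keywords)]

print(missing_messages("Acme's breakthrough widget ships next month."))
# ['reliability'] -- but an article paraphrasing the innovation claim
# without these exact keywords would be flagged as missing it, too
```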
Endorsements: I've never seen a good automated approach for determining whether there's an endorsement in a story.
Favorability: Even for something as fundamental as favorability, I have yet to find a program that handles it as well as humans do.
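As an illustration of why, here is a minimal sketch of the lexicon-counting approach typical of automated tools of that era (the word lists are hypothetical). Counting positive and negative words misses the context a human reads effortlessly:

```python
# Hypothetical sentiment lexicons for a lexicon-counting favorability score.
POSITIVE = {"award", "growth", "praised", "leading"}
NEGATIVE = {"lawsuit", "decline", "criticized", "recall"}

def naive_favorability(text):
    """Positive word count minus negative word count."""
    words = text.lower().replace(".", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(naive_favorability("Analysts praised the recall response."))
# 0 -- the scorer cannot tell a well-handled recall from a damaging one
```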
Translation: All the automated translation programs I've tried over the years (even recently) have been dreadful.
From: Gaugarin Oliver,
Vice President of Business Development, Cymfony
Automated analysis is appropriate when:
- The company needs more than just reporting: ad hoc querying and analysis, collaboration among multiple parties (global teams involving agencies, etc.)
- A unified backplane is needed to integrate content and provide access to it
- Faster analysis is needed during crises and campaigns
- Volume is high
- There are numerous parameters (many competitors, products, regions, divisions, spokespeople, etc.)
- The program's objectives include counting as many clips as possible as part of its metrics
- Parameters change often (new products, competitors)
Manual analysis is appropriate when:
- Volume is very low
- Analysis must be based on images (e.g., product placement)
- A starter package is needed to demonstrate and communicate the value of measurement
Automated analysis cannot reliably handle:
- Tonality
- Expert guidance based on inference
- Non-English translation
Automated analysis works better if:
- There is a measurement process (or the company spends time developing one before launching the tool)
- The process takes advantage of the tool's availability
- Training is provided enterprise-wide
- Managers use the tool to get their reports (this way, others will make sure the tool is up to date and working well)
- The tool is fully integrated (instead of going to different tools for different pieces of the measurement program)
From: You Mon Tsang,
CEO, Biz360
Some thoughts:
Good analysis is not about man vs. machine; good analysis requires man and machine. Machines elevate man's ability to do intelligent analysis. As machine analysis continues to grow more sophisticated, man's ability to creatively use the information to improve strategy and execution also increases.
While we at Biz360 are working hard to automate sophisticated analysis (beyond our advanced metrics such as tonality, prominence, and message hits) to bring better, deeper, faster analysis to our clients, we also believe that human interpretation, leading to quick and thoughtful recommendations for action, is integral to the process.
Understanding this relationship between man and machine, we have trained thousands of PR practitioners (corporate and agency) to add a human eye to the data, and we maintain a strong network of service partners to deliver qualitative analysis.