Unhealthy Music Test Scores

Ask the Research Doctor a question.

Moderators: shawnski, jdenver, rogerwimmer

Forum rules
How do you ask the Research Doctor a question?

Just click on "New Topic" below and then type your question to me. Please put in a Subject and then ask your question in the body.

Your question is submitted anonymously unless you include your name.

Click "Submit" and then check back to find Dr. Wimmer's answer to your question.

If you wish to have a private answer to a question (not posted in the forum), please include your email address. Dr. Wimmer will send your answer directly to you.

Dr. Wimmer maintains an Archive of questions from the forum. Go to http://www.rogerwimmer.com and click on "The Research Doctor Archive" link.

Unhealthy Music Test Scores

Post by rogerwimmer » Thu Jul 15, 2010 2:34 am

Doctor: I am enjoying my time in your fine state of Colorado. I just moved here last month and I can see why you stay put as a resident.

At this new station I have joined, I have been going over our "Internet Callout" scores from the past couple of years. Before I get to my question, please allow me to state that I know your reservations about relying on these data. Trust me, I don't. I see the information only as one indicator tool out of a few at my disposal. My decisions are made by me, and not by my possibly faulty data.

Sadly, my possibly faulty data is the best I have at the moment (though I am petitioning our GM to spend . . . wish me luck). Something I have noticed so far: what counts as a "good score" here is quite a bit lower than anywhere I have ever worked. We use the typical 1-to-5 scale, with 5 meaning "Gold" and 1 meaning "Gulf of Mexico" water. At every station I have programmed, in multiple formats, I am used to: (a) the top scores in our target demo landing somewhere in the 4.0 to 4.5 range; and (b) seeing maybe 30% of the songs tested land at a 4.0 or better.

But here at this new gig, in our target demo, I'll see maybe one or two songs land at a 4.0 or higher. I can't decide what this means. Could it be that, of the listeners we have attracted in this demo, we are significantly missing the mark on playing what they want? (But then why are they taking these surveys over and over?) Might they simply be the pickiest listeners I have ever programmed for? Was every station I've run before now out of whack, and is this the first splash of truth I've ever experienced?

Possibly related . . . when I look at the young end of this target, or even shift my filters to the next demographic group down from them, the scores resemble what I am used to.

I'm worried that I said all that and you're gonna be like, "Well you should switch to z-scores. Everything else is crap." And that you'll say "it makes no sense to compare audience A to audience F." I agree with that, and I don't view this as a direct comparison. Looking forward to your perspective. - Anonymous


Anon: First, welcome to Colorado. I don't know who you are or where you live and work, but most of the state is a nice place to be. On to your questions . . .

1. I'll start with the comment about z-scores in your last paragraph. You are correct in saying that I will tell you not to make data comparisons (music test scores or any other ratings or scores) from one market to another without converting the data to z-scores. Comparing raw scores/ratings from one situation to another is meaningless because each data set has its own "metric." That is, each sample is unique in how it scores or rates anything. For example, while one sample may be "easy" graders and rate everything highly, another sample may be "tough" graders and rarely give the highest score/rating. One absolute rule in research, a rule without exception, is: data comparisons of any kind require that the data be converted to z-scores. This is true not only for comparing data from one market to another, but also for your analysis of the ratings from one callout to another, or from one age cell to another. An analysis of your Internet Callout scores is meaningless unless you convert the data to z-scores. (For anyone who needs more information about z-scores, there are many questions/answers in the Research Doctor Archive on this page.)
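For readers who want to see the conversion concretely, here is a minimal sketch in Python. The scores and sample labels below are invented for illustration, not taken from the questioner's data; the point is only to show how two samples with different grading "metrics" end up on a common scale after conversion.

```python
# Hypothetical callout scores: an "easy-grading" sample and a
# "tough-grading" sample rate the same five songs on the 1-to-5 scale.
# Raw averages differ wildly, but z-scores put both samples on a common
# metric (mean 0, standard deviation 1), so relative standing is comparable.

def z_scores(scores):
    """Convert raw scores to z-scores: (x - mean) / standard deviation."""
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((x - mean) ** 2 for x in scores) / n) ** 0.5  # population SD
    return [(x - mean) / sd for x in scores]

easy_graders = [4.6, 4.4, 4.1, 3.9, 4.5]   # hypothetical raw scores
tough_graders = [3.4, 3.1, 2.8, 3.6, 3.0]  # same songs, tougher sample

# On raw scores, every song in the "easy" sample beats every song in the
# "tough" sample. After conversion, each song's standing *within its own
# sample* can be compared fairly across the two samples.
print([round(z, 2) for z in z_scores(easy_graders)])
print([round(z, 2) for z in z_scores(tough_graders)])
```

The same conversion applies to comparing one callout wave to another, or one age cell to another, within a single station's data.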

2. Two pieces of information you didn't include are: (1) How many songs do you test in a callout? and (2) Which songs do you test? Are they songs you're currently playing or songs that you are thinking about playing? These two points are very important because, at face value (looking only at the data you have, without any background information), something "don't be right." It might be that you are testing too many songs. It might be that your instructions for your scale aren't clear. If you are testing songs you're currently playing, and only the lower end of your demo rates the songs highly, then at face value, the older people in your demo don't like the songs you're playing. If that's true, and I need a lot more information to verify it, then you have a problem.

3. With that said, there are several items that need to be addressed: (a) How are respondents recruited? (b) What screening questions do you use to qualify respondents? (c) What procedures do you use to verify that the respondents are legitimate?

You said that many people complete your Internet Callout "over and over." Who are these people? Are they legitimate respondents? Are they people just messing around on the Internet? Are they people from your competing radio stations trying to mess up your results? You need to know all those things.

In addition, have the screening requirements changed at all since your station's first test? If the screening requirements have changed in any way, you won't be able to analyze a history of your information unless you convert to z-scores.

4. As you probably already know, I'm not a big fan of collecting research information via the Internet. However, if that is the only data collection option available, there are a few tests that can be conducted to determine if the sample is legitimate (or at least, somewhat legitimate). Your approach to looking at the data only as an indication of reality is good. In addition, as I am sure you do, you should look at the results from several surveys before considering making a decision about a specific song. One test of a song means nothing.

5. If you verify that your sample and measurement instrument are OK, and you still don't get the types of high scores that you expect, then I think you will have to adjust your thinking. It may be that your listeners (or the people taking the survey) are "tough" graders.

6. The fact that the upper ages in your demo don't rate the music very highly bothers me. You need to look very closely at every single aspect of your testing procedure. Something isn't right, but I don't have enough information about the sampling procedures, measurement instrument, and other things, to determine what that "something" is. Look at everything.

However, the lower scores by the upper age listeners may be 100% true. If that's the case, you have a significant problem because, for these people, your radio station is their choice for mediocre music. In other words, the music on your radio station is severely limiting its potential.

7. One final thing: I hope your GM understands that the music on your radio station IS the product. If you don't test the product, how do you know what the listeners want? By not giving you money to test your product, the GM is essentially giving you a shovel to help dig the grave.


(Want to comment on this question? Click on the POSTREPLY button under the question.)
Roger Wimmer is owner of Wimmer Research and senior author of Mass Media Research: An Introduction, 10th Edition.


Re: Unhealthy Music Test Scores

Post by rogerwimmer » Thu Jul 15, 2010 5:33 pm

Doc: Thanks a lot for your answers to this person's questions. I have been wondering about many of the same things myself. I enjoy your column. - Anonymous

Anon: Thank you. I'm glad I answered your questions (even though you didn't ask them) and I'm happy to hear that you enjoy the column.
