
Net Talk • Perceptual Research (Music Montage): A Second Opinion


Posted: Mon Jun 27, 2011 8:32 am
by rogerwimmer
Doctor: In medicine, when dealing with something serious, it's often good to seek a second opinion, right? In case that's true in research as well, I wanted to run this by an expert like you. (Kissing up is how I roll, sorry.)

We're doing a perceptual study for our market to find out who the most likely audience is for our format. This will involve playing a hook montage that best represents our format. If the respondent rates the montage a 4 or a 5 (on a 5-point scale, where 5 is "most favorable" and 1 is "least favorable"), we continue to dive deeper into the survey with them.

If the respondent rates 3 or lower, we move on to some other things (not involving any more montages) and quickly wrap up.
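The two-branch screener described above is simple enough to sketch in a few lines of Python. This is only an illustration of the routing rule; the function name and return labels are hypothetical, not part of any actual survey software.

```python
def route_respondent(montage_rating: int) -> str:
    """Route a respondent based on their 1-to-5 montage rating."""
    # As described above: 4s and 5s continue into the full survey;
    # 3s and below get a short wrap-up with no further montages.
    if montage_rating >= 4:
        return "full_survey"
    return "short_wrapup"
```

The question in the rest of this post is whether that `>= 4` cutoff is drawn in the right place.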

I asked our researcher what difference it would make to include those who gave our montage a "3." (I understand why 1s and 2s might be a waste, since they're disinclined toward us from the start.) My thinking is: if someone rates the format a 3, they don't HATE us, and as we dive deeper into the survey, their additional answers may shed some light on how to optimize our format so that they could perhaps score it a 4 or a 5 in the future.

Our researcher contends that anything lower than "5" won't provide a good foundation to draw conclusions. Even a "4" is sketchy in this person's mind, since existing passion and familiarity are waning at anything under a 5.

He makes a good point. I thought mine was decent too. What's your take? - Anonymous

Anon: Your question involves two items, not just one. Let's first deal with the "5" rating on the montage.

Like most things in life, there are a variety of approaches in research to collect answers from respondents. Therefore, there is nothing wrong with your researcher's idea to use only the respondents who rate the montage as a "5," but I don't use this approach. Why? Because I think using only the respondents who give the montage the highest rating is restrictive for several reasons, including, but not limited to:

1. Some people are "tough" raters and rarely rate anything with the highest number even though the item rated may be their favorite, so it's possible that some people (I can't tell you how many) who consider the music their favorite may rate the montage as a "3" or "4."

2. The theory behind using only the highest raters is that these people are, supposedly, P1s (fans of the radio station; people who listen most often to a radio station or the type of music represented in the montage), but this may not be true because of the differences in how people rate anything, including music montages.

3. Research that involves humans needs to be flexible because of the wide variety of perceptions people have about anything.

Sure, it's easy and "clean" to select only those who rate the montage as a "5," but my experience in radio research during the past 30+ years indicates that it's best not to guess at how respondents will answer any question. With that in mind, there should be some flexibility in what you do and don't accept as usable.

Now, because I don't like to guess at what respondents will or will not do, believe, or perceive, in your study, I think it would be wise to include respondents who rate the montage a "4" or "5," but also accept 20 or so who rate the montage as a "3" to see how these people differ from those who rate the montage as a 4 or 5. You can do this by creating banner points for each of these ratings. The banner points will allow you to compare each group to the others to see if there are any differences. My guess is that you will find some very interesting information about the three groups of people you can use in programming your radio station.
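A banner point is, in effect, a column in a cross-tabulation: one column per rating group, so every follow-up answer can be compared across the 3s, 4s, and 5s. Here is a minimal sketch of that comparison with entirely made-up data; the ratings, answers, and counts are hypothetical and exist only to show the mechanics.

```python
from collections import Counter, defaultdict

# Made-up (montage_rating, follow-up answer) pairs; in a real study
# each rating group becomes a banner point you can compare across
# every question in the survey.
responses = [
    (5, "listens daily"), (5, "listens daily"), (5, "listens weekly"),
    (4, "listens daily"), (4, "listens weekly"),
    (3, "listens weekly"), (3, "rarely listens"),
]

# Tally answers separately for each rating group.
banner = defaultdict(Counter)
for rating, answer in responses:
    banner[rating][answer] += 1

# Compare the groups side by side.
for rating in sorted(banner, reverse=True):
    total = sum(banner[rating].values())
    daily = banner[rating]["listens daily"]
    print(f'rated "{rating}": {daily}/{total} listen daily')
```

Any real tabulation package does the same thing at scale; the point is simply that keeping the "3" group as its own column lets you see whether those respondents actually behave differently.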

As I said, there is nothing wrong with only allowing respondents who rate the montage as a "5," but I tend to use a conservative approach in all research projects and don't assume that all people will perceive any rating scale in the same way. I would rather err on the side of conservatism than possibly artificially eliminate qualified respondents.

Now, on to the second item . . .

I know that many researchers use music montages to qualify respondents for research studies, but music montages have problems.

When I used music montages a few decades ago to qualify respondents, I noticed that the incidence was very low. Incidence is the percentage of people contacted who actually qualify for a study. I asked the interviewers to call back several respondents who rated the "target" montage lower than what was required to qualify for the study. What we found was that a large percentage of the people (I think it was about 30%) who didn't rate the montage high enough to qualify for the study were actually P1s of the client radio station.
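Incidence, as defined above, is a simple ratio. A quick sketch with hypothetical numbers (the 1,000 and 120 below are invented for illustration, not figures from any actual study):

```python
def incidence_pct(contacted: int, qualified: int) -> float:
    """Incidence: percentage of contacted people who qualify for the study."""
    return 100.0 * qualified / contacted

# Hypothetical numbers: contact 1,000 people and only 120 qualify.
print(incidence_pct(1000, 120))  # 12.0
```

If, as in the callbacks described above, a meaningful share of the non-qualifiers turn out to be P1s, a low incidence isn't just a cost problem; it means the screen itself is throwing away exactly the people the study is about.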

What? How can that be? The montage included four or five songs selected by the PD as representative of the radio station's playlist. We thought, which seemed logical, that a representative sample of songs a radio station plays should be "recognized" or "identified" as representing the music the client radio station plays, but in reality, it doesn't work that way.

The reason it doesn't work that way is that the montage may include one or more songs the respondents: (1) Don't like; or (2) Don't recognize. These were comments from the respondents in follow-up interviews. In addition, we found that many of the radio station's P1s didn't identify the client's "target montage" as representative of their favorite radio station. Over the years, I found that as high as 65% of a radio station's P1s don't rate a "target" montage as highly as we would expect. That's a lot of people to artificially eliminate from a research study.

Frustrating, yes, but that's the way it is. While PDs, consultants, or anyone else can develop a music montage that supposedly represents a radio station, the listeners (only the most important part of the equation) may not agree. So what do you do?

1. Because music likes and dislikes are so volatile, I found the best approach is to avoid using music hooks (montages) to qualify respondents. Instead, I use artists, and ask a question like, "I would like you to rate different types of music by reading short lists of music artists. For each group, please tell me how much you like the music by the artists by using a scale of 1 to 10, where the higher the number, the more you like the music as represented by the artists. Please rate the music as a whole, not a specific artist." Then the respondents are asked, "How much do you like music by artists such as . . . (three or four artists are read)."

2. What this approach does is eliminate the volatility of individual songs. Respondents may "love" a certain artist, but they may not "love" all of the songs the artist performs. If one of the "hated" songs is included in a music montage, the montage as a whole may be rated lower than expected. This doesn't happen when artists are included.

3. I always include at least two groups of artists the client radio station plays, and sometimes three groups. I do this to reduce the possibility of making an error in the artist lists.

4. I always ask the respondents which radio stations they choose to listen to during a typical week.

5. Respondents can then qualify for a study in two ways: (1) By rating one of the artist groups with a certain rating (usually 8, 9, or 10 since I always use 1-10 scales); or (2) Naming the client radio station in the radio listening question.
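The two-path qualification rule in point 5 can be sketched in a few lines. The call letters, threshold constant, and function name below are hypothetical, chosen only to make the either/or logic concrete.

```python
# Hypothetical values for illustration.
CLIENT_STATION = "KXYZ"
MIN_ARTIST_RATING = 8  # on the 1-to-10 scale described above

def qualifies(artist_group_ratings: list[int], stations_named: list[str]) -> bool:
    """A respondent qualifies by rating any client artist group 8 or
    higher, OR by naming the client station in the weekly-listening
    question -- either path alone is enough."""
    rates_high = any(r >= MIN_ARTIST_RATING for r in artist_group_ratings)
    names_client = CLIENT_STATION in stations_named
    return rates_high or names_client
```

Note how the second path catches the P1 who is a "tough" rater: even if every artist-group rating comes in at 7 or below, naming the station still qualifies them.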

In summary, you can use the music montage approach, but I would suggest being more flexible in who you allow into the study. However, I suggest the music artist approach over the music montage approach.

Does that help?

(Want to comment on this question? Click on the POSTREPLY button under the question.)