Monday, September 24, 2007

Station Review: Comcast Likes Speed

Well, Comcast's promotional station, which is shockingly named Comcast Radio* (click to play), has, as usual, not been developed at all. Nevertheless, I must give it a cautious vote of approval for matching Guns N' Roses and AC/DC with Daft Punk and other techno bands. The station is based on four bands and five songs, but it's an interesting mixture, and better than the usual bland and homogeneous selection of music.

Station Review: MTV Commemorates the 2007 VMA's with a Station that Contains No Britney Whatsoever

The MTV VMAs Radio* station (click to play) does contain 16 other artists who are not worth listing here. Just imagine a list of every pop artist that's appeared in every other promotional pop station in the last twelve months. No song seeds or thumbs of any kind. A slow-jam rap is currently boring me to tears. Lots of down-tempo songs about teh sexy. Bleargh.

Station Review: Southern Comfort Fails To Be Hip By Calling Itself "soco"

But they did pay for two Pandora stations, and they also posted a cringe-worthy version of every alcohol ad you've ever seen on their profile page. (I swear I thought it was an anti-drinking ad parody until I saw the closing panel and found out what "soco" was.)

First up is a mostly harmless jam-band station: Jammin' Radio* (click to play). The Dead, Phish, DMB and Widespread Panic form the basis, with two additional song seeds. Ho hum. Two thumbs up and two thumbs down. There are probably better jam stations out there.

The second station attempts to take you back to the heady days of 2006 with Dance With A Drink In Your Hand Radio* (click to play). Timberlake, Akon, Lily Allen, and Rihanna (and a Pink track) do the obligatory pop thang. One thumb up and two down. This station might be useful for some future ethnomusicologist writing an equally boring paper about early 21st-century pop.

Station Review: Bud Select Introduces Two New Stations in August

As far as I know, Bud Select is the first company to pay for a second set of stations. They had an earlier campaign in February of this year which included five stations which, I believe, I reviewed at the time. This campaign features two stations.

The first is Step Up To Select Radio* (click to play). This is yet another alt-rock station based on the Yeah Yeah Yeahs, four other bands and four other songs. It exhibits minimal development, with one thumb up and no thumbs down. The couple of songs I've listened to have been synth-heavy whinefests without much interest.

The second is 99 Radio* (click to play), and it is equally insipid. It's a techno-y R&B station based on M.I.A., Kanye West, Foxy Brown and six songs. No development whatsoever. I'd recommend Z Huge Rap instead, which I found by doing a quick station search on the artists.

September Listening Test Results

It looks like I was a bit hasty in announcing an improvement to the Pandora selection algorithm. My mean satisfaction rating was identical to July's at 7.41, and, in fact, the median dropped from 7.5 to 7.0. Grrrl Power did do exceptionally well this month and topped the charts, but no other station followed suit.

I think advertisers were holding off from investing more in Pandora during the Internet Radio Crisis, because I was not seeing many promotional stations this summer. Any reluctance on the advertisers' part seems to be over now: while doing this month's listening tests I logged eleven new campaigns, most with multiple stations. Thus, you can anticipate many short Station Reviews over the next week or so as I work through the backlog.

Thursday, September 13, 2007

Station Building 101: Listening Test Mark III

In the previous post, I described the second version of the listening test, which I had been doing roughly monthly for the past six months. In the original listening test I listened to only ten songs from each station; then we learned that Pandora was generating songs in sets of 3 or 4, so I moved to listening to ten sets of songs (30 to 40 songs total).

As I performed the September listening tests, I became slowly convinced that the player was occasionally generating two-song sets. Finally, while testing Pagan Pride (which has an eclectic mix of genres) I came across a case where two folk songs were jammed between a hard rock set and an electronica set.

I e-mailed Tom Conrad, and he confirmed that the selection algorithm is now occasionally generating 2-song sets. I've been seeing less value in trying to identify the sets anyway, so I've decided to switch to listening to 40 songs from each station and taking ten times the average song score (1 for a thumb up, 0 for a thumb down, and 0.5 for neither) as the score for the station.
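The new scoring is simple arithmetic; here's a quick sketch in Python (my own illustration with made-up helper names, nothing from Pandora itself):

```python
def station_score(thumbs):
    """Score a station from the thumbs given over one listening test.

    thumbs: one entry per song heard -- "up", "down", or None.
    Each song counts 1 for a thumb up, 0 for a thumb down, and
    0.5 for neither; the station score is ten times the average.
    """
    values = {"up": 1.0, "down": 0.0, None: 0.5}
    scores = [values[t] for t in thumbs]
    return 10 * sum(scores) / len(scores)

# A 40-song test: 25 thumbs up, 5 down, 10 with no thumb
print(station_score(["up"] * 25 + ["down"] * 5 + [None] * 10))  # → 7.5
```

One nice property of this version is that there's no set bookkeeping at all: only the per-song thumbs matter.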

I'm less than halfway through this month's tests, but it appears that the selection algorithm has once again been improved. It would not surprise me if the average station score is up an additional 0.5 since July's increase of a full point. My Grrrlpower station scored an astonishing 9.5, with no thumbs-down and only four tracks without a thumb-up in 34 songs.

Since my satisfaction has been increasing so consistently, I began to wonder what we might be losing in these changes. It seems to me that the stations might not be exploring as much new material, so I've started roughly tracking the number of new tracks being played. I can't do it exactly since, clearly, I cannot remember every track that has ever played on each station. Nevertheless, a track that already has a thumb-up has clearly been played before, and I'm pretty certain that if I'm motivated to give a track a thumb up or a thumb down, then it's most likely new, since I would have had the same motivation the first time I heard it. So I've begun to track the percentage of tracks played that are new, and the percentage of new tracks that get a thumbs-up. So far, over five stations, the % New has run from 16% to 39%, and the % Good|New (the % of Good tracks given that a track is New) is all over the map, from 23% on Pagan Pride to 83% on O, Wow the Moon.
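For what it's worth, the two percentages boil down to a pair of ratios. A sketch of the bookkeeping (my own hypothetical helper, not anything official):

```python
def new_track_stats(tracks):
    """tracks: one (is_new, thumbed_up) pair of booleans per track played.

    Returns (% New, % Good|New): the share of plays that were new tracks,
    and the share of those new tracks that earned a thumb up.
    """
    new = [thumbed for is_new, thumbed in tracks if is_new]
    pct_new = 100.0 * len(new) / len(tracks)
    pct_good_given_new = 100.0 * sum(new) / len(new)
    return pct_new, pct_good_given_new

# 10 plays: 4 new tracks, 1 of which got a thumb up
plays = [(True, True)] + [(True, False)] * 3 + [(False, False)] * 6
print(new_track_stats(plays))  # → (40.0, 25.0)
```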

Friday, September 7, 2007

Station Building 101: Listening Test

The best way to develop a station is to spend time listening to it with intention and focus. I've developed a methodology that allows me to track the development of a station over time and spend some uninterrupted time with each station that I'm actively curating. I came up with this procedure about a year after I started listening, and it's obviously more work than many might be interested in doing. However, I've found it to be quite fun and helpful in understanding what works well in Pandora and defining what I want from particular stations.

The basic idea of my listening test is to listen to ten sets of songs generated by a station and score each song: 1 for a thumbs up, 0 for a thumbs down and 0.5 for neither. Please note that it is important when doing a listening test not to click a thumbs-down until after the song is over, since the player will start an entirely new set at that point and you could miss any remaining songs in the set.

The first time you click on a station after starting up the player seems to start a new set (so you generally do not have to worry about a partial set at the beginning). To help me identify whether a set is three or four songs, I click on the song page for each song and copy and paste the focus traits into a column of a spreadsheet. I then shade the traits which are common between those songs I believe to be in the same set. On particularly homogeneous stations this step can be hard, and you may need to change your mind and readjust your assessments occasionally. I often wait on the fourth song of a set to see if it's more similar to the next song.

Once I've identified the sets and scored each song, I calculate a score for each set as the average score for the songs in the set. For instance, on a three-song set with a thumb up, a thumb down and a neither, I'd score that set as 0.5 ( = (1 + 0 + 0.5)/3). The final score for the station is the sum of the set scores, giving a number from 0 to 10. A good score is anything above a 5. An 8 or higher is a great score, and was very rare prior to the change in the selection algorithm this summer. Now you can get at least some of your stations into that range with diligent development.
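The set-based arithmetic above can be sketched in a few lines (again, my own illustration, with hypothetical function names):

```python
def set_score(song_scores):
    """Average of the song scores (1 up, 0 down, 0.5 neither) in one set."""
    return sum(song_scores) / len(song_scores)

def station_score(sets):
    """Sum of the set scores over a ten-set session, giving 0 to 10."""
    return sum(set_score(s) for s in sets)

# The three-song example from the text: one up, one down, one neither
print(set_score([1, 0, 0.5]))  # → 0.5
```

Note that because each set contributes its average rather than its total, a session of ten sets tops out at exactly 10 regardless of whether the sets hold three or four songs.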

The scores tend to be pretty volatile, but that is largely because a session of ten sets is not statistically long enough for the average song score to settle down. Nevertheless, a ten-set session is about as long as can be done comfortably at a single sitting, and it is long enough to give you some read on the quality of the station.

Monday, September 3, 2007

Stations Story: Loreena and Friends

This station is the oldest station that's still part of my Quickmix. It is what Tom Conrad calls "a simple station", that is, one based on a single artist or song seed.

I was first introduced to the music of Loreena McKennitt by a Canadian documentary called The Burning Times, the second of a three-part series called "Women and Spirituality". An instrumental track called "Tango to Evora" featured in a dance sequence in the film. (How many documentaries not about dance actually have dance sequences anyway?) The piece is striking, and had me going through the credits in slow motion to discover who had created it. The track was on her recently released album, "The Visit". She became a favorite artist for both my wife and me, and I quickly caught up with her back catalog.

This station introduced me to Luka Bloom and Mary Black, and led me to purchase a CD from each of them. The station, of course, brings in a lot of Enya, but since I used Enya as a seed for my more general New Age station, "O, Wow, the Moon," I do not give thumbs up to her tracks on this station. This station also has a fair amount of overlap with the Celtic station, "Tir Na Nog," and the "Womenfolk" station.