Anita Elberse recently wrote a paper in Harvard Business Review on the Long Tail theory and its applications to digital distribution, arguing that while the "tails" of digital distribution sites tend to be long, they also tend to be "flat" (or, more precisely, not "fat": the area under the tail is not as big as might have been expected). Her analysis of Quickflix DVD rentals showed that the top 10% of titles accounted for 48% of sales, and on Rhapsody the top 10% of songs accounted for 78% of plays.
Chris Anderson (who also featured in this blog's previous post), a proponent of the theory, responded in a kind of "I sort of agree, but..." piece, essentially arguing that in absolute numbers, even the titles outside the top 1% played/rented on Quickflix/Rhapsody amount to more than the total sold or rented at a Walmart or a Blockbuster, and hence that the tail is getting both longer and fatter.
While both authors looked at interesting data and raised some interesting points, there's something in both their arguments that makes me cringe: they both behave as if a blockbuster is already a blockbuster before consumers decide to buy it, and hence as if it's possible to segment consumers into those buying blockbusters versus those buying niche products. In reality, it's (almost) completely the other way round: an album or a DVD becomes a blockbuster AFTER (i.e., as a result of) a lot of people buying it. There's also an implicit belief that it's possible to predict which books/DVDs/songs will succeed and which will stay niche.
Let me take these on in more detail by examining some of the recommendations and passages from the articles; at the end, I'll discuss alternative recommendations.
One quote from Anita is particularly illustrative of the direction she assigns to the causality. She writes: "Is most of the business in the long tail being generated by a bunch of iconoclasts determined to march to different drummers? The answer is a definite no. My results show that a large number of customers occasionally select obscure offerings that, given their consumption rank and the average assortment size of off-line retailers, are probably not available in brick-and-mortar stores. Meanwhile, consumers of the most obscure content are also buying the hits. Although they choose products of widely varying popularity, top titles generally form the largest share of their choices. (The wide appeal of these top titles is, of course, what makes them popular in the first place.)" She barely redeems her credibility with that last parenthetical, after getting it wrong for most of the paragraph: it's not at all surprising that more consumers are buying the hits, because that's true BY DEFINITION! The hits are, by construction, the things the most people buy.
Chris writes in his response: "Much of the paper is about consumer satisfaction in the head vs tail. In the Quickflix data, she says, 'customers give lower ratings to obscure titles...it is a myth that obscure books, films and songs are treasured. What consumers buy in Internet channels is much the same as what they have always bought.' That may be true for the specific example of the Australian DVD data, but it is not clear from the paper why she feels able to extrapolate that to all Internet commerce." Anita's assessment is completely off the mark, but not for the reason Chris gives: the reason these titles stayed obscure is precisely that they got low ratings. Had they gotten high ratings, these "obscure" titles would have become hits. The error is in Anita's causal reading, and by nitpicking the extrapolation rather than attacking this core flaw, Chris shows he's missed it too.
Some of the other quotes also relate to what I said earlier about Anita's belief that it's possible to segment the winners and the losers a priori:
- "Making “onesies” and “twosies” profitable may require completely eliminating any associated costs. It is therefore worthwhile to explore creative solutions for the very end of the tail." Hmm... how exactly is this supposed to work? Do you first eliminate the costs for every product, and then give marketing support to the ones that sell more than one or two units? Oops: that might relegate a lot of products that could have sold more than a couple of copies to the ranks of "onesies" and "twosies"! In fact (and I admittedly lack data here), I am sure some of these a priori "losers" would actually have ended up selling millions of copies, and they don't even figure in Anita's analysis of this segment's profitability, since she has already put those winners into another segment.
- "Don’t radically alter blockbuster resource-allocation or product-portfolio management strategies. A few winners will still go a long way—probably even further than before...." Aha, but therein lies the catch: how do you spot the winners ahead of time? Could Anita have predicted the massive success of the Harry Potter series in 1996? The only time she could have predicted it was once it was already a success, and at that point her recommendation is to make the product a loss leader! "The seventh book in the Harry Potter series, introduced by Scholastic at a suggested retail price of $34.99 in the United States, was a blockbuster loss leader: It was sold at sharply reduced prices by Barnes & Noble ($20.99, a 40% discount) and Amazon ($17.99, a 49% discount) in an effort to stimulate other purchases." Brilliant! Exactly the kind of strategic advice a CEO needs: don't spend anything on Harry Potter until it becomes a success (and hence don't make much money either), and then, once it succeeds, make it a loss leader!
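As a quick sanity check on the discounts quoted in that passage (the prices are from Anita's paper; the script below is just arithmetic):

```python
# Check the quoted Harry Potter book 7 discounts against the $34.99 list price.
list_price = 34.99

for retailer, sale_price in [("Barnes & Noble", 20.99), ("Amazon", 17.99)]:
    discount = (1 - sale_price / list_price) * 100
    print(f"{retailer}: ${sale_price} is a {discount:.0f}% discount")
# → Barnes & Noble: $20.99 is a 40% discount
# → Amazon: $17.99 is a 49% discount
```

So at least the arithmetic in the quote checks out; it's the strategy built on top of it that I take issue with.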
Because of these errors in her understanding, I don't think Anita's recommendations add much value. So what should you do instead? While I lack some crucial data, I can still point in some directions:
- Anita notes that 3.6 million of the 3.9 million tracks sold in 2007 sold fewer than 100 copies each, and concludes that this segment doesn't seem very profitable. Therein lies the fallacy: she is looking at it post-facto, after those 3.6 million had already failed. I am sure that several of the other 300,000 tracks that did succeed looked "obscure" BEFORE they succeeded, and the money made by those "successful obscure" tracks would more than cover the money lost on the 3.6 million flops (here I admittedly lack data, but this seems like a reasonable hypothesis to me). Hence, the right strategy is to bet on the entire pool of tracks: even if 3.6 million of the 3.9 million don't sell much, you come out a winner on the pool as a whole. If, on the other hand, you try to segment the obscure from the mainstream a priori, you are likely to fail.
- One clue perhaps lies in a blog post by Anand at Datawocky. Anand correctly points out that the real long tail provided by the digital medium is a long tail of influence: blockbusters are now selected not by a small set of editors and producers but by ever larger groups. How do you leverage that? Can you use these armies of people in ever more creative ways to allocate your marketing dollars, without making flawed a priori assumptions about what's going to succeed (e.g., before you publish a book, have some segment of the community certify it, instead of employing one editor)?
- Dynamically allocate your marketing budget in phases, adapting to success and failure: if something seems to be succeeding, invest more in it, merchandise it on the home page of your website, and so on.
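One way to formalize "adapt to success and failure" is a bandit-style allocator: reserve a small exploration slice of the budget to spend evenly, and steer the rest toward whatever is converting best so far. The sketch below is purely illustrative, not a method from either article; the titles, conversion rates, and budget are all invented.

```python
import random

random.seed(0)

# Hypothetical catalog: title -> true (but unknown to us) conversion rate.
# Titles and rates are invented for illustration.
catalog = {"title_a": 0.02, "title_b": 0.10, "title_c": 0.05}

spend = {t: 0 for t in catalog}
sales = {t: 0 for t in catalog}
epsilon = 0.2  # fraction of the budget reserved for exploration

for dollar in range(10_000):  # allocate a $10,000 budget one dollar at a time
    if dollar < 100 or random.random() < epsilon:
        title = random.choice(list(catalog))  # explore: fund a random title
    else:
        # exploit: fund the title with the best observed conversion rate
        title = max(catalog, key=lambda t: sales[t] / max(spend[t], 1))
    spend[title] += 1
    if random.random() < catalog[title]:  # a sale occurs at the true rate
        sales[title] += 1

print(spend)  # most of the budget should flow to the best performer
```

The point of the sketch: no one had to guess in advance which title was the winner; the budget discovers it, which is exactly what a-priori segmentation forecloses.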
Of course, there are times when you have enough data to improve your a priori judgment. For example, I can say with reasonable confidence that Maroon 5's next album will sell more than 1,000 copies, and hence it might be a better a priori candidate for a heavy marketing budget. You should certainly feel free to make some a priori decisions. All I am arguing is that many hits have become hits because people chose to buy them or rate them highly, not the other way round, and that your decision making should reflect that. Otherwise, you'll end up with false segmentations like Anita's: the 3.6 million tracks that sold fewer than 100 copies versus the other 300,000, with different strategies for the two segments.
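To close, the "bet on the whole pool" argument can be made concrete with a toy calculation. The 3.6-million/300,000 split is from Anita's paper; every other number below (the flop/modest/hit breakdown and all dollar figures) is invented purely for illustration, since, as I said, I lack the real data. The structural point is that a small number of unforeseeable hits can pay for millions of small losses.

```python
# Toy portfolio model: a few hits pay for many flops.
# The 3.6M/300K split is from the article; the breakdown and all
# dollar figures are invented for illustration.
n_flops = 3_600_000        # tracks that sold fewer than 100 copies
n_modest = 299_000         # assumed: modest sellers among the other 300,000
n_hits = 1_000             # assumed: breakout hits, unidentifiable in advance

loss_per_flop = 5          # assumed dollars lost per flop
profit_per_modest = 500    # assumed
profit_per_hit = 100_000   # assumed

pool_profit = (n_hits * profit_per_hit
               + n_modest * profit_per_modest
               - n_flops * loss_per_flop)
print(f"Profit from betting on the whole pool: ${pool_profit:,}")
# → Profit from betting on the whole pool: $231,500,000
```

With these made-up numbers, the pool nets $231.5 million even though over 92% of the tracks lose money; segmenting out the "losers" in advance would also have thrown away the hits you couldn't identify.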