Three A’s of E-Book Pricing: Amazon, Apple, and Antitrust

A few months ago, I noted that Amazon and book publishers were tussling over the pricing of electronic books. Amazon had originally acquired e-books using a wholesale pricing model. It paid publishers a fixed price for each e-book it sold, and then decided what retail price to charge customers. Retailers usually sell products at a mark-up above the wholesale price–that’s how they cover their other costs and, if possible, make a profit. Amazon, however, often offered books at promotional prices below its costs. For example, it priced many new e-books at $9.99 even if it had to pay publishers $13.00 or more for them (often about half of the list price of a new hardback).

Several large publishers hated Amazon’s pricing strategy, fearing that it would ultimately reduce the perceived value of their product. They thus pressured Amazon to accept an agency pricing model for e-books. Under this approach, the publishers would retain ownership of the e-books and, most importantly, would set their retail prices. Amazon would then be compensated as an agent for providing the opportunity for the publishers to sell at retail, receiving 30% of each sale while the publishers kept the remaining 70%.

The strange thing about these negotiations is that their initial effect appears to be lower publisher profits. As I noted in my earlier post:

Under the original system, Amazon paid the publishers $13.00 for each e-book. Under the new system, publishers would receive 70% of the retail price of an e-book. To net $13.00 per book, the publishers would thus have to set a price of about $18.50 per e-book, well above the norm for electronic books. Indeed, so far above the norm that it generally doesn’t happen. … [In addition] publishers will sell fewer e-books because of the increase in retail prices. Through keen negotiating, the publishers have thus forced Amazon to (a) pay them less per book and (b) sell fewer of their books. Not something you see every day.

Publishers presumably believe that the longer-term benefits of this strategy will more than offset lost profits in the near-term. What they may not have counted on, however, is the attention they are now getting from state antitrust officials such as Connecticut Attorney General Richard Blumenthal. As reported by the Wall Street Journal this morning, Blumenthal worries that the agency pricing model (which is also used by Apple) is limiting competition and thus harming consumers. And the WSJ says he’s got some compelling evidence on his side:

The agency model has generally resulted in higher prices for e-books, with many new titles priced at $12.99 and $14.99. Further, because the publishers set their own prices, those prices are identical at all websites where the titles are sold. Although Amazon continues to sell many e-books at $9.99 or less, it has opposed the agency model because it argues that lower prices, as exemplified by its promotion of $9.99 best sellers, have been a key factor in the surging e-book market.

It’s also interesting to note that Random House decided to stick with the wholesale model, and many of its titles are priced at $9.99 at Amazon.

Of course, higher prices on select books are not enough to demonstrate an antitrust problem. Publishers will likely argue that there is nothing intrinsically anticompetitive about agency pricing, which is used in many other industries. Moreover, there is nothing to suggest that they are colluding on e-book pricing. Also, they may claim that their pricing strategy will allow more online retailers to enter the marketplace, thus providing more competition and more choice for consumers (albeit along non-price dimensions).

The End of Cap and Trade?

No, not for carbon. For sulfur dioxide.

As noted by Mark Peters at the Wall Street Journal:

The original U.S. cap-and-trade market, which succeeded in slashing the power-plant emissions that cause acid rain, is in disarray following the issuance of new federal pollution rules.

The collapse in the pioneering market where power producers trade permits that allow them to emit sulfur dioxide and other pollutants that cause acid rain comes as policy makers seek to establish a similar market to curb the emissions of carbon, a cause of climate change.

The SO2 market has been one of the great successes of economic engineering, using market forces to drive down the cost of cleaning the environment. After almost twenty years of trading, however, the market ran into what may be an insurmountable hurdle: increased regulatory concern about the location of SO2 emissions.

The SO2 marketplace is national in scope, which has been great for establishing liquid trading and allowing emitters to find the cheapest way of reducing emissions. But it also meant that some SO2 emissions would end up in particularly unwelcome spots, e.g., upwind of cities, states, or entire regions that are having trouble meeting air quality standards.

Over the past couple of years, court rulings and new regulatory efforts by the Environmental Protection Agency have increased the emphasis on the location of emissions. And that means that the national market may be coming to an end.

That’s certainly what it looks like in the allowance marketplace, where prices have fallen from more than $600 per ton in mid-2007 to $5 or less today.

The price decline has been particularly sharp because utilities had been polluting less than allowed in recent years. That allowed them to build up an inventory of allowances to use in the future. With prices so low today, however, utilities have essentially no incentive to avoid sulfur emissions and no incentive to hold allowance inventories. As Gabriel Nelson puts it over at the New York Times:

With SO2 allowances trading at about $5 per ton, and little prospect of carrying over the permits into the new program, utilities have little incentive to bank allowances or add emissions controls for the time being, traders say. Because those controls have upkeep costs beyond the original investment, some plants might even find it more cost-effective to use allowances than to turn on scrubbers that have already been installed, traders said.
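The traders’ logic here boils down to a simple cost comparison, which is easy to make concrete. A back-of-the-envelope sketch in Python (the $5 and $600 allowance prices come from the post; the scrubber operating cost is a hypothetical figure chosen purely for illustration):

```python
# Toy comparison of two SO2 compliance options for a power plant.
# Allowance prices ($600 in mid-2007, $5 today) are from the post;
# the scrubber's per-ton operating cost is a made-up illustration.

def cheaper_option(allowance_price, scrubber_cost_per_ton):
    """Return whichever compliance route costs less per ton of SO2."""
    if allowance_price < scrubber_cost_per_ton:
        return "buy allowances"
    return "run scrubber"

# Mid-2007: with allowances at $600/ton, running a scrubber that
# removes SO2 at, say, $200/ton is the obvious choice.
print(cheaper_option(600, 200))  # run scrubber

# Today: at $5/ton, even an already-installed scrubber with modest
# upkeep costs may sit idle while the plant burns through allowances.
print(cheaper_option(5, 200))    # buy allowances
```

The same comparison, run at the two price levels, is the whole story of why the market’s incentive to abate has evaporated.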

How to Defeat the Lionfish? Use Your Knife and Fork

As regular readers know, I am intrigued by animals in weird places (voles in the Rose Garden, grey whales in the Mediterranean) and quirky discussions of property rights (guacamole, overhead bins, snow shoveling, office lunches). So imagine my delight when I opened the Food section of the Washington Post to discover an issue that brings them together: the battle against the lionfish.

The beautiful, venomous lionfish is native to the Indo-Pacific, but back in 1985 it started showing up in a weird place: the east coast of the United States. Today you can find the spiny critters along the eastern seaboard, through the Caribbean, down to Belize, and over to the Azores. That’s bad news since the lionfish often crowds out (or consumes) native species.

So what to do? According to Wikipedia, some places have offered bounties for killing them (but, one hopes, not enough to induce breeding and importing) or have established kill-on-sight policies. Another strategy, as the WaPo reports, is to move them down a notch on the food chain:

Federal officials have joined with chefs, spear fishermen and seafood distributors to launch a bold campaign: Eat lionfish until it no longer exists outside its native habitat.

The genius of this approach is that it harnesses a classic economic pathology–the tragedy of the commons–to serve a greater good. No one has any property rights to these interlopers, so all we need to do is create enough market demand for their tasty flesh. Once that reaches critical mass, we can sit back and watch fishermen and -women overfish the Atlantic lionfish into oblivion. Or so goes the theory.

I wish them good luck–and look forward to seeing Atlantic lionfish on the menu soon. I am skeptical, however, that it will actually work. But if it does, maybe the seafood alliance could then turn their attention to the new Potomac snakeheads?

The Vuvuzela Externality

Thus far, the top three stories of the World Cup are (3) Germany looks strong, (2) the U.S. got lucky, and (1) the vuvuzela is remarkably annoying.

For those who haven’t tuned in yet, the vuvuzela is a meter-long plastic horn whose name translates roughly as “making a vuvu noise.” And make a noise it does. When thousands of fans start blowing, you’d think a swarm of bees was taking over the soccer stadium … and your living room. Highly annoying.

And that’s not all. According to Wikipedia, the vuvuzelas raise other concerns:

They have been associated with permanent noise-induced hearing loss, cited as a possible safety risk when spectators can’t hear evacuation announcements, and potentially spread colds and flu viruses on a greater scale than coughing or shouting.

In short, the vuvuzela creates a host of externalities. So it’s not surprising that FIFA is under growing pressure to ban them.

I’ve been unable to come up with a market-based approach for dealing with the vuvuzela — there won’t ever be a Pigou club to limit the vuvu noise — and I would personally benefit from such a ban. So I’m all for it.

It is worth pondering, however, whether there are less drastic actions that might address some of the vuvuzela nuisance. Here’s one idea: ESPN and ABC should figure out a way to cancel out most of the vuvuzela noise. I still want to hear the cheers of the crowd and the screams of players who pretend to be hurt, but those are on different frequencies than the dreaded vuvu noise.

I don’t know how technically challenging that would be, but the marketplace is already providing similar solutions for consumers. According to Pocket Lint, you can change the sound settings on your TV, purchase an anti-vuvuzela sound filter, or even build your own filter at home.
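For the curious, the filtering idea itself is textbook signal processing: a notch filter that zeroes out a narrow band around the drone while leaving everything else alone. Here is a rough sketch in pure Python using standard biquad notch coefficients; the 233 Hz center frequency (the B-flat fundamental commonly reported for the vuvuzela) and the Q value are my assumptions, not anything from the broadcasters:

```python
import math

def notch_coefficients(f0, fs, q):
    """Standard biquad notch: zero gain at f0, roughly unity elsewhere."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    # Normalize so the leading denominator coefficient is 1.
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad(samples, b, a):
    """Direct-form-I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

fs = 44100
b, a = notch_coefficients(233.0, fs, q=5.0)

t = [n / fs for n in range(fs)]  # one second of audio
vuvu  = [math.sin(2 * math.pi * 233.0 * ti) for ti in t]   # the drone
crowd = [math.sin(2 * math.pi * 1000.0 * ti) for ti in t]  # stand-in for cheering

print(rms(biquad(vuvu, b, a)))   # near zero: the drone is removed
print(rms(biquad(crowd, b, a)))  # ~0.707: the cheering passes through
```

A real fix would need notches at the overtones too, and thousands of slightly out-of-tune horns smear the drone across a wider band, which is presumably what the commercial filters handle.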

Or you can go really low tech and use your mute button.

Advice to Nasdaq and the NYSE: Cancel Only 90% of the “Erroneous” Trades

Nasdaq and the New York Stock Exchange have both announced that they will cancel many trades made during the temporary market meltdown between 2:40 and 3:00 last Thursday afternoon (see, for example, this story from Reuters). These “erroneous” trades include any that were executed at a price more than 60% away from their last trade as of 2:40.

The motivation for these cancellations is clear: a sudden absence of liquidity meant that many stocks (and exchange-traded funds) temporarily traded at anomalous prices that no rational investor would have accepted.

As several analysts have noted, however, canceling these trades creates perverse incentives. It rewards the careless and stupid, while penalizing the careful and smart. It protects market participants who naively expected that deep liquidity would always be there for them, while eliminating any benefits for the market participants who actually were willing to provide that liquidity in the midst of the turmoil.

Kid Dynamite has helpfully linked to several comments along these lines, as well as providing his own view:

Paul Kedrosky asks aloud: “why are we wiping out all the errant trades by runaway algorithms and market battle bots?”

David Merkel points out, emphasis mine: “NASDAQ should not have canceled the trades. It ruins the incentives of market actors during a panic. Set your programs so that they don’t do stupid things. Don’t give them the idea that if they do something really stupid, there will be a do-over.”

And the Law of Unintended Consequences rears its ugly head again.  Merkel’s point is simple and accurate:  if buyers who step in later see their trades canceled, it removes all incentive for them to step in – and then you don’t get the bounce back that we saw!  Think about how much havoc it causes a trader who astutely bought cheap stock, then sold it out at a profit.  He’s now short!  Or, he spent the entire day wondering if his order would be canceled, in a state of limbo.  What’s the alternative – that traders should just assume that the orders will get canceled, and NOT buy stock?  Guess what – if no one buys, the stock stays cheap! SOMEONE has to buy, and that someone shouldn’t be penalized in favor of remedying the ignorance of the seller who screwed up.

I see merit in both sides of this argument. My economist side thinks people should be responsible for their actions and bear the costs and benefits accordingly. But my, er, human side sees merit in protecting people from trades that seem obviously erroneous.

What’s needed is a compromise–one that maintains good incentives for stock buyers and sellers, but provides protection against truly perverse outcomes.

Happily, the world of insurance has already taught us how to design such compromises: what we need is coinsurance. People have to have some skin in the game; otherwise they become too cavalier about costs and risks. That’s why your health insurance has co-pays and coinsurance. Those payments undermine the risk reduction that insurance provides, but for a very good reason: 100% insurance would make medical care free, and people act really weird when things are free. Even a little skin in the game gets people to pay attention to what they are doing.

So here is my proposal: NYSE and Nasdaq should cancel only 90% of each erroneous trade. The other 10% should still stand.

If Jack the Algorithmic Trader sold 100,000 shares of Accenture for $1.00 last Thursday, he should be allowed to cancel 90,000 shares of that order. But the other 10,000 shares should stand–as a reminder to Jack (and his boss) of his error and as a reward to Jill the Better Algorithmic Trader who was willing to buy stocks in the midst of the confusion.
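The arithmetic of the proposal is trivial, but it’s worth making concrete. A minimal sketch (Jack, Jill, and the Accenture numbers are the hypothetical example above, not actual trades):

```python
def settle_erroneous_trade(shares, cancel_fraction=0.90):
    """Split an erroneous trade: most is canceled, but some skin stays in the game."""
    canceled = int(shares * cancel_fraction)
    standing = shares - canceled
    return canceled, standing

# Jack sold 100,000 shares of Accenture at $1.00 during the meltdown.
canceled, standing = settle_erroneous_trade(100_000)
print(canceled, standing)  # 90000 10000
```

The 90/10 split is the coinsurance parameter; the exchanges could obviously tune it, and the argument above only requires that it be strictly less than 100%.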

Financial Literacy and the Subprime Crisis

A new working paper from the Atlanta Fed identifies a key reason why some subprime mortgage borrowers have defaulted and some haven’t: differences in numerical ability (ht: Torsten S.).

In “Financial Literacy and Subprime Mortgage Delinquency,” Kristopher Gerardi, Lorenz Goette, and Stephan Meier examine how the financial literacy of individual subprime borrowers (as measured through a survey) relates to mortgage outcomes. They find a big effect:

Foreclosure starts are approximately two-thirds lower in the group with the highest measured level of numerical ability compared with the group with the lowest measured level. The result is robust to controlling for a broad set of sociodemographic variables and not driven by other aspects of cognitive ability or the characteristics of the mortgage contracts.

20 percent of the borrowers in the bottom quartile of our financial literacy index have experienced foreclosure, compared to only 5 percent of those in the top quartile. Furthermore, borrowers in the bottom quartile of the index are behind on their mortgage payments 25 percent of the time, while those in the top quartile are behind approximately 10 percent of the time.

Interestingly, this effect is not due to differences in the mortgages that borrowers selected (e.g., it’s not that the less-numerically-able chose systematically bad mortgages*) or obvious socioeconomic factors (e.g., it’s not that the less-numerically-able had lower incomes).

Instead it appears that the less-numerically-able are more likely to make financial mistakes once they have their mortgages. As the authors note, this conclusion is consistent with other studies that examine how financial literacy relates to saving and spending choices over time. All of which is further evidence of the potential benefits of better financial (and numerical) education.

* The authors note one caveat on the conclusion about mortgage terms: The survey covered “individuals between 1 and 2 years after their mortgage had been originated,” but many subprime defaults happened more quickly than that. As a result, their results don’t address whether financial literacy played a role in determining which borrowers ended up in mortgages that blew up very rapidly.

An Unusual Battle Between Amazon and Publishers

Over at the New Yorker, Ken Auletta has a fascinating piece about the future of publishing as the book world goes digital. Highly recommended if you are a Kindle lover, an iPad enthusiast, or a Google watcher (or, like me, all three).

The article also describes an unusual battle between book publishers and Amazon about the pricing of electronic books:

Amazon had been buying many e-books from publishers for about thirteen dollars and selling them for $9.99, taking a loss on each book in order to gain market share and encourage sales of its electronic reading device, the Kindle. By the end of last year, Amazon accounted for an estimated eighty per cent of all electronic-book sales, and $9.99 seemed to be established as the price of an e-book. Publishers were panicked. David Young, the chairman and C.E.O. of Hachette Book Group USA, said, “The big concern—and it’s a massive concern—is the $9.99 pricing point. If it’s allowed to take hold in the consumer’s mind that a book is worth ten bucks, to my mind it’s game over for this business.”

As an alternative, several publishers decided to push for

an “agency model” for e-books. Under such a model, the publisher would be considered the seller, and an online vender like Amazon would act as an “agent,” in exchange for a thirty-per-cent fee.

That way, the publishers would be able to set the retail price themselves, presumably at a higher level than the $9.99 favored by Amazon.

Ponder that for a moment. Under the original system, Amazon paid the publishers $13.00 for each e-book. Under the new system, publishers would receive 70% of the retail price of an e-book. To net $13.00 per book, the publishers would thus have to set a price of about $18.50 per e-book, well above the norm for electronic books. Indeed, so far above the norm that it generally doesn’t happen:

“I’m not sure the ‘agency model’ is best,” the head of one major publishing house told me. Publishers would collect less money this way, about nine dollars a book, rather than thirteen; the unattractive tradeoff was to cede some profit in order to set a minimum price.

The publisher could also have noted a second problem with this strategy: publishers will sell fewer e-books because of the increase in retail prices.

Through keen negotiating, the publishers have thus forced Amazon to (a) pay them less per book and (b) sell fewer of their books. Not something you see every day.
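The arithmetic behind that claim is simple enough to check yourself. A quick sketch using the numbers from the post:

```python
# Per-book publisher receipts under the two models, using the post's
# numbers: $13.00 wholesale vs. 70% of the retail price under agency.

def agency_receipt(retail_price, publisher_share=0.70):
    """What the publisher nets per e-book under the agency model."""
    return retail_price * publisher_share

wholesale_receipt = 13.00

# At typical agency prices, publishers net less per book than $13.00:
print(agency_receipt(12.99))  # about $9.09
print(agency_receipt(14.99))  # about $10.49

# The retail price needed to match the old wholesale payment:
breakeven = wholesale_receipt / 0.70
print(round(breakeven, 2))    # 18.57, well above the e-book norm
```

Since $18.57 is far above what the e-book market will bear, the publishers end up at roughly $9 a book, which is exactly the tradeoff the quoted publishing executive describes.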

All of which yields a great topic for a microeconomics or business strategy class: Can the long-term benefit (to publishers) of higher minimum prices justify the near-term costs of lower sales and lower margins?

Web Coupons, Privacy, and Price Discrimination

Suppose you’ve got a successful business, selling your product to a diverse set of customers. Life is good. But you’d like to increase profits even more. What should you do?

One option from the MBA playbook (among many) is to think creatively about your pricing. Maybe there’s a way to distinguish your customers from each other and charge them different prices. Perhaps you can charge higher prices to some of your existing customers without driving them away or charge lower prices to folks who aren’t yet buying from you, or a combination of the two.

Businesses have myriad ways of doing this but, not surprisingly, the web has opened up new vistas. Saturday’s New York Times has an interesting article about the extent to which web coupons can be used to distinguish customers, track their behavior, and optimize marketing and pricing strategies (ht Diana):

The coupon efforts are nascent, but coupon companies say that when they get more data about how people are responding, they can make different offers to different consumers.

“Over time,” Mr. Treiber said, “we’ll be able to do much better profiling around certain I.P. addresses, to say, hey, this I.P. address is showing a proclivity for printing clothing apparel coupons and is really only responding to coupons greater than 20 percent off.”

That alarms some privacy advocates.

Companies can “offer you, perhaps, less desirable products than they offer me, or offer you the same product as they offer me but at a higher price,” said Ed Mierzwinski, consumer program director for the United States Public Interest Research Group, which has asked the Federal Trade Commission for tighter rules on online advertising. “There really have been no rules set up for this ecosystem.”

The web thus offers new ways for companies to pursue the holy grail (from their point of view) of pricing: the ability to personalize prices for each potential customer.

Needless to say, this is sometimes bad news for consumers. After all, increased information can allow firms to jack up prices to consumers that the firms believe are unlikely to stop buying.

Less appreciated, however, is the fact that this can benefit consumers as well. For example, increased information can sometimes help firms offer lower prices to select customers who wouldn’t otherwise choose to purchase.

Without further information, it’s hard to know how such creative pricing will affect consumers in the aggregate. One thing seems certain, though: the variety of prices will increase, making more of the marketplace look like the airline industry, in which it sometimes seems as though every seat is sold for a different price.
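A stylized numeric example makes the two-sided effect concrete (the segment sizes, willingness-to-pay figures, and cost are invented purely for illustration):

```python
# Two customer segments: loyal buyers who will pay up to $10, and
# fence-sitters who will only buy at $7 or less. Marginal cost is $5.
# All numbers are hypothetical, chosen only to illustrate the point.

COST = 5.0
segments = {
    "loyal":         {"size": 100, "max_price": 10.0},
    "fence_sitters": {"size": 100, "max_price": 7.0},
}

def profit_uniform(price):
    """One posted price: only segments willing to pay it actually buy."""
    buyers = sum(s["size"] for s in segments.values() if s["max_price"] >= price)
    return buyers * (price - COST)

def profit_personalized():
    """Coupon targeting: charge each segment its maximum willingness to pay."""
    return sum(s["size"] * (s["max_price"] - COST) for s in segments.values())

print(profit_uniform(10.0))   # 500: only loyal buyers purchase
print(profit_uniform(7.0))    # 400: everyone buys, but at the low price
print(profit_personalized())  # 700: loyal pay full price, fence-sitters get the coupon
```

In this toy example, targeted coupons beat either uniform price, and notice that the fence-sitters are better off too: they buy at $7 instead of being priced out. The loyal buyers, of course, are the ones footing the bill.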

Ultra Trouble for the Ultra Low Cost Airline?

Last week Spirit Airlines announced that it would start charging fees for carry-on bags this summer. Spirit described the benefits of this move as follows:

“In addition to lowering fares even further, this will reduce the number of carry-on bags, which will improve inflight safety and efficiency by speeding up the boarding and deplaning process, all of which ultimately improve the overall customer experience,” says Spirit’s Chief Operating Officer Ken McKenzie.  “Bring less; pay less.  It’s simple.”

As I’ve noted in previous posts, carry-on bags have become a problem on many flights. With advances in roll-aboard technology and in the face of new fees for checked luggage, more passengers are bringing baggage on board, sometimes overwhelming the capacity of the overheads. Airlines need to find a solution to that problem. Spirit’s fees are one possible answer.

I’m sure Spirit expected that some passengers and passenger advocates would object to these fees. I wonder, however, whether the airline ever suspected that it would incur the wrath of Washington?

Over the weekend, New York Senator Chuck Schumer denounced the proposed fees and sent a letter to Treasury Secretary Tim Geithner asking that he stop them. He’s also threatening legislation to prohibit them.

If you are like me, your first reaction should be to wonder why the Treasury Secretary–rather than, say, the Transportation Secretary–is the lucky recipient of Schumer’s letter. This being the middle of April, however, the answer shouldn’t surprise you: taxes, specifically the taxes that are levied on airline tickets (but not on some other fees associated with flying). The narrow issue is whether the carry-on fees should be subject to the tax. The broader issue is whether carry-on fees should be allowed separate from the ticket price.

Meanwhile, the Transportation Secretary, Ray LaHood, wasted no time in denouncing the proposed fees as well, saying:

I think it’s a bit outrageous that an airline is going to charge someone to carry on a bag and put it in the overhead. And I’ve told our people to try and figure out a way to mitigate that. I think it’s ridiculous.

So watch out Spirit Airlines; your experiment in pricing scarce overhead capacity may not be welcome in Washington, even if it does lead to lower fares and faster boarding.

P.S. The tempest over the baggage fees is temporarily overshadowing a much more interesting and important issue: the transparency and intelligibility of airline fees. Secretary LaHood touches on this in the interview linked to above, as does this article over at Philly.com’s Philadelphia Business Today. Given the panoply of fees and taxes on air travel–thanks both to the government and to the airlines–there’s a real question about whether consumers understand the full costs of flying when they make their purchasing decisions. And some airlines–most notably Spirit with its “penny” and “$9” fares–seem to be playing on that.

Spirit Airlines Combats the Tragedy of the Overhead Bin

As any frequent flyer knows, the competition for overhead space is tight. As I noted a few months ago (“The Warped Economics of Carry-On Luggage”), the situation has only become worse since airlines started charging fees for checked luggage. Budget-conscious travelers caught on quickly and started carrying on more of their luggage.

In economic terms, the basic problem is a lack of property rights to overhead space. Without those rights, there is a tragedy of the commons as travelers try to grab space before their fellow travelers (just as some guacamole eaters compete for appetizers). Particularly egregious? The passenger in row 35 who brings on two over-sized roller bags and stows them in the overheads around row 15. No, I’m not bitter.

One solution to this problem would be to create property rights to overhead space. But that would be hard to operationalize.

Another possibility–which Spirit Airlines announced today–would be to charge for carry-ons. Spirit announced:

In order to continue reducing fares even further and offering customers the option of paying only for the services they want and use rather than subsidizing the choices of others, the low fare industry innovator is also progressing to the next phase of unbundling with the introduction of a charge to carry on a bag and be boarded first onto the airplane.

The carry-on fee ranges from $20 to $45, the same as or more than the fee for a single checked bag (fees for multiple bags may be higher). Personal items (i.e., the things you put under your seat) remain free.

Note how Spirit frames this as helping the airline reduce fares. In the future, I hope some enterprising economist studies the different bag pricing approaches that the airlines use to see to what extent higher bag fees–checked or carry-on–translate into lower fares and either more or less crowded overhead compartments.