On Orbitz, Windows Users Steered to Cheaper Hotels

The fun economics story of the day is that Orbitz sometimes looks at your computer’s operating system to decide what hotel options to show you. Dana Mattioli breaks the story over at the Wall Street Journal:

Orbitz Worldwide Inc. has found that people who use Apple Inc.’s Mac computers spend as much as 30% more a night on hotels, so the online travel agency is starting to show them different, and sometimes costlier, travel options than Windows visitors see.

The Orbitz effort, which is in its early stages, demonstrates how tracking people’s online activities can use even seemingly innocuous information—in this case, the fact that customers are visiting Orbitz.com from a Mac—to start predicting their tastes and spending habits.

Orbitz executives confirmed that the company is experimenting with showing different hotel offers to Mac and PC visitors, but said the company isn’t showing the same room to different users at different prices. They also pointed out that users can opt to rank results by price.

Here are examples from the WSJ’s experiments:

The WSJ emphasizes that Mac users see higher-priced hotels. For example, Mattioli’s article is headlined: “On Orbitz, Mac Users Steered to Pricier Hotels.”

My question: Would you feel any different if, instead, the WSJ emphasized that Windows users are directed to lower-priced hotels? For example, Windows users are pointed to affordable lodgings like the Travelodge in El Paso, Texas. (Full disclosure: I think I once stayed there.)

As Mattioli notes, it’s important to keep in mind that Orbitz isn’t offering different prices; it’s just deciding which hotels to list prominently. And your operating system is just one of many factors that go into this calculation. Others include deals (hotels offering deals move up the rankings), referring site (which can reveal a lot about your preferences), return visits (Orbitz learns your tastes), and location (folks from Greenwich, CT, probably see more expensive hotels than those from El Paso).
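
Orbitz hasn’t published its formula, of course, but the basic mechanics are easy to sketch. Here is a toy illustration in Python; every factor name and weight below is invented rather than drawn from Orbitz, and the point is simply that the same inventory can be ordered differently for different visitors without anyone being quoted a different price.

```python
# Toy sketch of a hotel-ranking score (invented factors and weights, not Orbitz's model).
def rank_score(hotel, visitor):
    score = 0.0
    if hotel["has_deal"]:
        score += 1.0                                    # hotels offering deals move up the rankings
    score += visitor["referrer_affinity"].get(hotel["segment"], 0.0)  # referring site hints at tastes
    score += visitor["past_bookings"].get(hotel["segment"], 0.0)      # return visits teach preferences
    # Favor hotels priced near what this visitor is predicted to spend; the visitor's
    # operating system would be just one of many signals feeding that prediction.
    score -= abs(hotel["price"] - visitor["predicted_nightly_spend"]) / 100.0
    return score

hotels = [
    {"name": "Travelodge El Paso", "segment": "budget",  "price": 55,  "has_deal": True},
    {"name": "Boutique Downtown",  "segment": "upscale", "price": 210, "has_deal": False},
]
mac_visitor = {"predicted_nightly_spend": 200, "referrer_affinity": {}, "past_bookings": {"upscale": 1}}
pc_visitor  = {"predicted_nightly_spend": 90,  "referrer_affinity": {}, "past_bookings": {}}

for label, visitor in [("Mac visitor", mac_visitor), ("PC visitor", pc_visitor)]:
    ordered = sorted(hotels, key=lambda h: rank_score(h, visitor), reverse=True)
    print(label, "sees:", [h["name"] for h in ordered])
```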

Online Education and Self-Driving Cars

Last week, I noted that former Stanford professor Sebastian Thrun enrolled 160,000 students in an online computer science class. That inspired him to set up a new company, Udacity, to pursue online education. A new article in Bloomberg BusinessWeek adds some additional color to the story.

Barrett Sheridan and Brendan Greeley answer a question many folks asked about the students: how many actually finished? Answer: 23,000 finished all the assignments.

They also note that Thrun is at the forefront of another potentially transformative technology: self-driving cars:

Last fall, Stanford took the idea further and conducted two CS courses entirely online. These included not just instructional videos but also opportunities to ask questions of the professors, get homework graded, and take midterms—all for free and available to the public.

Sebastian Thrun, a computer science professor and a Google fellow overseeing the search company’s project to build driverless cars, co-taught one of the courses, on artificial intelligence. It wasn’t meant for everyone; students were expected to get up to speed with topics like probability theory and linear algebra. Thrun’s co-teacher, Peter Norvig, estimated that 1,000 people would sign up. “I’m known as a crazy optimist, so I said 10,000 students,” says Thrun. “We had 160,000 sign up, and then we got frightened and closed enrollment. It would have been 250,000 if we had kept it open.” Many dropped out, but 23,000 students finished all 11 weeks’ worth of assignments. Stanford is continuing the project with an expanded list of classes this year. Thrun, however, has given up his tenured position to focus on his work at Google and to build Udacity, a startup that, like Codecademy, will offer free computer science courses on the Web.

I wish Thrun success in both endeavors. Perhaps one day soon, commuters will settle in for an hour of online learning while their car drives them to work.

P.S. In case you missed it, Tom Vanderbilt has a fun article on self-driving cars in the latest Wired.

Zanran: Google for Data?

Zanran is a new search engine, now in beta testing, that focuses on charts and tables. As its website says:

Zanran helps you to find ‘semi-structured’ data on the web. This is the numerical data that people have presented as graphs and tables and charts. For example, the data could be a graph in a PDF report, or a table in an Excel spreadsheet, or a barchart shown as an image in an HTML page. This huge amount of information can be difficult to find using conventional search engines, which are focused primarily on finding text rather than graphs, tables and bar charts.

Put more simply: Zanran is Google for data.

This is a stellar idea. The web holds phenomenal amounts of data that are hard to find because they are buried inside documents. And Zanran offers a fast way to find and scan through documents that may have relevant material. Particularly helpful is the ability to hover your cursor over each result to see the chart Zanran thinks you are interested in before you click through to the document.

Zanran is clearly in beta, however, and has some major challenges ahead. Perhaps most important are determining which results should rank high and identifying recent data. If you type “united states GDP” into Zanran, for example, the top results are rather idiosyncratic and there’s nothing on the first few pages that directs you to the latest data from the Bureau of Economic Analysis. Google, in contrast, has the BEA as its third result. And its first result is a graphical display of GDP data via Google’s Public Data project. Too bad, though, it goes up only to 2009. For some reason, both Google and Zanran think the CIA is the best place to get U.S. GDP data. It is a good source for international comparisons, but it falls out of date.

Here’s wishing Zanran good luck in strengthening its search results as it competes with Google, Wolfram Alpha, and others in data search.

The Attention Deficit Society

One highlight of the Milken Global Conference was an excellent panel discussion of how new communication technologies are changing the way that people think and interact.

Moderated by the amusing Dennis Kneale of Fox Business, the panelists were:

Nicholas Carr, Author, “The Shallows: What the Internet Is Doing to Our Brains”

Cathy Davidson, Ruth F. DeVarney Professor of English and John Hope Franklin Humanities Institute Professor of Interdisciplinary Studies, Duke University

Clifford Nass, Thomas M. Storke Professor, Stanford University

Sherry Turkle, Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, MIT

I am proud to say that I sat through the entire panel without checking my iPhone or iPad. But it was a struggle.

The full panel is a bit more than an hour. If you are short on time, you may still enjoy the first few minutes of movie clips illustrating some perils of modern technology. Here’s the link again.

Why Free Is a Bad Price

Marco Arment is the brains behind one of my favorite apps. Instapaper allows you to store articles off the Web for later reading; very useful, for example, when I am surfing and come across an article I want to share with my students or use in a future blog post. And the editor of Instapaper periodically shares excellent reads that I might otherwise miss.

Instapaper is currently available for both the iPhone and the iPad for $4.99. As Marco discusses in his blog, however, the iPhone version has sometimes been available for free (but with ads).

Based on his pricing experiments, Marco has decided that free is a bad model. In part that’s because ads provide weak revenues, and it’s expensive to support two versions of the app. In part it’s because the free app cannibalizes sales from the paid version.

But that’s not all. Another problem is that the free version attracts “undesirable customers”:

Instapaper Free always had worse reviews in iTunes than the paid app. Part of this is that the paid app was better, of course, but a lot of the Free reviews were completely unreasonable.

Only people who buy the paid app — and therefore have no problem paying $5 for an app — can post reviews for it. That filters out a lot of the sorts of customers who will leave unreasonable, incomprehensible, or inflammatory reviews. (It also filters out many people likely to need a lot of support.)

I don’t need every customer. I’m primarily in the business of selling a product for money. How much effort do I really want to devote to satisfying people who are unable or extremely unlikely to pay for anything?

Free is a risky price because it allows people to get something without really thinking about whether they want it. That’s why health insurers insist you pay at least $5 to see your doc or get a prescription.  And it’s why DC’s nickel bag tax has been so effective in cutting use of plastic bags.

Kudos to Marco for sharing his results and calling on others to run similar experiments. But I won’t be one of them. Free continues to be the right price here in the blogosphere.

Yahoo’s Self-Inflicted Winner’s Curse

Over at Managerial Econ, Luke Froeb highlights a nice example of the winner’s curse. Like Google, Yahoo uses automated auctions to sell ads. One wrinkle is that some advertisers prefer to pay for impressions, some prefer to pay for clicks, and some prefer to pay only for resulting sales. Yahoo thus needs some mechanism to put these different payment approaches on a comparable footing:

To choose the highest-valued bidder, Yahoo develops predictors of how many clicks and sales result from each impression. For example, if one click occurs for every ten impressions, an advertiser would have to bid more than 10 times as high for a click as for an impression in order to win the auction.

Yahoo was very proud of its predictors, but was puzzled that they systematically over-predicted the actual number of clicks or sales after the auctions closed.

This is the winner’s curse in action. As auction guru (and Yahoo VP) Preston McAfee explains in the paper Luke cites:

In a standard auction context, the winner’s curse states that the bidder who over-estimates the value of an item is more likely to win the bidding, and thus that the winner will typically be a bidder who over-estimated the value of the item, even if every bidder estimates in an unbiased fashion. The winner’s curse arises because the auction selects in a biased manner, favoring high estimates. In the advertising setting, however, it is not the bidders who are over-estimating the value. Instead, the auction will tend to favor the bidder whose click probability is overestimated, even if the click probability was estimated in an unbiased fashion.
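
To see that selection effect in action, here is a small Monte Carlo sketch of my own (illustrative only, not Yahoo’s actual system). Every bidder’s click probability is estimated without bias, per-click bids are converted to per-impression values just as in the example above, and yet the winner’s click probability comes out over-estimated on average.

```python
# Monte Carlo sketch of the selection effect (my illustration, not Yahoo's mechanism).
# Each per-click bidder is scored as bid * p_hat, an estimate of value per impression.
# The click-probability estimates p_hat are unbiased, yet the winner's estimate
# systematically exceeds the winner's true click probability.
import random

random.seed(0)
NUM_AUCTIONS = 100_000
NUM_BIDDERS = 5
BID_PER_CLICK = 1.0                      # identical bids, so only the estimates differ

total_gap = 0.0                          # sum of (estimated - true) click prob. for winners
for _ in range(NUM_AUCTIONS):
    bidders = []
    for _ in range(NUM_BIDDERS):
        p_true = random.uniform(0.05, 0.15)              # true click probability
        p_hat = random.gauss(p_true, 0.03)               # unbiased but noisy estimate
        score = BID_PER_CLICK * p_hat                    # estimated value per impression
        bidders.append((score, p_hat, p_true))
    _, p_hat_win, p_true_win = max(bidders)              # auction favors the highest estimate
    total_gap += p_hat_win - p_true_win

print(f"Winner's estimated click probability exceeds the truth by "
      f"{total_gap / NUM_AUCTIONS:.3f} on average")
```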

McAfee then goes on to explain how Yahoo overcame this self-inflicted winner’s curse, and other strategies to improve auction performance.

Curation versus Search

I love Twitter (you can find me at @dmarron). Indeed, I spend much more time perusing my Twitter feed than I do on Facebook. But it’s not because I care about Kanye West’s latest weirdness (I followed him for about eight hours) or what Katy Perry had for lunch. No, the reason I love Twitter is that I can follow people who curate the web for me. News organizations, journalists, fellow bloggers, and others provide an endless stream of links to interesting stories, facts, and research. For me, Twitter is a modern day clipping service that I can customize to my idiosyncratic tastes.

Several of my Facebook friends are also remarkable curators, as are many of the blogs that I follow (e.g., Marginal Revolution and Infectious Greed, to name just two).  So curation turns out to be perhaps the most important service I consume on the web. In the wilderness of information, skilled guides are essential.

Of course, I also use Google dozens of times each day. Curation is great, but sometimes what you need is a good search engine. But as Paul Kedrosky over at Infectious Greed notes, search sometimes doesn’t work. That’s one reason that Paul sees curation gaining on search, at least for now:

Instead, the re-rise of curation is partly about crowd curation — not one person, but lots of people, whether consciously (lists, etc.) or unconsciously (tweets, etc) — and partly about hand curation (JetSetter, etc.). We are going to increasingly see nichey services that sell curation as a primary feature, with the primary advantage of being mostly unsullied by content farms, SEO spam, and nonsensical Q&A sites intended to create low-rent versions of Borges’ Library of Babylon. The result will be a subset of curated sites that will re-seed a new generation of algorithmic search sites, and the cycle will continue, over and over.

Google More Popular Than Wikipedia … in 1900

Google unveiled a new toy yesterday. The Books Ngram Viewer lets users see how often words and phrases were used in books from 1500 to 2008. Other bloggers have already run some fun economics comparisons. Barry Ritholtz, for example, has done inflation vs. deflation, Main Street vs. Wall Street, and Gold vs. Oil.

In the humorous glitch department, I tried out the names of two Internet services I use every day, Google and Wikipedia. For some reason, the Ngram viewer defaults to the time period 1800 to 2000 (rather than 2008), and this was the chart I got (click to see a larger version):

It’s amazing to see references to Wikipedia as far back as the 1820s. Impressive foresight. Google overtook Wikipedia in the late 1800s and, with the exception of a brief period in the 1970s, has led ever since.

No, the Web Isn’t Dead (Yet)

Wired’s cover story this month, “The Web is Dead,” features the following chart showing the share of internet traffic devoted to different uses:

Over the past few years, peer-to-peer services and video have gobbled up an increasing share of traffic, while the “traditional” web — you know, surfing from site to site, reading your favorite blog about economics, finance, and life, etc. — has been declining.

Chris Anderson cites this as evidence of the pending death of the web. To which there is only one thing to say: wait a minute buster. Just because the web’s share of total bits and bytes is falling doesn’t mean it’s dying. Maybe it’s just that the other services are growing more rapidly.
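
A quick calculation with invented numbers shows how a shrinking share and a growing web can coexist:

```python
# Invented numbers: the web's slice of traffic can shrink while the web itself grows,
# as long as video and peer-to-peer traffic grow even faster.
web_then, other_then = 50, 50        # hypothetical petabytes in an earlier year
web_now, other_now = 100, 400        # web doubled; everything else grew eightfold

print(f"Web share of traffic: {web_then / (web_then + other_then):.0%} "
      f"-> {web_now / (web_now + other_now):.0%}")          # 50% -> 20%
print(f"Web traffic itself:   {web_then} -> {web_now} (still growing)")
```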

One of the benefits of being off the grid for a week-plus is that other commentators have already had the same thought and have tracked down the relevant data. Kudos to Rob Beschizza at BoingBoing for charting the data in absolute terms. Rather than dying, the web is still growing like fresh bacteria in a petri dish:

Three A’s of E-Book Pricing: Amazon, Apple, and Antitrust

A few months ago, I noted that Amazon and book publishers were tussling over the pricing of electronic books. Amazon had originally acquired e-books using a wholesale pricing model. It paid publishers a fixed price for each e-book it sold, and then decided what retail price to charge customers. Retailers usually sell products at a mark-up above the wholesale price–that’s how they cover their other costs and, if possible, make a profit. Amazon, however, often offered books at promotional prices below its costs. For example, it priced many new e-books at $9.99 even if it had to pay publishers $13.00 or more for them (often about half of the list price of a new hardback).

Several large publishers hated Amazon’s pricing strategy, fearing that it would ultimately reduce the perceived value of their product. They thus pressured Amazon to accept an agency pricing model for e-books. Under this approach, the publishers would retain ownership of the e-books and, most importantly, would set their retail prices. Amazon would then be compensated as an agent for giving publishers the opportunity to sell at retail: it would receive 30% of each sale, and publishers would keep the remaining 70%.

The strange thing about these negotiations is that their initial effect appears to be lower publisher profits. As I noted in my earlier post:

Under the original system, Amazon paid the publishers $13.00 for each e-book. Under the new system, publishers would receive 70% of the retail price of an e-book. To net $13.00 per book, the publishers would thus have to set a price of about $18.50 per e-book, well above the norm for electronic books. Indeed, so far above the norm that it generally doesn’t happen. … [In addition]  publishers will sell fewer e-books because of the increase in retail prices. Through keen negotiating, the publishers have thus forced Amazon to (a) pay them less per book and (b) sell fewer of their books. Not something you see everyday.
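
Spelling out the arithmetic in that quote, along with what publishers net at the new-title prices the WSJ reports below:

```python
# The arithmetic from the quote above, plus what publishers net at typical agency prices.
wholesale_payment = 13.00            # what Amazon paid per e-book under the wholesale model
agency_share = 0.70                  # publishers' cut of the retail price under agency pricing

breakeven_price = wholesale_payment / agency_share
print(f"Agency price needed to net ${wholesale_payment:.2f}: ${breakeven_price:.2f}")  # ~$18.57

for retail_price in (12.99, 14.99):  # common agency prices for new titles
    print(f"At ${retail_price:.2f}, the publisher nets ${agency_share * retail_price:.2f} per copy")
```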

Publishers presumably believe that the longer-term benefits of this strategy will more than offset lost profits in the near-term. What they may not have counted on, however, is the attention they are now getting from state antitrust officials such as Connecticut Attorney General Richard Blumenthal. As reported by the Wall Street Journal this morning, Blumenthal worries that the agency pricing model (which is also used by Apple) is limiting competition and thus harming consumers. And the WSJ says he’s got some compelling evidence on his side:

The agency model has generally resulted in higher prices for e-books, with many new titles priced at $12.99 and $14.99. Further, because the publishers set their own prices, those prices are identical at all websites where the titles are sold. Although Amazon continues to sell many e-books at $9.99 or less, it has opposed the agency model because it argues that lower prices, as exemplified by its promotion of $9.99 best sellers, has been a key factor in the surging e-book market.

It’s also interesting to note that Random House decided to stick with the wholesale model, and many of its titles are priced at $9.99 at Amazon.

Of course, higher prices on select books are not enough to demonstrate an antitrust problem. Publishers will likely argue that there is nothing intrinsically anticompetitive about agency pricing, which is used in many other industries. Moreover, there is nothing to suggest that they are colluding on e-book pricing. Also, they may claim that their pricing strategy will allow more online retailers to enter the marketplace, thus providing more competition and more choice for consumers (albeit along non-price dimensions).