The fun economics story of the day is that Orbitz sometimes looks at your computer’s operating system to decide what hotel options to show you. Dana Mattioli breaks the story over at the Wall Street Journal:
Orbitz Worldwide Inc. has found that people who use Apple Inc.’s Mac computers spend as much as 30% more a night on hotels, so the online travel agency is starting to show them different, and sometimes costlier, travel options than Windows visitors see.
The Orbitz effort, which is in its early stages, demonstrates how tracking people’s online activities can use even seemingly innocuous information—in this case, the fact that customers are visiting Orbitz.com from a Mac—to start predicting their tastes and spending habits.
Orbitz executives confirmed that the company is experimenting with showing different hotel offers to Mac and PC visitors, but said the company isn’t showing the same room to different users at different prices. They also pointed out that users can opt to rank results by price.
Here are examples from the WSJ’s experiments:
The WSJ emphasizes that Mac users see higher-priced hotels. For example, Mattioli’s article is headlined: “On Orbitz, Mac Users Steered to Pricier Hotels.”
My question: Would you feel any different if, instead, the WSJ emphasized that Windows users are directed to lower-priced hotels? For example, Windows users are prompted about the affordable lodgings at the Travelodge in El Paso, Texas. (Full disclosure: I think I once stayed there.)
As Mattioli notes, it’s important to keep in mind that Orbitz isn’t offering different prices, it’s just deciding which hotels to list prominently. And your operating system is just one of many factors that go into this calculation. Others include deals (hotels offering deals move up the rankings), referring site (which can reveal a lot about your preferences), return visits (Orbitz learns your tastes), and location (folks from Greenwich, CT probably see more expensive hotels than those from El Paso).
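The article doesn't disclose how Orbitz combines these signals, but the list of factors can be sketched as a toy ranking score. Everything below — the field names, the weights, and the scoring formula — is a hypothetical illustration of factor-based ranking, not Orbitz's actual algorithm.

```python
# Hypothetical illustration of factor-based result ranking.
# Field names and weights are invented for this sketch.

def rank_hotels(hotels, user):
    """Sort hotels by a toy relevance score built from signals like those
    the article mentions: deals, referring site, and past spending."""
    def score(hotel):
        s = 0.0
        if hotel["has_deal"]:
            s += 2.0  # hotels offering deals move up the rankings
        if user.get("referrer") == hotel.get("target_segment"):
            s += 1.0  # the referring site hints at the user's preferences
        # users with a history of higher spending see pricier tiers promoted
        s += user.get("past_avg_spend", 0) * hotel["price_tier"] * 0.01
        return s
    return sorted(hotels, key=score, reverse=True)

hotels = [
    {"name": "Budget Inn", "has_deal": False, "price_tier": 1,
     "target_segment": "coupon-site"},
    {"name": "Luxury Resort", "has_deal": False, "price_tier": 3,
     "target_segment": "travel-mag"},
]
user = {"referrer": "travel-mag", "past_avg_spend": 100}
ranked = rank_hotels(hotels, user)
```

Note that, as in the article's description, nothing here changes a hotel's price; the score only decides what gets listed prominently.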
Zanran is a new search engine, now in beta testing, that focuses on charts and tables. As its website says:
Zanran helps you to find ‘semi-structured’ data on the web. This is the numerical data that people have presented as graphs and tables and charts. For example, the data could be a graph in a PDF report, or a table in an Excel spreadsheet, or a barchart shown as an image in an HTML page. This huge amount of information can be difficult to find using conventional search engines, which are focused primarily on finding text rather than graphs, tables and bar charts.
Put more simply: Zanran is Google for data.
This is a stellar idea. The web holds phenomenal amounts of data that are hard to find because they are buried inside documents. And Zanran offers a fast way to find and scan through documents that may have relevant material. Particularly helpful is the ability to hover your cursor over each document to see the chart Zanran thinks you are interested in before you click through to the document.
Zanran is clearly in beta, however, and has some major challenges ahead. Perhaps most important are determining which results should rank high and identifying recent data. If you type “united states GDP” into Zanran, for example, the top results are rather idiosyncratic and there’s nothing on the first few pages that directs you to the latest data from the Bureau of Economic Analysis. Google, in contrast, has the BEA as its third result. And its first result is a graphical display of GDP data via Google’s Public Data project. Too bad, though, it goes up only to 2009. For some reason, both Google and Zanran think the CIA is the best place to get U.S. GDP data. It is a good source for international comparisons, but it falls out of date.
Here’s wishing Zanran good luck in strengthening its search results as it competes with Google, Wolfram Alpha, and others in data search.
I love Twitter (you can find me at @dmarron). Indeed, I spend much more time perusing my Twitter feed than I do on Facebook. But it’s not because I care about Kanye West’s latest weirdness (I followed him for about eight hours) or what Katy Perry had for lunch. No, the reason I love Twitter is that I can follow people who curate the web for me. News organizations, journalists, fellow bloggers, and others provide an endless stream of links to interesting stories, facts, and research. For me, Twitter is a modern day clipping service that I can customize to my idiosyncratic tastes.
Several of my Facebook friends are also remarkable curators, as are many of the blogs that I follow (e.g., Marginal Revolution and Infectious Greed, to name just two). So curation turns out to be perhaps the most important service I consume on the web. In the wilderness of information, skilled guides are essential.
Of course, I also use Google dozens of times each day. Curation is great, but sometimes what you need is a good search engine. But as Paul Kedrosky over at Infectious Greed notes, search sometimes doesn’t work. That’s one reason that Paul sees curation gaining on search, at least for now:
Instead, the re-rise of curation is partly about crowd curation — not one people, but lots of people, whether consciously (lists, etc.) or unconsciously (tweets, etc.) — and partly about hand curation (JetSetter, etc.). We are going to increasingly see nichey services that sell curation as a primary feature, with the primary advantage of being mostly unsullied by content farms, SEO spam, and nonsensical Q&A sites intended to create low-rent versions of Borges’ Library of Babel. The result will be a subset of curated sites that will re-seed a new generation of algorithmic search sites, and the cycle will continue, over and over.
In a series of posts (here, here, and here), I have expressed concern that Google directs its users to what I think is the “wrong” measure of unemployment. For example, if you search for “unemployment rate United States” today, it will tell you that the U.S. unemployment rate in August was 9.6%, when the actual figure is 9.7%.
This discrepancy arises because Google directs users to data that haven’t been adjusted for seasonal variations. Almost all discussions of the national economy, however, use data that have been seasonally-adjusted. Why? Because seasonally-adjusted data (usually) make it easier to figure out what’s actually happening in the economy. The unemployment rate always spikes up in January, for example, because retailers lay off their Christmas help. But that doesn’t mean that we should get concerned about the economy every January. Instead, we should ask how the January increase in the unemployment rate compares to a typical year. That’s what seasonal adjustment does.
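That intuition can be sketched in a few lines. This is a deliberately crude additive adjustment — official statistics use far more elaborate methods, such as the Census Bureau's X-12-ARIMA program — but it shows the basic idea: measure each month's typical deviation and subtract it from the series.

```python
# A minimal sketch of additive seasonal adjustment; real agencies use
# much more sophisticated methods (e.g., X-12-ARIMA).

def seasonally_adjust(rates):
    """rates: list of (month, value) pairs covering whole years.
    Returns the series with each month's average deviation removed."""
    overall = sum(v for _, v in rates) / len(rates)
    by_month = {}
    for m, v in rates:
        by_month.setdefault(m, []).append(v)
    # each month's seasonal factor: how far that month typically sits
    # above or below the overall average
    seasonal = {m: sum(vs) / len(vs) - overall for m, vs in by_month.items()}
    return [(m, round(v - seasonal[m], 2)) for m, v in rates]

# Two years of toy data in which January always spikes (think laid-off
# Christmas help): raw January reads 6.0 while every other month reads 5.0.
data = [(m, 6.0 if m == 1 else 5.0) for _ in range(2) for m in range(1, 13)]
adjusted = seasonally_adjust(data)
```

After adjustment the January spike disappears — every month sits at the same level — which is exactly the point: a routine seasonal jump should not look like economic news.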
My concern about Google’s approach is that many (if not most) data users know nothing about seasonal adjustment. They simply want to know what the unemployment rate is and how it has changed over time. Directing those users to the non-seasonally-adjusted data thus seems like a form of search malpractice.
I’ve wondered why Google has chosen this approach, and thus was thrilled when reader Jonathan Biggar provided the answer in a recent comment. Jonathan writes:
Continue reading “Insight on Google and Unemployment”
A strange thing happened last week: Google misplaced my blog.
I’ve run all the usual diagnostics, and I can confirm that Google still knows that my blog exists. But it no longer appears in any of the searches – e.g., “natural gas price”, “unemployment”, “budget deficit”, or “brooke boemio” – that used to help new readers find posts on my site.
Things are so bad, in fact, that my blog doesn’t even come up when you search for “donald marron”. I feel an existential crisis coming on.
I presume this is just the result of some obscure algorithm tweak and that, over time, my posts will reappear in the ranks of the Google-worthy. But it’s fun to imagine that Google is mad at me for my posts criticizing the way it reports unemployment data.
I just checked and, no surprise, Google is still reporting the wrong data. If you search for “unemployment rate”, Google will tell you that the U.S. unemployment rate was 9.6% in August, when in fact it was 9.7%. Why the difference? Because Google is reporting an obscure measure of unemployment, not the one used by 99% of the world.
Everyone who follows the U.S. economy closely knows that the unemployment rate was 9.4% in July, down 0.1 percentage point from June.
Everyone, that is, except Google.
If you ask Google (by searching for “unemployment rate United States”), it will tell you the unemployment rate in July was 9.7%.
What’s going on? Well, it turns out that Google is directing users to the wrong data series. As I discussed last month, almost everyone who talks about unemployment is using (whether they know it or not) data that have been adjusted to remove known seasonal patterns in hiring and layoffs (e.g., many school teachers become unemployed in June and reemployed in August or September). Adjusting for such seasonal patterns is standard protocol because it makes it easier for data users to extract signals from the noisy movements in data over time.
For unknown reasons, Google has chosen not to direct users to these data. Instead, Google reports data that haven’t been seasonally adjusted and thus do not match what most of the world is using.
This is troubling, since I have high hopes for Google’s vision of bringing the power of search to data sets. The ability of users to find and access data lags far behind their ability to find and access text. I am hopeful that Google will solve part of this problem.
But data search is not about mindlessly pointing users to data series. You need to make sure that users get directed to the right data series. So far, Google is failing on that front, at least with unemployment data.
P.S. As I discussed in a follow-up post last month, Wolfram Alpha has an even more ambitious vision for making data — and computation — available through search. I like many of the things Alpha is trying to do, but they are lagging behind Google in several ways. For example, as I write this, they haven’t updated the unemployment data yet to reflect the new July data. (Click here for Alpha results.)
Bing isn’t trying yet.
This morning’s headlines include some important follow-ups to recent posts:
Yesterday’s deal between Microsoft and Yahoo is a big boost for Bing. Microsoft’s new engine will power search on Yahoo, raising its visibility and, perhaps, eating into Google’s market leadership.
If the stock market is any guide, Microsoft is getting the better of the deal. As Techcrunch notes, Yahoo’s stock fell 12% on the day, lopping almost $3 billion off its market cap:
Microsoft, on the other hand, was up about 1.4% — boosting its market cap by about $3 billion.
The real question, of course, is how the deal will affect Google. GOOG was down about 0.8% (around $1 billion in market cap), a bit more than the decline in the Dow or the Nasdaq. That suggests that Google investors respect the MSFT-YHOO deal, but aren’t running scared just yet.
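As a quick sanity check on those figures: a dollar change and a percent move jointly imply a pre-move market cap, since the dollar change equals the cap times the percent move. Plugging in the post's numbers (which are themselves approximate) gives roughly the right 2009 valuations:

```python
# Back-of-envelope check: dollar_change = market_cap * pct_move,
# so implied_cap = dollar_change / pct_move. Inputs are the rough
# figures quoted in the post, so the implied caps are rough too.

def implied_market_cap(dollar_change, pct_move):
    return dollar_change / pct_move

yhoo = implied_market_cap(3e9, 0.12)    # Yahoo: ~$25 billion
msft = implied_market_cap(3e9, 0.014)   # Microsoft: ~$214 billion
goog = implied_market_cap(1e9, 0.008)   # Google: ~$125 billion
```

The three implied caps are mutually consistent with the percentages and dollar amounts reported, which is reassuring given that each figure was quoted separately.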
The logic of the deal seems impeccable. Yahoo is an also-ran in the search space, while Microsoft’s Bing is an exciting new entrant. Just how far Yahoo has trailed in search was driven home for me when I reviewed my posts about the search market (here is a list). Google gets the most attention in those posts, of course, but I also discussed competitors Bing, Wolfram Alpha, and Cuil. But it never occurred to me to mention Yahoo. That oversight is vindicated by today’s deal.
Personally, I am looking forward to having Bing on the Yahoo home page. I’ve spent far too much effort avoiding Yahoo’s search engine (e.g., by uninstalling the annoying Yahoo toolbar that various services foist on you when you get new software). Perhaps now I will have reason to let Yahoo take up a bit more valuable screen space.
Disclosure: I don’t own stock in any of these companies.
The August Wired has a nice article about the increased antitrust scrutiny that Google is facing. (Updated July 28, 2009: I would usually insert a link to the article, but I couldn’t find one online; sorry, but I am working from the dead-tree-and-ink version that the postman dropped off.)
Early on, the article notes some ironies of the current situation:
More than 15 years ago, federal regulators began making Microsoft the symbol of anticompetitive behavior in the tech industry. Now, a newly activist DOJ may try to do the same thing to Google.
It is an ironic position for the search giant to find itself in. [CEO Eric] Schmidt not only campaigned enthusiastically for the very Obama administration that appointed [DOJ antitrust chief Christine] Varney, but also was one of the most devoted opponents of Microsoft in the mid-’90s, eagerly helping the government build its case against the software firm.
A few weeks ago, I described some of the arguments that Google might use to defend itself. The Wired article elaborates on one of these: it’s fine for a company to be a monopoly if, as John Houseman used to say, they earn it. It then points to the other issues that may raise concerns:
Continue reading “Google and Antitrust”
I’ve received a number of helpful responses to my post about the strengths and weaknesses of Google’s efforts to transform data on the web. Reader DD, for example, reminded me that I ought to run the same test on Wolfram Alpha, which I briefly mentioned in my post on Google’s antitrust troubles.
Wolfram Alpha is devoting enormous resources to the problem of data and computation on the web. As described in a fascinating article in Technology Review, Wolfram’s vision is to curate all the world’s data. Not just find and link to it, but have a human think about how best to report it and how to connect it to relevant calculation and visualization techniques. In short:
[Wolfram] Alpha was meant to compute answers rather than list web pages. It would consist of three elements, honed by hand …: a constantly expanding collection of data sets, an elaborate calculator, and a natural-language interface for queries.
That is certainly a grand vision. Let’s see how it does if I run the same test “unemployment rate United States” I used for Google:
Continue reading “Wolfram Alpha, Unemployment, and the Future of Data”