Uncle Sam Is Smaller (Relatively) Than We Thought

At 8:30 this morning, Uncle Sam suddenly shrank.

Federal spending fell from 21.5 percent of gross domestic product to 20.8 percent, while taxes declined from 17.5 percent to 16.9 percent.

To be clear, the government is spending and collecting just as much as it did yesterday. But we now know that the U.S. economy is bigger than we thought. GDP totaled $16.2 trillion in 2012, for example, about $560 billion larger than the Bureau of Economic Analysis previously estimated. That 3.6 percent boost reflects the Bureau’s new accounting system, which now treats research and development and artistic creation as investments rather than immediate expenses.
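The arithmetic is easy to check. Here is a quick sketch in Python using the rounded figures above (so the ratios come out only approximately):

```python
# Quick check of the revision arithmetic, using the post's rounded figures.
new_gdp = 16.2e12            # revised 2012 GDP
old_gdp = new_gdp - 560e9    # prior estimate, about $15.6 trillion
spending = 0.215 * old_gdp   # federal spending was 21.5% of the old GDP

print(f"GDP revision: {new_gdp / old_gdp - 1:.1%}")                # -> 3.6%
print(f"Spending share of revised GDP: {spending / new_gdp:.1%}")  # -> 20.8%
```

Same dollars, bigger denominator: a 3.6 percent boost to GDP shaves about 0.7 percentage points off the spending ratio.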

In the days and months ahead, analysts will sort through these and other revisions (which stretch back to 1929) to see how they change our understanding of America’s economic history. But one effect is already clear: the federal budget is smaller, relative to the economy, than previously thought.


The public debt, for example, was on track to hit 75 percent of GDP at year’s end; that figure is now 72.5 percent. Taxes had averaged about 18 percent of GDP over the past four decades; now that figure is about 17.5 percent. Average spending similarly got marked down from 21 percent of GDP to about 20.5 percent.

These changes have no direct practical effect—federal programs and tax collections are percolating along just as before. But they will change how we talk about the federal budget.

Measured against an economy that is bigger than we thought, Uncle Sam now appears slightly smaller. Wonks need to update their budget talking points accordingly.

Getting the Most from Excel Charts

Over at The Why Axis, Jon Schwabish has a great show-and-tell on improving economic visuals in Excel. He starts with a chart of job openings data from the Bureau of Labor Statistics that was clearly made using Excel’s default settings:


Jon demonstrates many improvements. I particularly like this one:


To my eye, this chart is way better than the original, although it shares one big flaw: the horizontal spacing of the dots doesn’t match the timing. Jon then corrects that and adds more information (I might have stopped with just getting the spacing right):


Check out his original post for an easy guide to the how and why of these changes and a copy of his Excel file.

Turning Data Into Art

The art shows in Miami last weekend included several artists who turn data into art.

Norwood Viviano’s installation Cities: Departure and Deviation brings three-dimensional tangibility to the population history of 24 American cities. Here are the first dozen of his blown-glass pieces, Atlanta through Los Angeles:




Chicago, short and squat, is fifth from the left.

In Words and Years, Toril Johannessen brings a certain whimsy to data tracking word use in leading academic and popular publications. Here, for example, she documents the triumph of “Hope” over “Reality” in political science:


Purple America – The Best Election Maps

For all the talk of red states and blue states, much of America is really purple.

That simple observation has inspired some great alternatives to the standard red and blue maps depicting electoral outcomes.

Princeton’s Robert Vanderbei, for example, has created an animation that makes three improvements on the standard red/blue map: he maps counties, not just states; he uses shades of purple to reflect the mix of Democratic and Republican votes; and he uses green for third parties.

Here’s his animation for the 1960 to 2008 elections; keep an eye out for Ross Perot. (Vanderbei also has a static version of the 2012 results.)

Michigan’s Mark Newman also adopts the purple view, with another wrinkle. Traditional maps emphasize geographic area, not the location of electoral votes (or population). Using some fancy math, he resizes and reshapes states to reflect their relative electoral import. The result resembles a smooshed butterfly, with blue areas (mostly cities) amid a red web:

A Good Jobs Report

Today’s jobs data exceeded expectations. Payrolls expanded by 114,000 in September, roughly as forecast, but upward revisions to July and August added another 86,000 jobs, so the overall payroll picture is better than the headline.

The big news, though, is that the unemployment rate fell to 7.8%. That’s big economically and symbolically. Indeed, it’s so big that conspiracy-mongers are suggesting the BLS cooked the numbers to help President Obama get re-elected. Let there be no doubt: That’s utter nonsense.

Other numbers also indicate an improving job market: the labor force participation rate ticked up to 63.6%, the employment-to-population ratio rose 0.4 percentage points to 58.7%, and the average workweek increased by 0.1 hours. All remain far below healthy levels, but in September they moved in the right direction.

Despite the drop, unemployment and underemployment both remain very high. After peaking at 10% in October 2009, the unemployment rate has declined a bit more than 2 percentage points. The U-6 measure of underemployment, meanwhile, peaked at 17.2% and now stands at 14.7%:

As you may recall, the U-6 measure includes the officially unemployed, marginally attached workers, and those who are working part-time but want full-time work. One anomaly in the September data is that the unemployment rate fell from 8.1% to 7.8%, but the U-6 remained unchanged at 14.7%. Why? Because the number of workers who have part-time jobs but want full-time work spiked from 8.0 million to 8.6 million.
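A back-of-the-envelope sketch shows how the two rates can move differently. The labor force and marginally attached counts below are hypothetical round numbers chosen only for illustration; the unemployed and part-time counts approximate the figures above (all in millions):

```python
# Back-of-the-envelope U-3 vs. U-6, in millions of workers. Labor force
# and marginally attached counts are hypothetical round numbers;
# unemployed and part-time counts approximate the post's figures.
labor_force = 155.0
marginally_attached = 2.5

def u3(unemployed):
    # Official unemployment rate: unemployed / labor force.
    return unemployed / labor_force

def u6(unemployed, involuntary_part_time):
    # U-6 adds marginally attached workers and involuntary part-timers,
    # and counts the marginally attached in the denominator too.
    return (unemployed + marginally_attached + involuntary_part_time) / (
        labor_force + marginally_attached)

# August -> September: unemployment falls, involuntary part-time work rises.
print(f"U-3: {u3(12.5):.1%} -> {u3(12.1):.1%}")            # ~8.1% -> ~7.8%
print(f"U-6: {u6(12.5, 8.0):.1%} -> {u6(12.1, 8.6):.1%}")  # roughly flat near 14.7%
```

The 0.6 million jump in involuntary part-time work roughly offsets the 0.4 million drop in unemployment, leaving U-6 essentially flat even as U-3 falls.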

Niall Ferguson’s Mistake Makes the Case for Metadata

Harvard historian Niall Ferguson goofed on Bloomberg TV yesterday. Arguing that the 2009 stimulus had little effect, he said:

The point I made in the piece [his controversial cover story in Newsweek] was that the stimulus had a very short-term effect, which is very clear if you look, for example, at the federal employment numbers. There’s a huge spike in early 2010, and then it falls back down.  (This is slightly edited from the transcription by Invictus at The Big Picture.)

That spike did happen. But as every economic data jockey knows, it doesn’t reflect the stimulus; it’s temporary hiring of Census workers.

Ferguson ought to know that. He’s trying to position himself as an important economic commentator and that should require basic familiarity with key data.

But Ferguson is just the tip of the iceberg. For every prominent pundit, there are thousands of other people—students, business analysts, congressional staffers, and interested citizens—who use these data and sometimes make the same mistakes. I’m sure I do as well—it’s hard to know every relevant anomaly in the data. As I said in one of my first blog posts back in 2009:

Data rarely speak for themselves. There’s almost always some folklore, known to initiates, about how data should and should not be used. As the web transforms the availability and use of data, it’s essential that the folklore be democratized as much as the raw data themselves.

How would that democratization work? One approach would be to create metadata for key economic data series. Just as your camera attaches time, date, GPS coordinates, and who knows what else to each digital photograph you take, so could each economic data point be accompanied by a field identifying any special issues and providing a link for users who want more information.

When Niall Ferguson calls up a chart of federal employment statistics at his favorite data provider, such metadata would allow the provider to display something like this:


Clicking on or hovering over the “2” would then reveal text: “Federal employment boosted by temporary Census hiring; for more information see link.” And the stimulus mistake would be avoided.
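As a rough illustration of the idea, a flagged series might look something like this. The field names, employment values, and link are all hypothetical, not any agency's actual schema or figures:

```python
# Sketch of a data series carrying its own "folklore" as metadata.
# Field names, values, and the link are hypothetical, not a real schema.
federal_employment = [
    {"date": "2010-02", "value": 2820, "flags": []},
    {"date": "2010-05", "value": 3080, "flags": [
        {"note": "Level boosted by temporary decennial Census hiring",
         "link": "https://example.gov/census-hiring-note"},
    ]},
]

def chart_annotations(series):
    """Collect (date, note) pairs a charting tool could surface as footnotes."""
    return [(point["date"], flag["note"])
            for point in series
            for flag in point["flags"]]

print(chart_annotations(federal_employment))
# -> [('2010-05', 'Level boosted by temporary decennial Census hiring')]
```

Any charting tool that understood such a convention could render the flags automatically, without each user having to know the Census-hiring folklore in advance.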

I am, of course, skimming over a host of practical challenges. How do you decide which anomalies should be included in the metadata? Should a chart show a single flag for a given issue, even when the underlying data carry one for each affected datapoint?

And, perhaps most important, who should do this? It would be great if the statistical agencies could do it, so the information could filter out through the entire data-using community. But their budgets are already tight. Failing that, perhaps the fine folks at FRED could do it; they’ve certainly revolutionized access to the raw data. Or even Google, which already does something similar to highlight news stories on its stock price charts, but would need to create the underlying database of metadata.

Here’s hoping that someone will do it. Democratizing data folklore would reduce needless confusion about economic facts so we can focus on real economic challenges. And it just might remind me what happened to federal employment in early 2010.