On January 1, Washington DC introduced a 5-cent tax on disposable shopping bags at grocery, drug, convenience, and liquor stores. The fee had two goals: to reduce the number of bags, in particular plastic ones, that end up blighting the landscape and to raise funds for cleaning up the Anacostia River.
The fee appears to be succeeding on both counts, but not equally so. As Sara Murray and Sudeep Reddy report over at the Wall Street Journal, shoppers have cut back on bag use more than anticipated; as a result, revenues are running below expectations:
[T]he city estimated that [bag use] would decline by 50% in the first year after the tax was imposed. … [A]n informal survey of corporate headquarters for grocery stores and pharmacies with dozens of locations in the city estimated a reduction of 60% or more in the number of bags handed out. … Through the end of July, the city collected more than $1.1 million from the bag fee and small donations. At that rate, receipts are likely to fall short of the expected $3.6 million in the first year.
I’ve witnessed the sharp decline in bag use during my daily lunch run. Last year, the Subway folks would automatically put your sandwich and a napkin in a plastic bag. Now they ask if you want one. I always decline, as do most other customers.
Why has there been such a strong reaction to a nickel fee? I think it’s a combination of two factors.
- The first is a traditional microeconomic explanation: there are often good substitutes for a disposable shopping bag. For example, I find it just as easy to carry the wrapped sandwich as to carry the old Subway bag. And if I buy some dental floss at CVS, I can just pop it in my pocket for the trip home. So even a relatively small fee can get results.
- The second is a behavioral explanation: people act weird when things are free–they acquire things without really thinking about it. If you start charging a price–and thus change the default from “here’s your bag” to “do you want a bag?”–you can witness large responses.
P.S. As noted in a previous post on the bag fee, Arthur Cecil Pigou is the father of environmental taxes.
In March 2009, Burlington, Vermont, used a non-traditional system of voting—Instant Runoff Voting—to select its mayor. The voters returned the incumbent, Progressive Bob Kiss, to the mayor’s office and, in so doing, set off a surprisingly fierce debate among advocates for voting reform. Some tout the Burlington results as a success for Instant Runoff Voting, while others cite them as evidence of its fundamental flaws.
In this post, I will try to settle one part of this debate: whether the Burlington results display a voting pathology known as non-monotonicity. That sounds geeky—ok, it is geeky—but it boils down to a simple question: could a candidate lose an election if voters showed more enthusiasm for him or, equally perversely, win an election if voters showed less enthusiasm?
Several readers asked me to weigh in on this debate after my previous post on alternative voting systems (check out the comments on that post if you want to get a flavor of the debate). I should state from the outset that I am not an expert on voting systems, but I am a card-carrying math and economics geek and enjoy mediating interesting debates, so I gave it my best shot. I reached three main conclusions:
- The Burlington results provide a fascinating case study in American voting for reasons that have nothing to do with non-monotonicity. In what was effectively a three-way race, Instant Runoff Voting (henceforth IRV) appears to have chosen a better winner than our usual system, plurality voting. That’s great news for IRV except for one thing: it failed to choose an even better winner. IRV thus appears to have elected the “wrong” candidate, but traditional voting would have elected an even “wronger” candidate. That weird result illustrates how challenging it can be to design a democratic voting system.
- The debate about non-monotonicity–which pales in importance next to the larger issues posed by the Burlington results–confuses technical semantics and electoral substance. The Burlington results do illustrate the possibility of non-monotonicity in real-world voting data, as IRV critics claim. But, as IRV proponents emphasize, that potential had no effect on the election outcome.
- The debate among voting reformers would be more fruitful if they adopted some new lingo to distinguish between the potential for non-monotonicity and its actual impact. Inspired by the world of accounting, my suggestion is to distinguish between material non-monotonicity, which affects an election outcome, and immaterial non-monotonicity, which does not. The Burlington election results display the immaterial variety. (Suggestions welcome for better ways of saying this.)
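To make non-monotonicity concrete, here is a minimal Python sketch of Instant Runoff Voting. The ballot counts are hypothetical, chosen only to exhibit the pathology (they are not Burlington’s actual totals): when two voters raise candidate X from second place to first on their ballots, X goes from winning to losing.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant Runoff Voting: repeatedly eliminate the candidate with the
    fewest first-choice votes, transferring those ballots to each voter's
    next surviving choice, until someone holds a majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:   # first surviving choice on the ballot
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        candidates.discard(min(candidates, key=lambda c: tally[c]))

# Election 1: 17 hypothetical voters, three candidates.
ballots_a = ([("X", "Y", "Z")] * 6 + [("Y", "X", "Z")] * 2 +
             [("Y", "Z", "X")] * 4 + [("Z", "X", "Y")] * 5)

# Election 2: identical, except two Y>X>Z voters now rank X first,
# i.e. X gains support and nothing else changes.
ballots_b = ([("X", "Y", "Z")] * 8 +
             [("Y", "Z", "X")] * 4 + [("Z", "X", "Y")] * 5)

print(irv_winner(ballots_a))   # X (Z eliminated first; Z's voters break for X)
print(irv_winner(ballots_b))   # Z (Y eliminated first; Y's voters break for Z)
```

Extra first-place support for X changes which rival is eliminated in the first round, and the transferred ballots then defeat X. That elimination-order sensitivity is exactly the monotonicity failure the two camps are arguing about.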
For further details in a handy-if-lengthy Q&A format, keep on reading.
It looks like 2011 will be another year without a cost-of-living adjustment (COLA) for Social Security recipients. Why? Because consumer prices haven’t yet returned to the peak they reached in the third quarter of 2008, when the 2009 COLA was set.
Beneficiaries received a healthy 5.8% boost in their payments in 2009, which made sense after the sharp run-up in energy prices in 2008. But then energy prices collapsed. The inflation rate used to calculate the COLA was negative from 2008 to 2009. The cold logic of cost-of-living adjustments would thus have implied a reduction in Social Security benefits in 2010. For understandable reasons, however, Social Security doesn’t allow negative COLAs. So benefits remained flat, and 2010 went into the record books as the year without a COLA.
The same thing will happen in 2011. Consumer prices have increased since the third quarter of 2009, but as of the August CPI report, they still fall far short of the peak reached back in 2008. Barring a miraculous surge in inflation in September, that means that 2011 will be the second year without a COLA.
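The unCOLA logic can be sketched as a simple function. This is a simplified model, not the Social Security Administration’s exact procedure, and the index values in the example are illustrative round numbers rather than official CPI-W figures: the adjustment compares the current third-quarter price index to the last peak that triggered a COLA, and when the index sits below that peak the COLA is zero and the old peak remains the benchmark.

```python
def cola(q3_index, benchmark):
    """Simplified Social Security COLA rule: benefits rise only when the
    third-quarter price index exceeds the benchmark (the last peak that
    produced a COLA). A lower index yields a 0% adjustment, never a cut,
    and the old benchmark stays in place for future comparisons."""
    if q3_index > benchmark:
        pct = round((q3_index - benchmark) / benchmark * 100, 1)
        return pct, q3_index      # positive COLA; index becomes the new benchmark
    return 0.0, benchmark         # no COLA; benchmark unchanged

benchmark = 215.5                 # hypothetical 2008 Q3 peak
print(cola(211.0, benchmark))     # (0.0, 215.5): prices fell, but no negative COLA
print(cola(214.1, benchmark))     # (0.0, 215.5): recovering, yet still below the peak
print(cola(223.2, benchmark))     # (3.6, 223.2): a COLA returns once the peak is passed
```

The second call captures the 2011 situation: prices rising year over year, yet no COLA, because the comparison runs against the 2008 peak rather than last year’s level.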
The Social Security Administration will make its official no-COLA announcement on October 15, just a few weeks before the mid-term elections. If last year is any guide, that announcement will set off a flurry of debate about whether Social Security recipients should receive a special benefit adjustment above that implied by the COLA formula (or, in this case, the unCOLA formula) and whether such special payments might be desirable as a form of economic stimulus.
If you are interested in all the facts surrounding the COLA calculation, the incomparable Calculated Risk has a wonderfully detailed analysis.
Future generations will remember September 15, 2008, as the day that Lehman died. But the art world has another memory of that fateful day: the opening of a London auction of works by artist Damien Hirst. Over a period of two days, Sotheby’s rapped the gavel on almost $200 million of his new works, marking the high point of the contemporary art bubble that accompanied all the other asset bubbles.
Not familiar with Damien Hirst? He’s probably most famous for a 14-foot tiger shark preserved in formaldehyde (titled “The Physical Impossibility of Death in the Mind of Someone Living”), spot paintings, and an as-yet-unsold skull covered with diamonds (“For the Love of God”).
The Economist marks this anniversary with an enjoyable retrospective on the auction, which was notable not only for the amount of money that changed hands but also because it was a very rare example of an artist using an auction in the primary market. Artists usually sell through galleries, where dealers try to place new works in the hands of “worthy” buyers. But Hirst decided to take his new work directly to the auction market, with stunning, if transient, success.
The Economist suggests that Hirst was frustrated with the traditional model, in which initial buyers sometimes flipped pieces at a profit in the auction market after buying from a gallery. Hirst thus set himself a mission, saying “The first time you sell something is when it should cost the most” and “I’ve definitely had the goal to make the primary market more expensive.” And he certainly succeeded, albeit with a little help from the credit market.
Other interesting economic tidbits about Hirst’s work include his exceptional reliance on assistants to execute the pieces (he clearly understands the idea of the division of labor) and the uncertainty about just how many works he (and his team) have created over the years.
It’s a quiet weekend, so please forgive one more item from my recent sojourn in southeast Alaska. If you are a regular watcher of nature documentaries, you know that Alaska’s humpbacks employ a unique feeding technique called bubble-netting. A group of whales will corral herring in a wall of bubbles, push them to the surface, and then engulf them.
We had the pleasure of watching a pod of whales on their morning round of bubble-netting. During one surface cruise between dives, the whales made a sudden turn and came right over to our skiff. The resulting video (ht Esther) has a certain Cloverfield / Blair Witch feel to it:
Esther also shot a video in which you can listen to the alpha female as she sings to coordinate her bubble-netting team.
More Alaska adventures here.
Over at the Tax Policy Center, we just unveiled a nifty new tool for understanding how the ongoing tax debate might affect real households. The Tax Calculator allows taxpayers, analysts, and the media to analyze how much an individual or family would pay in taxes under three scenarios:
- 2010 law, in which the 2001-2003 tax cuts are all in effect;
- The law scheduled to take effect in 2011, in which essentially all of the 2001-2003 tax cuts have expired; and
- The proposals that would take effect in 2011 under President Obama’s Budget. The budget includes numerous features, of which the most prominent is that the tax cuts would continue for incomes up to $200,000 (individual) or $250,000 (joint), but almost all of the tax cuts at higher income levels would expire.
Users can create their own taxpayer profiles or can select from any of six sample households.
If you are interested, please try it out.
In previous posts (most recent here), I noted that oil and natural gas prices have disconnected from their usual historical relationship. For many years, oil prices (as measured in $ per barrel) tended to be 6 to 12 times natural gas prices (as measured in $ per MMBtu). That ratio blew out to more than 20 in late 2009, briefly receded toward more traditional levels, and then expanded again. At Tuesday’s close, the ratio stood at 19.4, far above its historical range:
(Note: A barrel of oil has roughly 6 times the energy content of an MMBtu of natural gas. If the fuels were perfect substitutes, oil prices would thus tend to be about 6 times natural gas prices. In practice, however, the ease of using oil to make gasoline makes oil more valuable. As a result, oil has usually traded higher.)
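The arithmetic behind the ratio is simple enough to sketch in a couple of lines of Python. The prices below are illustrative round numbers, not actual market closes; they show how sub-$4 gas against oil in the mid-$70s produces a ratio near 19, versus the 6 that pure energy parity would imply.

```python
def oil_gas_ratio(oil_per_bbl, gas_per_mmbtu):
    """Ratio of the oil price ($ per barrel) to the natural gas price
    ($ per MMBtu). With a barrel holding roughly 6 MMBtu of energy,
    perfect substitutability would keep this ratio near 6."""
    return oil_per_bbl / gas_per_mmbtu

# Illustrative prices, not actual quotes:
print(round(oil_gas_ratio(76.0, 3.92), 1))   # 19.4 (far above the 6-to-12 historical band)
print(round(oil_gas_ratio(72.0, 12.0), 1))   # 6.0 (pure energy-parity pricing)
```

At current gas prices, in other words, oil would have to fall by roughly two-thirds, or gas would have to triple, just to return the ratio to the top of its historical range.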
The unusual pricing of the last two years reflects two factors. First, there has been a dramatic–and welcome–expansion in domestic natural gas supplies. That’s driven natural gas prices down to less than $4 per MMBtu at yesterday’s close. Second, there is limited opportunity for energy users–utilities, businesses, and homeowners–to switch from oil to natural gas. Years ago, such switching linked oil and natural gas prices relatively closely. But today those prices appear largely decoupled.
All of which poses an important question for investors, forecasters, and industry planners: Will historical relationships eventually reassert themselves, perhaps by longer-term fuel switching by utilities and transportation fleets to natural gas? Or is this time really different, with old pricing relationships no longer relevant?
One way to answer that question–or, at least, to get some insight into how others are answering it–is to look at futures prices. As illustrated in dark blue above, those prices imply that the ratio of oil to natural gas prices will remain well above historical levels for at least the next eight years. The new normal, according to futures markets, will be for oil prices to average about 15 times natural gas prices.