In the wake of the Trump administration’s decision to exit the Paris climate accord, many on the left seem to be writing an obituary for the planet. “Trump to Planet: Drop Dead,” wrote CNN. And the Huffington Post. And the New York Daily News. USA Today, by contrast, went with the more nuanced: “President Trump to Planet Earth: Drop Dead.”
Yet anyone who was relying on the Paris agreement to save the world was bound to be disappointed sooner or later. A hint of this could be seen in the other dominant theme among critics—that withdrawing was stupid, since the agreement didn’t actually require America to do anything.
It’s true. Not only is participation in the Paris accord voluntary, but each country is charged with setting its own emissions-reduction goals, however stringently or laxly it wishes. Should a country later change its mind or simply not meet its own goals, there is no penalty. Unsurprisingly, even if every country meets its Paris commitments, emissions will not fall enough to avoid what scientists predict to be dangerous levels of warming over the coming century. And the results of past agreements suggest that many countries won’t meet their own goals anyway.
Why is the Paris agreement so weak? Because it has to be. An agreement that imposed severe and binding emissions reductions on member nations would find few takers. Just as St. Augustine famously prayed for God to “give me chastity, but not yet,” many nations are very much in favor of cutting their emissions, at some point in the future. And understandably so, given that the costs of reducing emissions are felt here and now, while the benefits will be felt only in the future and will be spread across the globe.
Even if a nation did sincerely agree to take costly action for the benefit of humanity, there would be no guarantee that the commitment would last over the many decades necessary to make it effective. China started the 20th century as a nominal empire and ended it ruled by a pro-capitalist Communist Party, with a rollercoaster of changes in between. While one hopes that China’s 21st century will be less eventful, it’s simply not possible for the current government of China to credibly commit itself to actions decades away. And the same goes double for democratic nations. The very existence of President Donald Trump shows that politics can be unpredictable.
These are not problems with a specific international agreement. They are inherent to the nature of climate change. This is why James Hansen, one of the scientists most responsible for bringing attention to the issue of climate change, has described international agreements such as Paris as “just worthless words. There is no action, just promises.”
However, the fact that international agreements are unlikely to save the planet from climate catastrophe does not mean all is lost. Consider the seemingly unrelated case of cellphones. In the course of only a couple of decades, cellphones have proliferated rapidly throughout the developed and developing world, to the point that there are now about as many cellphones as there are people. This didn’t happen because of an international agreement through which each nation pledged to promote cellphone use. It happened through the market.
Similarly, the United States has over the past decade reduced its greenhouse-gas emissions by more than almost any other country. It did this despite the fact that it had not joined the Kyoto Protocol, a previous international agreement meant to fight climate change. Instead of imposing costly measures to reduce emissions, the United States succeeded because of new technologies that were both cheaper and cleaner than existing sources.
This doesn’t mean there is no role for government in ensuring proper market incentives for energy. The R Street Institute (where I work) has long supported a revenue-neutral carbon tax, as well as funding for basic research to develop clean energy tech. But it does mean that the solutions to climate change are less likely to come from a global summit than they are from the next Bill Gates.
Josiah Neeley is energy policy director for the R Street Institute.
Sometimes it seems like everything is underwater. In Galveston, Texas, sea levels have risen by more than a foot since 1983. So-called “superstorms” like Hurricanes Sandy and Katrina have caused hundreds of billions in damage and lost economic output.
While the media are quick to pin the blame for these events on climate change—and some experts do believe, for example, that higher temperatures are providing “fuel” for hurricanes in the North Atlantic—both the Intergovernmental Panel on Climate Change and the National Oceanic and Atmospheric Administration have been reluctant to endorse such claims. But if that sounds reassuring, it shouldn’t.
Climate change is expected to increase storm intensity over the long term, and sea-level rise will leave coastal areas more vulnerable even to ordinary storms. And the IPCC did find an increase in damage from storms—concluding that “Economic growth, including greater concentrations of people and wealth in periled areas and rising insurance penetration, is the most important driver of increasing losses.”
In other words, storms are doing more damage not because they are more powerful but because more people are living in storm-prone areas. Fortunately, there’s something policymakers can do to reduce the damage: they can stop subsidizing population growth in high-risk areas.
About one-third of Americans—more than 100 million people—now live in low-lying coastal regions. Analysis by the Risky Business Project forecasts that between $48.2 billion and $68.7 billion worth of existing coastal property in the Southeast alone will be below sea level by 2050. Parts of Louisiana are expected to be at least 4.3 feet below sea level by the end of the century.
It may seem odd for people to be moving into these vulnerable areas. But one major explanation for this trend is simple: government is paying for it.
The National Flood Insurance Program, for example, offers policies to people in flood-prone areas. The NFIP was created in 1968 to address a gap in the private market—insurers weren’t confident that they could accurately assess the risks of flooding and were concerned about handling many claims at once when disaster struck. But even today, with much better modeling tools and sophisticated global markets that can spread risks far and wide, the NFIP still holds 5 million policies. This is because the NFIP charges rates so far below the levels that actuaries would recommend—in some areas, only 45 percent of the full level of risk—that private companies cannot compete.
State governments have also gotten in on the game. Every Gulf Coast state operates some form of state-backed insurance program for wind; Florida and Louisiana have programs to insure against other risks as well. In Texas, for example, residents of coastal counties can obtain wind coverage from the Texas Windstorm Insurance Association (TWIA), a state-created nonprofit. TWIA is meant to be an insurance provider of last resort, but its rates are substantially below what would be actuarially sound and do not vary geographically in the same ways that private insurance rates do. Since 2000, the number of TWIA policies has more than quintupled, from 50,000 to around 275,000.
People should be free to live where they want, provided they are willing to bear the risks of doing so. But masking the extent of that risk through artificially low insurance rates is, almost literally, a recipe for disaster.
It’s also unsustainable. When insurance programs charge rates below what’s needed to meet expected damages, they naturally find themselves unable to pay out claims. The NFIP, which is supposed to be self-funding, is more than $20 billion in debt to federal taxpayers. After Hurricane Ike hit the Texas coast, TWIA’s financial situation grew so precarious that the organization briefly considered bankruptcy. In Florida, potential liabilities could amount to $2.7 trillion, a sum so large that the state itself might not be able to cover it.
Part of the solution here is simple: the government should stop making things worse. Rates for programs like the NFIP and TWIA should be raised at least to market levels. Ideally, they would be a bit higher still, to ensure that people use those programs only when private insurance truly isn’t available.
But attempts to reform government disaster insurance have had mixed results. The biggest was the Biggert-Waters Flood Insurance Reform Act of 2012, which, as enacted, would have phased out many of the NFIP’s subsidized rates. When the first rate increases hit, however, homeowners in affected areas responded with anger. The resulting political backlash led Congress to make a quick about-face, and in 2014 many of the reforms were delayed or suspended. Rate increases were slowed or canceled altogether, and the NFIP’s fiscal solvency was instead propped up by adding surcharges to policies across the board, including many that were already priced near the cost of their risk. The result was to shift costs from riskier properties to those less at risk. While the new legislation temporarily patched up the NFIP’s fiscal situation, it did not deal with the underlying problem: the program encourages development in flood-prone areas.
The situation with TWIA has been a bit better. The organization’s financial situation has improved over the last few years owing to a combination of light storm activity and small but steady annual rate increases. TWIA has also started several voluntary “depopulation” programs that match current TWIA policyholders with private insurers willing to take over their policies. Yet TWIA remains one bad year away from fiscal distress, and state law empowers it to meet some shortfalls through mandatory contributions from private insurers operating in the state.
More promising has been the experience in Florida. Florida’s state-run Citizens Property Insurance Corp.—once the largest property insurer in the state and among the largest in the country—has shed 60 percent of its exposure and over a million policies thanks to an aggressive depopulation program. Ironically, the very direness of Florida’s situation may have made serious reforms more viable.
If financial considerations alone are not enough reason to reform government disaster insurance, the climate-change factor should add extra urgency. Government policies that encourage people to live in vulnerable areas aren’t just economically perverse, they are downright immoral.
Josiah Neeley is Texas director for the R Street Institute.
If politics makes for strange bedfellows, few issues make for more peculiar sleeping arrangements than distributed energy. There aren’t many others that put former Republican congressman (and son of “Mr. Conservative” himself) Barry Goldwater Jr. on the same side as the Sierra Club. But when it comes to efforts by electrical utilities to impose special fees on homes that use rooftop solar, Goldwater sees his opposition to the fees as the truly conservative stance.
“Technology is kind of replacing the old order,” Goldwater said recently. “There was a place for utilities and monopolies when they were first created, but the time has come for a change.” Viewing emerging technologies like solar as a means of adding competition to what has traditionally been a very uncompetitive sector, Goldwater formed TUSK—Tell Utilities Solar won’t be Killed—an organization that fights for policies friendly to distributed rooftop solar power.
Not all conservatives see things this way, however. Some conservative Republicans have found themselves in the unusual position of advocating stricter regulation and higher fees for distributed-energy generation. In Arizona, legislators recently tightened restrictions on solar leases, mandating more disclosure about lifetime cost to customers and giving homeowners a three-day grace period to get out of a solar contract after it’s been signed. The author of the legislation, State Senator Debbie Lesko, is a Republican whose campaign website stresses the importance of “limiting taxes and government regulation.”
The conflict scrambles political loyalties because it goes back to competing visions from the beginning of electrification. When Thomas Edison drew up the plans, the electrical grid was envisioned as a highly decentralized affair, with small power plants in each neighborhood. But with the advent of alternating current, it became possible for much larger plants to supply power over great distances. While the modern grid is a technological marvel, the politics of how it operates are less impressive: with a few exceptions—notably Texas—power generation is not open to competition. Instead, electrical generation is controlled by monopoly utilities and subject to stringent regulation on everything from the prices that can be charged to the mix of fuel sources that can be used.
But the allure of a decentralized power system never completely went away, and the rise of environmental concerns has led to widespread public support for cleaner forms of energy, which are often—as in the case of solar panels—more decentralized and distributed. For decades, a small proportion of consumers have opted for some form of distributed power. (When I was growing up in the 1980s, our semi-rural home used solar panels to heat water from our well.) Nevertheless, the growth of distributed energy has been limited by high costs, which have to be paid upfront, as well as by problems of intermittency.
That may now be changing. Since 2009, prices for solar power have fallen 70 percent. These declining costs, along with a healthy helping of government subsidies, have led to a rapid increase in solar deployment. And in April, Tesla Motors announced its new Powerwall battery, which can store excess electricity generated by solar during the day for use during less sunny periods.
Since utilities also can use storage technologies, low-cost storage needn’t necessarily privilege distributed generation over grid-provided electricity, and the combined cost of solar generation and storage is still above that of grid electricity in most places. Still, these developments have not only made solar installation more attractive to many consumers, they also have everyone from utility executives to political activists to tech gurus reconsidering what the future of electricity might look like.
Innovative business models also are overcoming some of the traditional challenges to distributed generation. Many consumers are wary of paying the high upfront costs involved with buying and installing solar panels. In response, solar companies have marketed solar leases, where the company pays all upfront costs and maintains ownership of the panels, while charging a set monthly fee to the homeowner. The company then acts as a middleman, reselling the excess electricity the panels generate during peak periods.
Known as “third-party power purchase agreements,” these leases have helped spur a boom in solar-panel installations in some states. But like many new business models, they have also run into legal roadblocks. In Florida, whose sunny climate should make it a solar leader, only utility companies are legally allowed to sell electricity. To change this, Floridians for Solar Choice, a coalition of distributed-energy advocates—which includes the Christian Coalition, Florida Republican Liberty Caucus, Florida Libertarian Party, and the Tea Party Network—has qualified a ballot measure for 2016 that would allow third-party power purchase agreements.
They face competition from Consumers for Smart Solar—allied with Americans for Prosperity—which has submitted its own ballot measure for 2016 that uses similar-sounding language but would in fact codify the existing practice whereby homeowners can only sell excess power back to the utilities themselves. Major Florida utilities Florida Power & Light, Tampa Electric, Gulf Power and Duke Energy have contributed nearly $2 million to Consumers for Smart Solar.
The biggest battles over distributed generation involve “net metering.” Required in some form in 44 states plus the District of Columbia, net metering mandates that utilities must purchase the excess electricity a homeowner generates and credit the purchase against the homeowner’s power bill, often at the full retail rate of electricity. But net metering schemes tend not to acknowledge that the price of electricity reflects not only the cost of generating the power but also the cost of building and maintaining the grid. Reimbursing homeowners at the full retail rate of electricity acts, effectively, as a subsidy.
Net metering tends to be popular across the political spectrum: in a recent poll by the ClearPath Foundation, 87 percent of self-described conservative Republicans supported policies that let them sell rooftop-generated solar power back to utilities.
As long as distributed generation remained a bit player in the electrical market, utilities regarded the costs of net metering as just an annoyance. But as the proportion of distributed generation grows and the cost of grid maintenance is borne by a smaller and smaller base of customers without it, the matter becomes more serious. The situation is roughly analogous to electric cars not paying for road repairs via the gas tax. A few electric cars are fine, but what happens when most cars on the road aren’t paying?
Utilities aren’t eager to find out, and so they have been pushing for limits on net metering, including special monthly fees on net-metered homes. Last year the Salt River Project, an Arizona utility, imposed a monthly fee of up to $50 on homes that utilize net metering, effectively wiping out any monetary gain that comes from selling back unused electricity. While the rule has been challenged in court, Arizona Public Service (APS), the state’s largest electricity provider, has pushed for similar fees. Many other states are also considering imposing fees, capping the total amount of distributed generation eligible for net metering, or doing away with the program altogether.
Utilities aren’t totally opposed to solar power: they increasingly like it, so long as they own it. At the same time APS was pushing for fees on homes that used solar power, it announced its own pilot rooftop solar program, which would see the utility install systems providing up to seven kilowatts of power and give participating customers a $30 monthly credit for 20 years. A similar program went into effect earlier this year in San Antonio. As with third-party power-purchase agreements, the utility would maintain ultimate ownership of the panels and the energy they produce.
Given the arguments typically deployed by utilities against solar, it may seem strange they would embrace the same model. Ultimately, what scares utilities most about distributed generation is the danger it poses to their monopoly over electricity generation. Shielded from competition for more than a century, utilities regard any technology that moves the grid back toward a more open and decentralized system as an existential threat. For the same reason, believers in the free market ought to look at the growth of distributed generation as an opportunity to move toward a more competitive generation system.
This isn’t to say that government ought to put its finger on the scales in favor of distributed solar, or any other energy source. State and federal subsidies and mandates for renewable energy ought to be scrapped, and net metering should be replaced with a sell-back system that can be scaled up without creating instability for the grid. But neither should we deny new technologies the potential to bring some much-needed creative destruction to our out-of-date energy regulations.
Josiah Neeley is senior fellow and Texas director for the R Street Institute.
You know the campaign season has started when people start talking about ethanol. Various GOP hopefuls have affirmed or reaffirmed their support for existing ethanol mandates, and while there doesn’t seem to be much of a race on the Democratic side at the moment, any candidates who do decide to run against Hillary Clinton will probably support the mandate as well. So it’s worth taking a moment to reflect on just how truly awful a program the ethanol mandate is.
First, a note of clarification. It used to be that the federal government subsidized the production of ethanol the old-fashioned way: with cash. For every gallon of ethanol blended into gasoline, blenders received a tax credit of roughly half a dollar. Foreign imports of ethanol were also subject to a 54-cent tariff. Both of these programs were allowed to lapse at the end of 2011.
Still on the books, however, is the federal Renewable Fuel Standard, or RFS. Like the ethanol tax credit, the RFS came as a result of the Energy Policy Act of 2005 (the Energy Policy Act also extended daylight saving time, so it has a lot to answer for). The RFS mandates a minimum number of gallons of different types of ethanol that must be blended into U.S. gasoline each year. The minimum amount is set to rise over time, and blenders are subject to fines if they do not comply. From the point of view of ethanol producers, tax credits are nice but it’s the mandate that brings home the bacon.
The peculiar nature of the RFS has led to some absurd unintended consequences. For example, cellulosic ethanol (which is made from inedible plant material such as grasses) isn’t commercially available in the quantities necessary to meet its RFS target. Blenders have therefore wound up getting fined for not using a product that doesn’t exist.
Similarly, because the formula used to set the RFS greatly overestimated the number of miles Americans would drive, blenders were required to use more ethanol than could be safely blended into all the gasoline used in American cars. The EPA was eventually forced to walk back its own regulations on cellulosic ethanol to better conform with reality, but it successfully resisted challenges to lower the overall mandate.
The problems with the ethanol mandate, however, go beyond poor legislative drafting. It is the very idea of the RFS, not just its implementation, that is fatally flawed. The ethanol mandate was billed as an environmentally friendly alternative to gasoline, and a way to wean ourselves off dependence on foreign oil. It is neither. The ethanol mandate has led to the conversion of millions of acres of grassland and wetlands into cornfields. The environmental costs from these conversions swamp any reduced emissions from using ethanol-diluted gasoline. And while oil imports have indeed fallen in recent years, this has been the result of the boom in unconventional oil and gas production, rather than the substitution of biofuels.
Ethanol is also, of course, costly to consumers. A gallon of ethanol costs more than a gallon of gasoline, and it takes 1.5 gallons of ethanol to get the same mileage as a gallon of regular gas. Ethanol costs motorists approximately $10 billion a year in increased fuel costs.
And the cost of ethanol isn’t just felt at the pump. With 40 percent of America’s corn crop being devoted to producing the fuel, the mandate increases prices for food. And since the mandate encourages farmers to grow corn instead of other crops, the effect isn’t just limited to corn.
That’s harmful not only to Americans’ pocketbooks, but also to our national security. Research suggests that higher food prices increase the risk of instability around the world, as developing countries face riots or even revolutions over the cost of food. The Arab Spring was preceded by a spike in food prices, as was a similar wave of riots in 2008.
The worst thing about the ethanol mandate, though, is what it says about our democracy. Ethanol is a rare political issue that is not polarized along ideological lines. From left to right, everyone seems to acknowledge that it is horrible policy. And yet not only does the policy continue, but people seem resigned to its continued existence.
If the United States can’t get rid of a policy that is so manifestly unjustified, how can we expect to solve more contentious issues?
Josiah Neeley is Texas director for the R Street Institute.
Texas has a well-deserved reputation for being a freedom-loving state. We like our regulations same as we like our steaks: rare.
That, at any rate, is the stereotype, and there is a lot of truth to it. But while the Lone Star State adheres to a hands-off approach to governing in many areas, it ain’t always so.
Consider ridesharing. Using smartphone apps to connect drivers with passengers, companies like Uber and Lyft have grown rapidly over the last few years in many big cities by offering a cheaper, higher quality alternative to traditional taxis. Many drivers work part-time, setting their own hours and making good money doing so.
Yet on Thursday, the San Antonio City Council is set to vote on an ordinance that would impose so many restrictions on ridesharing companies as to effectively ban the practice. Under the new regulations, before Uber or Lyft drivers are allowed to pick up fares, they must submit to a background check, fingerprinting, a driving test, a defensive driving course, a physical exam, an eyesight test, a drug test, written and verbal tests of English proficiency, and random vehicle inspections. Given that the individuals in question already have driver’s licenses, many of these regulatory hoops are redundant or even pointless.
As if that wasn’t enough, the San Antonio ordinance piles on insurance requirements that far exceed any reasonable justification. Ridesharing drivers are required to have $1 million in comprehensive coverage from the moment they accept a fare. By contrast, a San Antonio taxi need only have the state minimum of $60,000 coverage per incident.
The San Antonio ordinance might not pass. Then again, it might. Last month, a Houston ordinance went into effect requiring a 40-step process for driver registration. Though much less onerous than the proposed San Antonio ordinance, the law was sufficiently burdensome that Lyft announced it was suspending operations in the city. Uber has likewise indicated that it will have to pull out of San Antonio if this ordinance is adopted.
San Antonio and, especially, Houston are relatively conservative for big cities. Republican Greg Abbott won both in his recent triumph over Democrat Wendy Davis in the governor’s race.
Yet even before these recent ordinances, these cities ranked near the bottom of the class on ridesharing regulation. Last month, the R Street Institute released a scorecard ranking America’s 50 largest cities in terms of how they regulate taxis, limos and other vehicle-for-hire companies. San Antonio received a grade of D- and ranked 47th out of 50, behind every other Texas city. Houston was only slightly better, with a grade of C- and a rank of 40. By contrast, the American city with the most “hands off” approach to ridesharing was Washington, D.C. And Austin, which is known for being the more liberal of Texas’ major cities, also passed a comprehensive ordinance legalizing ridesharing.
Now Portland, which is as liberal as they come, is seeking a court injunction barring Uber from operating in the city. So the question isn’t whether liberal cities are better on ridesharing; it’s why the most conservative, free-market cities aren’t uniformly good on the issue.
Why are Texas cities lagging when it comes to ridesharing? Demographics and entrenched interests are two important factors.
In denser cities that rely more heavily on public transport, the benefits of ridesharing are more immediately apparent. Some cities have also been dealing with the issue longer than others, which helps public officials and the public at large gain a better understanding of the value alternative vehicle-for-hire companies can provide.
How a city regulates traditional taxi and limo services is also important. When a city caps its taxi fleet, it can create a powerful lobby against more competition, whether from traditional or alternative sources. By contrast, where entry into the taxi market is already relatively easy, there may be less resistance to new transportation models.
Whatever the reason, though, this trend cannot continue. Texas continues to urbanize, and its success depends on being able to continue to attract people to live and work in vibrant communities. Cities like San Antonio can’t afford to be seen as backward when it comes to technology and transportation.
Josiah Neeley is Texas director for the R Street Institute.
Most have greeted the new sci-fi action movie “Pacific Rim” as mindless entertainment, and it certainly is that. But the movie is about much more than computer-generated action sequences and campy dialogue. In fact, it’s an allegory about the effects of globalization on manufacturing employment.
First, some important spoilers. “Pacific Rim” is a film about giant robots fighting giant sea monsters. For reasons that are initially unclear, these lizard-like creatures begin to emerge from an inter-dimensional breach deep in the Pacific Ocean, whereupon they attack various port cities. Emerging from the heart of the region most closely associated with globalization anxiety, these monsters represent the forces of creative destruction unleashed: they are unthinking, mysterious, and utterly disruptive.
Today there is growing anxiety about globalization and what it means for many individuals. The ratio of global imports to world GDP rose from 14 percent in 1970 to just under 30 percent in 2008. Over roughly the same period, American manufacturing employment as a percentage of total employment steadily fell from 26.5 percent to 9.25 percent. Even in absolute terms, manufacturing employment has fallen by more than two million since 2000.
While some of this decline is no doubt due to increases in the productivity of American manufacturing, the recent events in Detroit illustrate the fraught consequences of increased global competition. It’s only natural that these anxieties—like the anxieties of previous times and places—should find expression in seemingly unrelated works of popular culture.
When traditional military forces prove less than adequate against the rising tide of monsters, nations naturally respond by building 250-foot-tall robots, controlled by a pair of pilots using a kind of next-generation Wii system. As the film explicitly notes, these robots were initially developed using DARPA funding, and they represent a kind of industrial policy, each nation deploying its own robot champions. There is a Russian robot team, a Chinese team, an Australian team, and of course an American one, each protecting its home country.
But while the robots are initially successful, the monsters keep growing and invading at an ever-faster pace, overwhelming the efforts of the local industries. In response, the world’s leaders decide to abandon their industrial robot program in favor of literally building giant walls around all of their ports. It is explicitly mentioned that this has cut off trade and forced rationing and other hardships on the population—though it does seem to create a fair number of short-term blue-collar jobs actually building the wall. The one city that doesn’t succumb to protectionism is Hong Kong (which happens to be an oft-cited example of free-trade success in real life), where the remaining robots all relocate.