Smoke in Our Eyes: National Park Grandeur Degraded by Global Warming
My window into how global warming can ruin a rite of summer opened 16 years ago. I was flat on my back on blankets, under the stars in the middle of the night at Glacier Point in Yosemite National Park. At 7,214 feet and more than a half mile above Yosemite Valley, this was a perfect place to watch the August Perseid meteor shower sizzle overhead.
For a precious hour or so, zips of light etched the skies, punctuated by periodic long trails and fireballs. Then, shortly after 3 a.m., a milky film slid across the sky as if a window shade were being closed sideways. My eyes fought to peer through the thickening mystery until even the brightest bursts of light were obliterated. Because the weather forecast called for clear skies, I kept hoping it was rogue cloud cover. I stuck it out until dawn.
It was a daylight I had never seen or smelled. Glacier Point is famous for its spectacular vantage point looking straight across a gulf to 8,839-foot Half Dome, a view made famous by photographer Ansel Adams. Unlike a typical sunrise that quickly brightens from gold to blinding white, the sun rose behind Half Dome as a pinkish-orange pinhole and bizarrely remained a tiny, reddish-orange pea in the sky more than an hour later.
The culprit was smoke from the 2007 wildfire season in the United States, then the second worst on record for acreage burned. It was an apocalyptic encore of the worst-ever 2006 season, which saw 9.9 million acres burn. The smoke draped so thick over Half Dome that its chiseled granite features were lost in silhouette. The odor of burnt wood saturated my nose.
After a couple of hours, I drove down toward Yosemite Valley. Before I got there, I stopped at another view Adams made famous: the sight of the valley from the Wawona Tunnel. On clear days, the sight of 7,500-foot El Capitan on the left, Bridalveil Fall plunging 600 feet on the right, Half Dome in the middle, and the conifer forest below can bring tears to the eyes.
On this morning, the view was obscured by a golden dust bowl as smoke settled in the valley.
Half Dome nearly obscured by wildfire smoke. Photo is the author’s own.
Smoky norms
This was just a prelude. In 2015, then again in 2020, the number of acres burned by wildfires crossed the 10-million-acre mark. The view of Yosemite Valley has repeatedly been obscured by wildfire smoke, including in 2018, 2020, and last year, when fires forced park closures and evacuations. It is a reason why Yosemite is a poster park of global warming, beset in recent years not just by drought, blistering heat waves, and wildfires, but also by epic blizzards and snowmelt floods. Parts of the park were closed for several days this spring over fears of flooding.
All those extreme events are made more likely by global warming.
Beth Pratt, the California regional executive director for the National Wildlife Federation, told the Guardian newspaper in April, “Yosemite is ground zero for climate change.” A Los Angeles Times feature seconded that, reporting that climate change “has been one of the park’s biggest challenges in recent years, undermining the concept of Yosemite as a refuge where nature prevails unaffected by man-made forces.”
An unsettling haze in Yosemite Valley. Photo is the author’s own.
It also undermines the concept of national parks as a refuge for families seeking nature. Wildfires in 2021 temporarily closed Sequoia National Park and a popular part of Big Bend National Park in Texas. Wildfires have led to multiple closures in recent years in and around the Grand Canyon.
Last year, record rainfalls triggered floods and mudslides and temporarily closed Yellowstone. Monsoon rains unleashed flash floods that buried cars in Death Valley and caused closures in Joshua Tree National Park and the Mojave National Preserve. Out east, flash floods from deluges caused landslides, partial closures, and evacuations in Great Smoky Mountains National Park.
This summer has just started, and wildfires have already led to closures in parks as far apart as Joshua Tree in California and Big Cypress National Preserve in Florida. Several roads in Sequoia and Kings Canyon National Parks remain closed because of severe damage from epic winter storms.
Parks heating faster than the nation as a whole
It raises the question of how often families must be smoked out, scorched out, or flooded out of parks before these treasures become a more prominent rallying cry to fight global warming. National parks are unique for their locations at high and low elevations and their delicate layers of ecosystems, which make them ripe for disproportionate impacts from climate change relative to the nation as a whole.
According to studies in 2018 and 2020 led by Patrick Gonzalez, who was the principal climate scientist for the National Park Service, a White House climate advisor, and a lead author on the Intergovernmental Panel on Climate Change, the mean annual temperature of the parks has increased at double the US rate since 1895.
Without major reductions in the carbon emissions fueling global warming, the impacts on the parks would be relentless. Even under a scenario of drastic emissions reductions, Gonzalez’s 2018 study found that more than half of national park area would exceed the 3.6-degree-Fahrenheit (2-degree-Celsius) limit set by the Paris Agreement to avoid catastrophic climate impacts—more than double the 22 percent of the US as a whole that would exceed that threshold. The 2020 study offered an exclamation point:
“Without emissions reductions, climate change could increase temperatures across the national parks, up to 9ºC (16ºF) by 2100 in parks in Alaska. This could melt all glaciers from Glacier National Park, raise sea level enough to inundate half of Everglades National Park, dissolve coral reefs in Virgin Islands National Park through ocean acidification, and damage many other natural and cultural resources.”
Since wildfires and their smoke are currently so prominent in the news, it is worth noting that Gonzalez’s 2020 study predicts that the frequency of wildfire could increase by up to 300 percent in Yosemite and the Sierra Nevada, and by up to 1,000 percent in Yellowstone and Grand Teton. In 2019, Gonzalez, despite being pressured by superiors during the Trump administration not to talk about “anthropogenic,” or human-caused, climate change, testified to Congress that, “Cutting carbon pollution would reduce human-caused climate change and help save our national parks for future generations.”
Crowds grow even as temperatures rise
So far, saving the parks has not been enough to inspire Congress toward mandatory cuts to carbon pollution. Nor has any amount of wildfire smoke or flood damage kept the crowds away. Park attendance has steadily increased over the years, to about 330 million a year before the pandemic and 311 million last year as leisure travel returned to close to normal. That is the equivalent of every person in the United States visiting a park during the year. Visitors now spend more than $20 billion a year in the gateway regions of the parks, supporting 323,000 local jobs.
Two studies, one last year in the journal Forest Policy and Economics and another last month in the journal Ecosphere, found that wildfire smoke thus far has not limited park visitation. Boise State University researcher Matthew Clark, lead author of last month’s study, said, “I have actually lived my data. I had driven six hours to go climbing in Yosemite. When we got there, we said, ‘Well, what are we going to do? We’re not going to turn around.’ We stayed anyway.”
Clark added, “I was kicking myself for days afterward. My lungs were burning.”
Paradoxically, global warming might entice even more people to come to the parks, putting pressure on a National Park Service that has long dealt with chronic underfunding of services and a $22 billion backlog of deferred maintenance. A 2015 study by the National Park Service estimated that the warmer temperatures associated with global warming will lengthen the shoulder seasons for visitation in most parks, as so many of them are either in historically colder mountainous regions or at more northerly latitudes.
Examples of that in the study are the popular and often packed Acadia in Maine, Grand Teton in Wyoming, and the Blue Ridge Parkway stretching from the North Carolina side of Great Smoky Mountains National Park to Shenandoah National Park in Virginia.
Conversely, a few parks, particularly in deserts, might become intolerable much of the year, such as Joshua Tree (which would lose all its Joshua trees under high-emissions scenarios), Arches in Utah, and Big Bend in southwest Texas.
Big temperatures in Big Bend
In April, my wife and I traveled to Big Bend. We were there to enjoy one of the nation’s birding hotspots. My wife saw more than 50 new species for her list in one week, with vermilion flycatchers and painted buntings hugging the watery areas along the Rio Grande. At night, I was treated to the darkest skies and clearest views of the Milky Way I have ever seen.
Most of our time there, the temperatures were comfortably warm. We had only one 100-degree day. That is changing rapidly for future visitors. Under current emissions rates, Climate Central estimates that Big Bend will see the largest temperature increase in the national park system by 2100. Its number of life-threatening, 100-plus-degree days will explode more than sixfold, from 17 days a year to 113. According to Backpacker magazine, Big Bend is already the nation’s third most dangerous park because of its heat.
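As a quick sanity check of that projection, here is a minimal arithmetic sketch in Python. The two input figures are the Climate Central estimates cited above; the variable names are ours.

```python
# Sanity check of the Big Bend projection cited above.
# Inputs are the Climate Central figures; names are illustrative.

days_now = 17    # current average days per year at or above 100 degrees F
days_2100 = 113  # projected days per year by 2100 under current emissions rates

multiplier = days_2100 / days_now
added_days = days_2100 - days_now

print(f"Increase: {multiplier:.1f}x ({added_days} more 100-plus-degree days per year)")
# Increase: 6.6x (96 more 100-plus-degree days per year)
```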
On June 23, a 14-year-old boy died while hiking in 119-degree heat, and his stepfather died when his car crashed over an embankment as he sped for help. That followed the deaths, this past March and in March 2022, of a 64-year-old woman and a 53-year-old woman who were hiking in temperatures approaching or topping 100 degrees.
Big Bend’s defining feature, the Rio Grande, is no longer a naturally flowing river because of dams and irrigation diversions for agriculture. It so routinely runs dry that the National Park Service says it is not managed to maintain a flow sufficient to sustain riverbank habitat. Half of the 27 fish species historically native to the river in New Mexico are gone.
On our visit, the Rio Grande was so shallow that one day my wife and I easily waded across in knee-deep water into Boquillas, Mexico, for lunch. On another day, we felt lucky there was any water at all as we waded across a creek in the spectacular Santa Elena Canyon. Last year, the river there completely dried up, leaving the classic cracked patterns of a parched riverbed.
Michelle Holmes, the author’s wife, wades across the shallow Rio Grande to Mexico. Photo is the author’s own.
“There’s not a little bit of river that’s dedicated to keeping it a river, and keeping the living things of the river, not even healthy, but surviving,” Raymond Skiles told Marfa Public Radio in far west Texas last year. Skiles was Big Bend’s biologist for 30 years before retiring in 2018.
“This is happening in a national park,” he said. “This is happening in a component of the United States’ Wild and Scenic River system. I have to think, what world is this okay in, for the river just to stop because of human extraction and depletion of the river?”
Hope through the haze
That question can easily be extended to ask: in what world is it okay to allow our crown jewels of nature and landscape to be depleted by our extraction of fossil fuels?
Since that night 16 years ago when my view of the Perseids disappeared, wildfire smoke has become an almost normal part of forecasting whether the skies will allow people to see the meteors. A 2018 headline in the San Francisco Bay Area’s Mercury News asked: “Will smoke from California fires block this weekend’s Perseid meteor shower?”
Three years later, a headline in the Oregonian said, “Perseid meteor shower returns for 2021, wildfire smoke could dampen the show.” In Great Sand Dunes National Park in Colorado, volunteer astronomy ranger Bob Bohley told the New York Times in 2021 that the viewing conditions for much of the summer were “horrible” because of smoke.
“Some nights it’s been so thick that even the brightest stars were hard to make out,” Bohley said. “I would just point in the direction of a constellation and hope folks would see something.”
For decades we have pointed people in the direction of the national parks in the hopes that families would see flora, fauna, landscapes, seascapes, and night skies to peer into infinity. As great as his photography was, Ansel Adams said, “You don’t improve nature. You reveal your impression of nature or nature’s impact on you.”
Perhaps we cannot improve nature, but we can sure preserve it. And if there is anywhere in the nation that holds the promise to inspire unified action on climate change, it is the national parks. We flock to them, knowing we will be impressed and hoping to be inspired. The National Park Service is our most popular federal agency, with 81 percent of respondents in a Pew survey this spring saying they have a favorable view of it and 82 percent saying in a 2019 poll that the nation should use oil and gas leasing fees to pay for the deferred maintenance of the parks.
Such support led to the Great American Outdoors Act, overwhelmingly passed in 2020 by a usually bitterly divided Congress and signed by President Trump, despite his relentless assault on conservation and science and his gutting of environmental regulations that would have reduced the carbon pollution fueling global warming. The act authorized Congress to spend up to $6.65 billion over five years on deferred maintenance in national parks. The National Parks Conservation Association praised the act for providing “crucial funding” to start repairing the parks.
The association also warned that the funding, at $1.3 billion a year against a $22 billion backlog, should only be the beginning of proper protection, as it does not “account for unforeseen damage our parks will continue to deal with as a result of climate change like the devastating flooding at Yellowstone and raging wildfires at Yosemite.”
The night sky over Big Bend National Park. Photo is the author’s own.
To borrow from Bohley back at Great Sand Dunes, no one wants the parks to get to the point where the roads are impassable, the hikes too hot, and the stars impossible to make out. People all over the world, more than 300 million a year strong, say loud and clear that the national parks are our constellation, right here on earth.
#DangerSeason Unleashed: Killer Heat Threatens 75 Million in US South, No End in Sight Through Next Week
Today, climate change has broken a new Danger Season record: 76 million people in the US—or 23% of the total population—are currently under extreme weather alerts for heat, flooding, storms, or wildfire weather conditions. Almost all of those alerts—covering 75 million people—are for extreme heat across most of Texas, Oklahoma, Missouri, and Alabama, and all of Arkansas, Mississippi, and Louisiana. Several counties in California, Colorado, Florida, New Mexico, Illinois, Indiana, Kentucky, and Tennessee are also under extreme heat alerts. And notably, our tally does not include the millions of additional people across the Northeast and the Midwest who are experiencing poor air quality due to wildfire smoke from Canada.
Widespread heat alerts cover parts of the US South and Midwest regions on June 28, 2023. Source: dangerseason.ucsusa.org
The cumulative share of the US population that has faced at least one alert since the start of Danger Season on May 1 reached 61% today, up from 52% barely two weeks ago. And the footprint of climate change is all over this heat: as of June 28, 22% of the heat alerts issued since the start of Danger Season would not have been possible without climate change, according to Climate Central’s Climate Shift Index (CSI).
Today’s CSI shows that high temperatures are between one and three times more likely due to climate change, with stronger climate attribution (that is, higher CSI values) along the Gulf Coast.
High daily maximum temperatures on June 28, 2023 are attributable to climate change. Source: Climate Central’s Climate Shift Index (https://www.climatecentral.org/tools/climate-shift-index)
The current heat dome hovering over North America has been exposing millions of people in the US to extreme heat since June 23, 2023. Source: Union of Concerned Scientists
The extreme heat that has affected the Gulf Coast and Southeast this week and last is attributed to what’s known as a “heat dome.” This phenomenon occurs when hot air gets trapped in an area by a zone of high pressure in the atmosphere. The high-pressure system pushes air down toward Earth’s surface, which causes the air to heat up even more. In the case of the current heat dome, anomalously warm air coming from the Gulf of Mexico and direct heating from the sun are also contributing.
As climate change shifts global air and ocean temperatures warmer, the likelihood of experiencing extreme heat events like this one increases. By midcentury, with no action to address global heat-trapping emissions, much of the region experiencing triple-digit temperatures this week would see a tripling of the frequency of such heat.
Today’s air quality alerts across the Midwest are also part of a troubling trend. While we usually associate the impacts of wildfires with states in the western US, where wildfires have grown increasingly large and severe in recent decades, data show that over the last 15 years or so the Midwest and Eastern US have experienced increases in the frequency of “smoke days” like those they’re experiencing today.
An onslaught of seemingly unending extreme weather events threatens the US population, and there are still four months of Danger Season to go. We need action now to reduce the emissions causing climate change, as well as adaptation policies to protect people from the impacts we cannot avoid.
Danger Season is here and is clearly demonstrating that we are heading into uncharted climate disaster territory.
Climate Reality vs. Public Perception: Will Toxic Haze and the 2023 Danger Season Make a Difference?
The year is only half done and the United States has already been enveloped by acrid orange skies in the East, battered by winter rains and floods in California, seared by record winter temperatures in the South, soaked by a record 26-inch April deluge in Fort Lauderdale, and broiled by record spring heat in the Pacific Northwest, Texas, and Puerto Rico.
The onslaught has led to another round of media headlines and press releases from environmental and public health groups asking whether the nation is at a tipping point of urgency to fight climate change.
A Los Angeles Times headline for reader letters on the floods said, “California rains are a wake-up call for climate upheaval to come.” Many other media outlets and advocacy groups, from Al Jazeera to the American Lung Association, speculated as to whether the recent smoke plumes may also be such a “wake-up call.”
A Vox headline on the orange skies from Canadian wildfires said, “Wildfire smoke reminded people about climate change. How soon will they forget?” A Washington Post story carried the headline: “How the Canadian wildfire smoke could shift Americans’ views on climate.” A Philadelphia Inquirer column carried the headline, “America sleepwalks through a climate crisis. Will this smoke alarm wake us up?”
So far, no alarm bell has been loud enough to stop the sleepwalking. Hurricane Katrina in 2005, Superstorm Sandy in 2012, Hurricanes Harvey and Irma in 2017, and Hurricane Ida in 2021 were all accompanied by the same question. After the 2021 “heat dome” that saw Portland, Oregon, hit 116 degrees and Seattle soar to 106, a Los Angeles Times headline said: “Northwest heat wave swamped the vulnerable, was a harsh climate wake-up call.”
The usual and eventual response to such questions was summed up in an Associated Press story five years after Superstorm Sandy. The headline was “5 years after Superstorm Sandy, the lessons haven’t sunk in.” The story detailed how most plans for climate security in the New York City area remained unrealized.
A harsh new normal
Whether we wake up or not, a harsh climate is the new normal. To date in 2023, the United States has already suffered nine climate and weather disasters resulting in at least a billion dollars of damage each, according to the National Oceanic and Atmospheric Administration.
That fits the pattern of the last five years, which have seen an annual average of 18 billion-dollar weather and climate disasters, a six-fold increase, in adjusted dollars, over the three such events a year during the 1980s. Since 1980, hurricanes, severe storms, floods, and wildfires from billion-dollar events have cost the nation more than $2.5 trillion in damage and taken 16,000 lives. A quarter of the damages have come in just the last five years, prompting major home insurers to raise rates astronomically in states that see frequent severe weather and climate events, such as California, Florida, Louisiana, Arkansas, Texas, and Colorado. Allstate and State Farm have announced they will not issue any new homeowner policies in California.
This of course is not the entire picture as there are countless storms under $1 billion that still ravage communities and drain state budgets. “It is important to keep in mind that these estimates do not reflect the total cost of U.S. weather and climate disasters, only those associated with events more than $1 billion in damages,” NOAA says. “That means they are a conservative estimate of how much extreme weather costs the United States each year.”
According to one major global property database, nearly 15 million homes, or nearly 1 in every 10, were impacted in 2021 by natural disasters that are worsening with global warming, to the tune of $57 billion in property damage. Apparently those costs are not yet steep enough to spur action. Part of the reason is that people in the US are normalizing the new normal, simply adapting to a seemingly incremental warming of their environment rather than leaping to action against an existential threat.
A 2019 study in the Proceedings of the National Academy of Sciences found that temperatures initially considered remarkable are unremarkable about five years later. The normalization effect is so strong that the study concluded: “It may be unlikely that rising temperatures alone will be sufficient to produce widespread support for mitigation policies.”
The study’s lead author, Frances Moore of the University of California Davis, said in a press release, “We saw that extreme temperatures still make people miserable, but they stop talking about it. This is a true boiling-frog effect (referring to the false myth that a frog will not feel gradually warming temperatures until it boils to death). People seem to be getting used to changes they’d prefer to avoid. But just because they’re not talking about it doesn’t mean it’s not making them worse off.”
Disinformation dulls urgency
Climate change denial and skepticism are key features of the deep political divide in this nation, fueled by long-running and coordinated campaigns of disinformation, often funded by fossil fuel interests. Most recently, as many cities broke pollution records and New York City momentarily recorded the worst air quality in the world from the plumes of smoke flowing down from Canadian wildfires, climate and pollution deniers blew their own smoke at the public on right-wing media.
The infamous fossil fuel and tobacco industry shill Steve Milloy falsely claimed on FOX News—which itself glorifies oil and gas and lambasts environmental regulations—that the wildfire smoke posed “no health risk.” Not-a-scientist Milloy further pontificated, “This doesn’t kill anybody. This doesn’t make anybody cough. This is not a health event. This has got nothing to do with climate… This is not because of fossil fuels.”
Laughing FOX host Laura Ingraham then gave Milloy the floor to deny the science of particulate matter to her audience, which averages 2 million viewers. Milloy promptly said that concern about particulates was “crazy” and “invented” by the EPA. He said particulate matter is so “innocuous,” that any attempt to claim otherwise is “total junk science.”
A world of actual scientists can junk that lie. Fine particulate pollution, known as PM 2.5, kills between 4.2 million and 5.7 million people a year, according to a range of studies. More life years are lost around the globe from PM 2.5, according to the Energy Policy Institute at the University of Chicago, than from cigarette smoking or alcohol.
In the U.S., exposure to PM 2.5 prematurely kills at least 100,000 people a year. That is more than gun deaths and fatal car crashes combined. This is before considering a world with more wildfire smoke. Emergency room visits for asthma doubled in New York during the June plume, with most of the afflicted coming from Black, Latino, and high-poverty neighborhoods.
A 2021 study in Lancet Planetary Health found that worldwide, 33,500 people a year die from cardiovascular and respiratory complications due to the fine particulate matter of acute wildfire smoke. The study said its findings were so “robust” that policy makers should “manage vegetation and mitigate climate change as far as possible.”
The findings are an urgent warning as NOAA says climate change is “supercharging” drought conditions and lengthening wildfire seasons. A new study in PNAS found that the area burned by California’s summer wildfires has grown fivefold over the last half century and could increase by another 50 percent by 2050. Nearly all the increase is due to global warming, which is generally drying out the state. Additionally, fires that engulf communities carry even more risk in their smoke: smoke from California’s 2018 Camp Fire contained high levels of lead and other metals from burning buildings.
Lost in the haze: new poll highlights a stunning public disconnect on climate reality
The overall scientific consensus on global warming could not be more robust. A decade ago, a review of nearly 12,000 studies conducted between 1991 and 2011 found that 97.2 percent of them agreed that humans were causing global warming. A 2021 analysis of more than 88,000 studies published since 2012 found 99.9 percent agreement.
If only such agreement could escape the lab. The enduring gap in public awareness and understanding received a fresh exclamation point in a new poll this month by the Yale Program on Climate Change Communication. The survey found that only 58 percent of people in the US believe that “most” scientists agree on global warming, and only 20 percent know that more than 90 percent of climate scientists agree that human-caused climate change is happening.
“Public misunderstanding of the scientific consensus – which has been found in each of our surveys since 2008 – has significant consequences,” the survey said. Among those consequences are a decreased level of concern and support for climate action.
In line with the PNAS “boiling frog” study, climate harm remains a far-off concern for many people despite the six-fold increase in disasters over the last five years compared with the 1980s. Only 48 percent of respondents believe people in the US are being harmed “right now” by global warming. More than half of respondents (55 percent) still say that they have not yet personally experienced the effects of climate change.
Perhaps even more stunning were perceptions about the future. While about 70 percent of people in the poll think future generations, the world’s poor, or plant and animal species will be affected by global warming, only 47 percent think they will personally be affected.
Will this year’s Danger Season build support for action?
If the next six months of 2023 are anything like the first six, the ability of people to normalize global warming may be put to its most severe test yet. The nation is only two months into the six-month period, May through October, that the Union of Concerned Scientists calls “Danger Season” for the likelihood of climate change-amplified heat waves, severe storms, wildfires, and hurricanes.
Even if the remainder of this year is relatively calm, there is no long-term relief ahead. The World Meteorological Organization announced this spring that Earth is hurtling into its hottest five-year span yet. Petteri Taalas, the secretary-general of the WMO, said the planet was entering “uncharted territory.”
That calls for uncharacteristic urgency to turn down the heat. There actually are hopeful signs that such urgency is arising, despite the indifference sown by disinformation. The Yale survey found that two-thirds of people disagree that it is too late to do anything about global warming. A Pew poll last year found that 69 percent of respondents thought the US should take steps toward being carbon neutral by 2050 and should prioritize renewable energy over expanding fossil fuel exploration.
The call for action to stop global warming is growing louder in Latino and Black communities hit hardest by fossil fuel pollution and living disproportionately on urban “heat islands” that lack trees and parks.
Climate scientists are not sitting on their 99.9 percent consensus. They have stepped up efforts to expose fossil fuel companies for their deceit and assign responsibility to them for their damage to the planet.
A study this year in the journal Science exposed ExxonMobil for its public skepticism on climate science even as its own scientists accurately predicted what is happening now. A study led by the Union of Concerned Scientists and published last month in Environmental Research Letters found that the emissions traced to the world’s largest fossil fuel producers and cement companies have played a massive role in increasing the atmosphere’s drying power and, with it, wildfires. More than a third of the forest area burned in the western US and southwestern Canada since 1986—nearly 20 million acres—can be attributed to those emissions.
Also last month, Boston University held a symposium to strategize on fighting fossil fuel disinformation. Benjamin Sovacool, director of Boston University’s Institute for Global Sustainability, said, “Misinformation is pervasive, it’s at a very unique moment in our culture, and a post-truth society cannot survive.”
Nor can a post-truth planet.
The Climate Danger Season Is Here. What Could It Mean for Puerto Rico?
Summer is almost here, and according to NOAA, the federal oceanic and atmospheric research agency, it is expected to be hotter than normal. With summer also comes “Danger Season,” that is, the months between May and October when extreme weather events (floods, wildfires, storms and hurricanes, and extreme heat) have not only become more frequent and damaging because of climate change, but are also more likely to overlap or occur simultaneously. Given the disastrous hurricane season Puerto Rico endured in 2017 and the fingerprint of climate change on rising temperatures and on the destructive power of tropical storms in the Caribbean, what does Puerto Rico face during this Danger Season?
Hurricanes
First, let’s look at the hurricane season outlook issued by the Climate Prediction Center (part of the National Weather Service, NWS). Between 5 and 9 hurricanes are expected to form; up to four of them are expected to be major hurricanes.
NOAA. https://www.noaa.gov/sites/default/files/2023-05/IMAGE-Hurricane-Outlook-May-2023-SPANISH-Pie-052422-NOAA.PNG
There is one piece of good news in all this: after three years of the phenomenon known as La Niña (during which cyclonic activity in the Atlantic increases), El Niño is expected to develop, which could have the effect of reducing the formation of tropical storms. (This video, in English, briefly explains what El Niño is.)
Extreme heat
Since Danger Season began on May 1 and through June 21, the National Weather Service (NWS) has issued extreme heat alerts affecting 72 of Puerto Rico’s 78 municipalities on some 900 occasions. The temperature record for this time of year was broken a few days ago when the mercury jumped to 95°F (35°C). But the heat index reached 112°F (44°C) a few weeks ago and 125°F (52°C) in the north of the island last week.
National Climate Assessment. https://nca2018.globalchange.gov/chapter/20/#fig-20-14
And these high temperatures in Puerto Rico are a product of climate change. The National Climate Assessment results for Puerto Rico show that the number of days with temperatures above 90°F increased at a rate of 0.5 days per year between 1970 and 2015. In other words, on average, every two years between 1970 and 2015 added one more day with temperatures above 90°F (32°C). And while this year’s El Niño could curb the formation and intensity of tropical storms, high temperatures in the Atlantic Ocean and humidity in the Sahel could favor the strengthening of the systems that do form off the west coast of Africa. All of this will depend on how strong El Niño becomes this year.
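For readers who want the arithmetic behind that “one more day every two years” statement, here is a minimal sketch using only the National Climate Assessment rate quoted above; the variable names are illustrative.

```python
# Illustration of the National Climate Assessment trend cited above:
# days per year above 90 F in Puerto Rico grew by about 0.5 days per year, 1970-2015.

rate_per_year = 0.5        # additional 90 F-plus days gained each year
trend_years = 2015 - 1970  # length of the trend period

extra_days = rate_per_year * trend_years
print(f"About {extra_days:.1f} more days above 90F per year in 2015 than in 1970")
# About 22.5 more days above 90F per year in 2015 than in 1970
```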
An analysis by our colleagues at Climate Central estimates that the high temperatures recorded in San Juan on 256 of the 365 days between October 2021 and September 2022 (70 percent of the year!) were made three to five times more likely by climate change. And it is worth noting that San Juan ranks high on the study’s list of more than a thousand cities, since only 21 countries or territories have seen more than 70% of the year with temperatures strongly linked to climate change.
The privatization of power generation threatens the health of Puerto Ricans
Dr. Pablo Méndez Lázaro, a specialist in the health impacts of heat, recently told El Nuevo Día how these high temperatures are dangerous not only for human health, but also for Puerto Rico’s weakened electric power and public health infrastructure.
More than 40,000 customers in Puerto Rico lost power a few days ago during one of the multiple heat waves the island has experienced since May 1. By any measure, LUMA’s management as operator of the distribution grid has been disastrous, with at least seven increases in the price per kilowatt-hour paid by residential customers, an additional surcharge to subsidize the cost of fossil fuels, and repeated blackouts, all a consequence of failing to learn from the experience of Hurricane María.
LUMA has proven incapable of guaranteeing the electricity supply and of keeping its promise not to seek rate increases. The reliability of electric service in Puerto Rico, gauged, for example, by the duration of service interruptions, has declined since LUMA entered the sector. Now, under the recent terrible heat waves, the lives and health of Puerto Rico’s population are at risk.
On top of all this comes the privatization of the Puerto Rico Electric Power Authority (known by its Spanish initials, AEE, and in English as PREPA), the final nail in the coffin of public oversight of Puerto Rico’s energy reconstruction. Like LUMA, the private generation company Genera PR (a subsidiary of New Fortress Energy) will receive and administer FEMA reconstruction funds. Since it took effect, LUMA’s contract has raised multiple questions that have never been answered:
“In itself, the LUMA contract presents a number of questions that we have never seen addressed in the bankruptcy process. LUMA has payment priority, holds veto power, absorbs PREPA’s budget, and has requested numerous rate increases while providing deficient and more expensive electric service. Meanwhile, we know that the Fiscal Control Board is steering PREPA’s restructuring in a way that will impose further rate increases to pay the debt. The entry of another privatizer will definitely compound that problem.”
— Zoé C. Negrón Comas, attorney
Given LUMA’s track record of lax oversight and transparency in its operations, and of deficient and expensive service, putting generation in the hands of Genera PR does not bode well for Puerto Rico having affordable electric service that promotes economic recovery, eliminates the costly and harmful burning of fossil fuels for generation, and protects the population from climate impacts like the heat waves it is now suffering, as well as from hurricanes and other disasters.
The archipelago is already in a very delicate and dangerous situation for the health and well-being of its population in the face of heat waves and hurricanes, not to mention non-climate events like the 2020 earthquake swarm that destroyed many homes and damaged infrastructure. What would become of my Borinquen if it faced a hurricane, a heat wave, and extended blackouts one after another? The probability of those three threats coinciding has increased because of climate change.
Returning to the privatization move: it also puts the energy transition to renewable sources at risk, since there is no reason to think the new private operator will comply with the obligations of the 2019 Energy Public Policy Act (Act 17). The executive director of the Public-Private Partnerships Authority (P3A), which manages the relationship between PREPA and Genera PR, recently celebrated the privatization with great fanfare as a major step toward the renewable energy transition and compliance with energy public policy. But the facts suggest that Genera PR will move in the opposite direction from what the head of the P3A claims.
According to the Center for a New Economy’s analysis of the Genera PR contract, Genera PR projects that roughly half of the total savings in its operations will come from converting units to natural gas, which plainly conflicts with the energy policy law’s mandate that the island reach 100% renewable energy by 2050. Worryingly, Genera PR will face no penalty if it fails to meet the law’s renewable energy targets.
And while the bipartisan legislative process that produced Act 17 requires opening a competitive market to break PREPA’s monopoly, the Genera PR contract appears to fall outside that stipulation (as well as outside Act 120 of 2018, which set out to transform the electric system). That is certainly the view of UTIER (Unión de Empleados de la Industria Eléctrica y Riego), the PREPA workers’ union, which has just filed a lawsuit to void the Genera PR contract for violating the anti-monopoly provisions of both laws.
Indeed, as the union’s legal representative recently told the weekly Claridad, the privatization of PREPA is “another defeat for Puerto Rico” at the hands of interests that do not benefit the majority of Puerto Ricans, that seek to eliminate every measure to regulate markets and protect the people of Puerto Rico, and that continue to profit from the sale and burning of fossil fuels while the planet and the Caribbean keep cooking.
Those with the power to solve the problem are being obstinate
But Puerto Rico’s current government, along with the Fiscal Control Board, the Energy Bureau, LUMA, and the incoming private operator Genera PR (not to mention the Biden administration’s reluctance to intervene and enforce its own energy and climate policies in Puerto Rico), obstinately insists on adding to the energy, climate, economic, and health bill that the people of Puerto Rico pay for centralized, fossil-based generation and distribution. They leave that inflated bill on the table for present and future generations to pay. The increases in the cost of service seem to have no end; every rate increase LUMA requests is an increase the Board and the Energy Bureau approve, and that does not look likely to change once Genera PR takes over. And if the Debt Adjustment Plan the Board proposes is approved, rates would rise over the next 35 to 50 years to pay PREPA’s bondholders.
The people of Puerto Rico are paying, out of pocket and with their health, for the bad decisions and incompetence of all these private and government entities. Back in 2020, Puerto Rico’s government took on nearly a billion dollars of debt just to capitalize LUMA’s entry into Puerto Rico’s energy market. That alone was madness.
Where does it make sense for a private entity that lacks the capital to enter a market to be subsidized by its own customers? The answer: in a country that is not sovereign, where the people who must live (or die) with the consequences of decisions are not the ones who make them. The population of the colony is left without mechanisms to confront the impacts of climate change.
There are real solutions backed by science and social justice
All of these are formidable obstacles to putting a sustainable, resilient energy vision into motion. So I am left thinking there is no choice but to work with “lo que hay,” with what there is, as we say in Puerto Rico when the rules of the game cannot be changed: there will now be not one but two private companies, one operating (LUMA) and one about to operate (Genera PR) without oversight, that are granted every rate increase they request and that crudely ignore, with the blessing of every federal and state authority, the legal mandates for renewable energy.
These interests, detached from the well-being of Puerto Ricans, contrast sharply with the Queremos Sol coalition’s vision of rebuilding the power generation and distribution grid around the social and material needs of the people of Puerto Rico. In its report, the coalition has said, among other things, that PREPA must be returned to public governance, operation, and oversight grounded in energy efficiency and conservation, and must be rebuilt in a distributed (that is, decentralized) manner. That transformation must prioritize the population’s socioeconomic and energy needs, comply with Puerto Rico’s energy policy law, and drive the transition to generation from renewable sources.
It’s Danger Season and Workers Need Heat Safety Protections Now—UPS Knows It
What would be the largest single-employer strike in US history may soon be averted if UPS and the Teamsters union reach an agreement on adding life-saving cooling equipment to more than 90,000 fleet vehicles. UPS is the largest employer here in Louisville, Kentucky, so a strike would have serious implications for the metro region as well as for the entire US economy. UPS workers are asking for protection just as Danger Season has started and the summer is predicted to be hotter than usual. While the cooling equipment is one piece of the worker negotiations, it’s a crucial piece: worker heat protections save lives and are worth fighting like hell for.
My dad often works outdoors in extreme temperatures, so I feel for every single worker and family whose lives have been or could be devastated by preventable heat illness or fatalities.
Currently, workers have no guaranteed protections from heat
Access to water, rest, and shade during dangerously high temperatures is not legally guaranteed for most people who work outdoors or in hot indoor facilities. Some states have passed limited protections for workers, but we need federal measures so that US workers don’t have to choose between a paycheck and their lives.
For years, the Union of Concerned Scientists (UCS) has joined a national coalition of unions, workers, justice leaders, and others to urge Congress to pass the Asunción Valdivia Heat Illness and Fatality Prevention Act–legislation that would direct the Occupational Safety and Health Administration (OSHA) to create a heat standard to keep workers safe during excessive heat. In the absence of congressional action, President Joe Biden has launched initiatives aimed at keeping workers safe. However, many of these measures will take years to be implemented and workers need protections today.
Labor unions fight for protections and hold employers accountable
Workers are the experts when it comes to their lived experiences with heat, and they have been at the forefront of fighting for themselves and their people.
“No one knows the hazards of a job better than the people who face them on every shift,” said Steve Sallman, director of the United Steelworkers (USW) Health, Safety and Environment Department, in a blog he wrote for UCS.
As a worker in a tire and rubber plant, Steve felt that his union kept him safe and allowed him to return home to his family each day. On the role unions play, he said, “The USW helps members negotiate strong health and safety provisions, including those pertaining to heat stress, into their contracts. The union enforces those measures on a daily basis and ensures workers have the freedom to report hazardous conditions without fear of retaliation.”
The Coalition of Immokalee Workers achieved enforceable heat stress standards through the Fair Food Program, a unique partnership among farmers, farmworkers, and retail food companies. Other worker-led efforts, like those of WeCount! in Miami-Dade County, are also putting pressure on local governments to protect workers now. While employers may say they mean well and care about protecting workers, union contracts and legislation would hold them accountable; otherwise, such assurances are just nice sentiments.
Cooling is not a luxury—it is the difference between life and death
Heat is the leading cause of weather-related deaths. Research estimates that heat exposure causes some 170,000 work-related injuries and between 600 and 2,000 worker fatalities every year.
Last summer, a video went viral showing a UPS driver collapsing while delivering a package in Scottsdale, Arizona, as temperatures reached 110 degrees Fahrenheit. Luckily, that driver recovered. However, in June 2022, a 24-year-old UPS driver, Esteban Chavez Jr., known as Stevie to his loved ones, died after collapsing in his truck on a hot day with temperatures in the 90s.
The figure below shows that workers are more susceptible when the heat index is in the 90s, though workers can experience health risks at lower temperatures, especially when doing strenuous work or wearing protective equipment. No one is immune to extreme heat: the young, strong, and healthy fall victim to this silent killer every year.
Heat illness and deaths can be prevented with the right measures and safety precautions in place. And workers must have guaranteed access to those essential needs (water, rest, and shade or cooling) without fear of retaliation. Adding air conditioning or cooling fans to a truck may seem like a simple thing, but it could make the difference between life and death for the person delivering packages to your front door.
Extreme heat threatens workers’ lives and livelihoods
The costs of inaction are too high. A 2021 peer-reviewed UCS analysis found that, “If we don’t take action on climate change, extreme heat would cause tens of millions of outdoor workers in the US to risk losing a collective $55.4 billion in earnings each year by midcentury (2036-2065).”
UCS video: Too Hot to Work: The Effects of Extreme Heat on Outdoor Workers
Historically, Louisville, KY experienced just one day in the average year with a heat index above 105°F. But according to UCS research, without action to reduce heat-trapping emissions, by midcentury Louisville can expect 27 days per year—nearly a month’s worth—with a “feels like” temperature above 105°F. Remember how we said that workers are at risk when the heat index reaches 90°F? Without action to reduce heat-trapping emissions, Louisville could expect an average of 93 days per year—three months’ worth!—over 90 degrees. That could put up to $131.4 million of workers’ earnings per year on average at risk in Jefferson County by midcentury, amounting to an average of $1,778 per worker.
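As a rough cross-check of those Jefferson County figures, dividing the total earnings at risk by the per-worker average yields the implied number of affected workers. This is a minimal sketch; the derived worker count is our inference, not a number reported in the UCS analysis.

```python
# Implied scale of the Jefferson County earnings-at-risk figures cited above.
# The worker count is derived from the two reported dollar amounts.

total_at_risk = 131_400_000  # dollars of worker earnings at risk per year by midcentury
per_worker = 1_778           # average dollars at risk per worker

workers = total_at_risk / per_worker
print(f"Implied workers at risk: about {workers:,.0f}")
# Implied workers at risk: about 73,903
```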
For many workers, every dollar counts. Heat protections must ensure that workers are not docked pay for taking water or shade breaks, or for adjusting schedules to keep safe from the heat. No one should have to choose between their health or a paycheck.
UCS Heat Tool shows how often you will endure extreme heat in your area. https://www.ucsusa.org/resources/killer-heat-interactive-tool
UPS drivers are not alone; they are among many workers who must labor in extreme temperatures without mandated, commonsense protections. The good news is that we have solutions that can be adopted at the local and national levels to keep workers safe. As a proud member of our staff union UCS United and the Progressive Workers Union, I stand in solidarity with workers who are advocating for protections against extreme heat so they can get home safely at the end of a workday.
If this issue hasn’t been on your radar, I urge you to learn more and advocate for workers where you live. Sign up for the UCS newsletter and we’ll keep you posted on actions you can take to support workers and build climate resilience.
Ask a Scientist: Top Takeaways from the New EPA Carbon Pollution Rules
Last month, the Environmental Protection Agency (EPA) proposed new power plant carbon pollution standards that, if strengthened, would go a long way to help meet the Biden administration’s goal of slashing carbon emissions in half from 2005 levels by the end of this decade.
Given that the EPA has the responsibility and the obligation to address carbon pollution, these standards—the first to limit carbon emissions from existing coal- and gas-fired power plants—are long overdue. Currently operating fossil fuel plants generate 25 percent of US global warming pollution, second only to the transportation sector. The rule, which also applies to new gas plants, would avoid as much as 617 million metric tons of carbon dioxide through 2042, the EPA calculated, the equivalent of the annual emissions of 137 million passenger vehicles—about half of the cars in the country.
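As a quick sanity check of the EPA’s car equivalence, dividing the avoided tonnage by the number of vehicles recovers a per-vehicle figure close to the EPA’s widely cited estimate of roughly 4.6 metric tons of carbon dioxide per typical passenger car per year. This is a minimal sketch; the 4.6-ton comparison is our addition, not part of the rule.

```python
# Sanity check of the equivalence cited above: 617 million metric tons of CO2
# avoided through 2042 versus the annual emissions of 137 million passenger cars.

co2_avoided_tons = 617_000_000  # metric tons of CO2 avoided through 2042
vehicles = 137_000_000          # passenger vehicles in the EPA's equivalence

tons_per_vehicle = co2_avoided_tons / vehicles
print(f"Implied emissions per vehicle: {tons_per_vehicle:.1f} metric tons per year")
# Implied emissions per vehicle: 4.5 metric tons per year
# (close to the EPA's ~4.6-ton estimate for a typical passenger car)
```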
The proposed standards are the most recent in a series of administration policies to address the climate crisis, including new EPA requirements for vehicle tailpipe pollution and methane leaks from oil and gas facilities. Coupled with the Inflation Reduction Act and the bipartisan infrastructure law, the standards would accelerate a transition that is already underway.
No new coal plants have been built in the United States in the last decade and, according to an April report by the Institute for Energy Economics and Financial Analysis, US coal-fired power plants will likely burn half as much coal this year as they did 10 years ago. The report also found that 173 coal plants are scheduled to close by 2030—54 percent of the fleet—and another 55 by 2040.
Meanwhile, between 2012 and 2022, installed wind and solar power more than tripled, according to American Clean Power. Last year, wind generated 10.2 percent of US electricity and utility-scale solar generated 3.4 percent. Add hydropower’s contribution of 6.2 percent, and the 19.8 percent the three renewables generated in 2022 exceeded coal’s output—19.5 percent—for the first time. In 2012, coal generated 37 percent of US electricity.
Coal’s decline and the rapid growth of renewables helped cut the US electricity sector’s carbon emissions by 24 percent over the last decade. But given the urgency of the moment, the coal and gas plants still operating—as well as any new gas plants that come online—will have to dramatically reduce their carbon emissions. The EPA’s new proposed standards would require them to do just that.
To better understand what the EPA’s standards would accomplish, I turned to Julie McNamara, the Union of Concerned Scientists (UCS) Climate & Energy program’s deputy policy director.
EN: First, why are these new standards such a big deal? What would they accomplish?
JM: Quite simply, if we don’t clean up the power sector, we won’t get anywhere close to reducing the amount of carbon emissions needed to avoid the worst consequences of climate change. Not even close. That’s because the power sector is not only an enormous source of carbon pollution today, but also because a cleaned-up power sector is fundamental to decarbonizing a large swath of the rest of our economy by enabling a transition from fossil fuels to clean sources of electricity.
Although the power sector has gotten cleaner in recent years with renewables racing online and coal transitioning off, we’re still, unfortunately, way off the mark. The pieces are in place for a potentially rapid shift to renewables, but to achieve that shift at the pace and scale the science demands, we cannot rely solely on the growth of renewables and the policy leadership of a handful of states. We have to tackle carbon emissions from all coal- and gas-fired power plants head-on.
And that’s exactly what this rulemaking does. It sets a framework for states and utilities across the country to fully reckon with their power plants’ carbon emissions. Coupled with power plant pollution rulemakings addressing other pollutants, including mercury, coal ash, and nitrogen oxides—as well as incentives in new laws for clean energy technologies—the proposed standards have the potential to chart a truly climate-aligned transition to a fully clean electricity system that better protects public health and the environment.
EN: These standards—or at least something based on the same Clean Air Act provision—have been in the works for a long time. In 2015, the EPA issued the Clean Power Plan to reduce carbon pollution from power plants, which at the time were the largest source of heat-trapping emissions in the country. Lawsuits by the coal industry and Republican state attorneys general derailed the plan, and the AGs’ suit wound up in the US Supreme Court. What’s the back story?
JM: In fact, the story goes back further than that—much further. The Clean Power Plan was the culmination of years of legal and technical haggling over EPA authority and obligations, including the landmark Massachusetts v. EPA US Supreme Court decision in 2007 and the EPA’s follow-on Endangerment Finding and Cause or Contribute Finding in 2009, which ultimately set the stage for the agency to not only have the authority—but also the obligation—to regulate power plant carbon emissions.
But, as you noted, despite all the groundwork leading up to the 2015 rule, the Clean Power Plan was immediately challenged, and in 2016—before it went into effect—the Supreme Court stayed the rule. The Trump administration subsequently issued its own power plant carbon rule in place of the Clean Power Plan, but it was forcefully rejected by the DC Circuit Court of Appeals on the final day of the Trump administration due to its “fundamental misconstruction” of the Clean Air Act. Meanwhile, in 2021, the case against the Clean Power Plan re-emerged, resulting in the West Virginia v. EPA Supreme Court decision in 2022.
There are two key takeaways from West Virginia v. EPA’s 6-3 majority opinion. First, the court did not question the EPA’s authority to regulate power plant carbon emissions. Instead, it restricted the nature and extent of how the agency can exercise that authority, finding that the Clean Power Plan exceeded those boundaries. Second, the majority relied upon the so-called “major questions doctrine” to arrive at its conclusion, the first time the court explicitly referenced the concept. The doctrine asserts that for matters of vast economic or political significance, a federal agency must point to explicit congressional authorization.
In other words, if a directive isn’t clearly spelled out in a law, and it’s a “major” issue, then the court will throw out the federal agency rulemaking. The trouble is, Congress has long relied on passing purposefully flexible laws to enable agencies to act on the best available science. This decision undermines that approach. Furthermore, the court did not offer any specific metrics or frameworks for when, exactly, it could invoke the doctrine, suggesting that the court has manufactured a way to inject itself at will into the role of policymaker and technical expert, usurping the roles of the other two branches of government.
Looking ahead, while the Supreme Court has upheld the EPA’s authority to regulate heat-trapping emissions from power plants, fossil fuel interests will no doubt continue to mount legal challenges to stall agency progress. And as the West Virginia v. EPA decision made clear, those interests will continue to be welcomed by an increasingly sympathetic court.
EN: What specifically has the EPA proposed? How is this proposal different from the Clean Power Plan?
JM: In this new rulemaking, the EPA has proposed carbon pollution standards for existing coal-fired power plants, new gas-fired power plants, and a subset of existing gas-fired power plants. The standards range in magnitude of emission reductions required—and when those reductions must be achieved—depending on the type of plant as well as how frequently the facility runs and how long it intends to remain operating. For example, the EPA is proposing to require that any coal-fired power plant intending to operate past 2039 must limit its carbon emissions beginning in 2030 based on a facility using 90 percent carbon capture. For large, frequently used gas-fired power plants, the agency is proposing facility emission limits based on 90 percent carbon capture in 2035 or blending an increasing amount of hydrogen in place of gas over a slightly longer period.
The major difference between this proposal and the Clean Power Plan relates to how the EPA determined the level of achievable emissions reductions, which is what tripped up the Clean Power Plan in the courts. The Clean Power Plan factored in plants shifting electricity generation from higher-polluting sources to cleaner energy sources—from coal to renewables, for example—in how it set its emission reduction requirements. However, the court determined that basing the requirements on generation shifting required clear congressional authorization. As a result, these new standards only consider facility-specific pollution controls, such as installing carbon capture equipment or co-firing lower-carbon fuels with higher-carbon fuels, like hydrogen with fossil gas.
The proposed standards would be a major step forward in reducing carbon emissions from coal- and gas-fired power plants, but they don’t go far enough. The proposal covers what would be just 14 percent of existing gas-fired power plant capacity in 2035, has weak requirements for certain new gas-fired power plants, and includes compliance timeframes that stretch too far out into the future. Those provisions will need to be strengthened before the rule is finalized.
EN: How feasible will it be for power plants to capture their carbon emissions or, in the case of gas plants, switch to something like “green” hydrogen, which doesn’t emit carbon?
JM: An interesting, and critically important, aspect of Section 111 of the Clean Air Act—the source of EPA’s authority for this rulemaking—is that while the agency sets emission reduction requirements based on the “best system of emission reduction” that has been adequately demonstrated, states and utilities have wide latitude in the technologies and approaches they can employ to comply with those requirements.
That means there are really two different questions when you ask about “feasibility.” First, what adequately demonstrated technologies can the EPA cite that would result in the biggest emission reductions possible, considering cost and other factors? Second, when states act to comply with those emission standards, do the same technologies the agency relied on to derive the standards make sense, or are there other, superior technologies or approaches—including ones not available to EPA in its evaluation due to the constraints imposed on the agency by West Virginia v. EPA?
The onus is on the EPA to establish that the technologies it considers in setting its standards, such as carbon capture or hydrogen co-firing, are achievable. In the proposed rule, the agency included a recitation of anticipated costs, expected performance, and example projects on this front.
But the calculus changes when it comes to compliance. States may determine that even if a technology such as carbon capture has been adequately demonstrated for EPA’s purposes, it may still result in health, environmental, safety, and other risks that make it less preferable to an alternative compliance approach that would achieve the same level of emissions reductions, such as retiring a coal-fired power plant and replacing it with renewables instead.
The Union of Concerned Scientists maintains that when clean energy technologies go head-to-head with the various facility-specific options to reduce coal- and gas-fired power plants’ carbon emissions, such as hydrogen co-firing or carbon capture and sequestration, clean energy technologies will overwhelmingly win the day.
EN: Not only would the proposed standards cut carbon emissions, they also would significantly reduce toxic pollution, no?
JM: That’s right. Fossil fuel-fired power plants spew out all types of pollutants, not just carbon dioxide and methane, and those pollutants are extremely harmful to public health. The less we rely on these facilities, the less of this pollution there will be.
The EPA estimates that, over the next two decades, the proposed new standards for coal-fired power plants and new gas-fired power plants will result in significantly lower emissions of fine particulate matter, sulfur dioxide, and nitrogen oxides. When the resulting public health benefits are combined with climate benefits, the EPA projects this provision of the new standards would result in a total annual net benefit of $5.4 billion to $5.9 billion. When translating these pollution reductions into health outcomes, the agency projects that in 2030 alone these reductions could result in approximately 1,300 avoided premature deaths, more than 2,000 avoided cases of asthma onset, 38,000 avoided school absences, and 66,000 avoided lost work days.
Keep in mind, those numbers do not include reductions anticipated from standards for existing gas-fired power plants, which will drive these numbers even higher. Furthermore, if states and utilities end up complying with EPA’s standards using clean energy, we can reasonably anticipate that the public health benefits will be even greater given the full pivot away from fossil fuels.
EN: How will the proposed standards protect fenceline communities that bear the brunt of power plant pollution?
JM: This is a critical concern. The fact is, for decades, people living in the shadow of fossil fuel-fired power plants—as well as near the facilities that extract those fuels and the pipelines, trains, and trucks that transport them—have been disproportionately harmed by air and water pollution. Because these communities are often disadvantaged, these harms have steadily perpetuated environmental injustices.
The EPA’s proposed standards will play a major role in the fate and future of fossil fuel-fired facilities, so community concerns must be front and center. The proposed standards stipulate that states must engage with communities when planning for compliance, but there’s room for these requirements to be stronger, and for the EPA—as well as other federal agencies—to incorporate more community protections and safeguards related to new infrastructure deployment. States also must fully consider the costs and risks to these communities from compliance pathways that rely on continuing fossil fuel use—even if smokestack carbon emissions decline—compared to switching to clean resources instead.
EN: Finally, what can the general public do to make sure the EPA finalizes the best possible version of these standards and they are not—like the Clean Power Plan—blocked by the fossil fuel industry?
JM: Great question. As important as these proposed standards are, they are just a proposal, and they will require a groundswell of public support to enable the EPA to finalize the strongest possible version by, among other things, closing polluter loopholes and requiring much faster timelines. It will be critical for the general public to weigh in with comments supporting the new standards and calling for the EPA to strengthen them, which they can do here.
The standards also will need strong public support to stand up to an onslaught of fossil fuel industry attacks. The industry has already disseminated disinformation about the rules, and the same fossil fuel-friendly interests that challenged the Clean Power Plan are signaling their intention to take the EPA to court again over the new standards.
As mentioned before, the EPA has not only the authority, but also the obligation, to regulate power plant carbon pollution. And yet, decades in the making, there are still no carbon pollution standards in place. We need all the public power we can amass to overcome the fossil fuel industry’s delaying tactics and see these standards through.
A Year After the Deadly Pakistan Floods Began, Hard Lessons About Climate Loss and Damage
Last summer, from June through August, Pakistan endured extended, intense rainfall—exacerbated by climate change—that triggered devastating and unprecedented months-long flooding across the country. The floods killed more than 1,700 people, a third of them children; affected 33 million people and displaced 8 million; destroyed more than 2.2 million homes and 4.4 million acres of crops; and caused $40 billion in losses. The people of Pakistan are still reeling from the catastrophic effects of the 2022 floods and the country’s economy is in crisis, even as the 2023 monsoon season is off to a sobering start and the powerful Cyclone Biparjoy is set to make landfall shortly. A year on, here are some hard lessons on the compounding effects of climate and economic injustices, and a call for richer countries and international financial institutions to act.
The Indus River Valley in Pakistan before (left) and after (right) devastating flooding in August 2022. Credit: NASA

Five lessons from the floods in Pakistan

- Scientific studies show that climate change was an important contributing factor to these floods. A World Weather Attribution study from an international team of scientists found that ‘…the 5-day maximum rainfall over the provinces Sindh and Balochistan [which were two of the worst affected provinces] is now about 75% more intense than it would have been had the climate not warmed by 1.2°C.’ A more recent study found that the intense multi-day rainfall was associated with two atmospheric rivers drawing moisture from the Arabian Sea. It also found that ‘The southern provinces of Pakistan received more than 350% of average precipitation in July and August based on the 2001–2021 mean.’ These floods came soon after the nation had experienced an intense heatwave, made 30 times more likely by climate change. The science is also clear that climate change will continue to make these kinds of disasters more likely and more frequent, highlighting the urgent need to invest in resilience measures that will better protect people, livelihoods, housing, and infrastructure. Pakistan is responsible for just 0.3 percent of cumulative global carbon dioxide emissions and yet is bearing this terrible toll from climate change caused primarily by emissions from richer nations and fossil fuel companies.
- Climate-caused disasters have a disproportionately harsh impact on the poorest people. According to the World Bank, the floods have forced as many as 9.1 million more people in Pakistan into poverty, increasing the nation’s poverty rate by 4 percentage points above the 2018-19 rate of 21.9 percent. Many of the hardest-hit areas already had high rates of poverty and children suffering from malnutrition. In the wake of the floods, the needs are acute. A March 2023 report from UNICEF noted that ‘An estimated 20.6 million people, including 9.6 million children, need humanitarian assistance.’ Meanwhile, 1.8 million people were still living near stagnant floodwaters eight months after the floods. UNICEF also noted that ‘The prolonged lack of safe drinking water and toilets, along with the continued proximity of vulnerable families to bodies of stagnant water, are contributing to the widespread outbreaks of waterborne diseases such as cholera, diarrhoea, dengue, and malaria.’ The World Health Organization called the floods ‘the perfect storm for malaria,’ with the nation experiencing its worst outbreak in the last fifty years.
- Pakistan is trapped in a vicious debt spiral, pulled downward by a climate disaster and an economic crisis colliding with an unjust global financial system. Massive crop, livestock, and infrastructure losses caused by the floods, the global energy crisis triggered by the Ukraine war, a fiscal crisis, and political uncertainty have together contributed to record inflation levels in Pakistan. The country is on track for GDP growth of 0.29 percent in FY23, down from 6.49 percent in 2021. In the wake of the floods, the nation has also seen its credit rating repeatedly downgraded, making it harder for it to recover and pay off its loans. And the IMF is imposing draconian fiscal adjustment measures on the heavily indebted and struggling nation as a precondition for releasing prior bailout funding, pushing its economy further into crisis. Without the IMF funds, Pakistan may be forced into defaulting on its loans within months, compounding the misery for its people. The bottom line is that the international financial system must be reformed to better serve the needs of the poorest people, and nations hit by extreme weather events should have options for immediate debt relief and greater access to grants and concessional loans. The worsening and inequitable impacts of climate change make the need for these reforms even more urgent.
- The global community’s response has fallen well short of what’s needed. Pakistan’s flood response plan is still significantly underfunded. A January 2023 international conference aimed at raising funds for climate resilience in Pakistan resulted in over $10 billion in pledges—but about 90 percent of that is in the form of loans that will become available over the next three years and must be repaid. Pakistan’s plight is a terrible one and richer nations simply must step up to do more to help it on its long road to recovery, as must the World Bank and IMF. Meager and uneven post-disaster humanitarian aid and expensive loans are far from sufficient, and will leave millions of people suffering for years to come.
- Addressing climate Loss and Damage fairly must be a top priority at COP28. At COP27, Pakistan led the urgent and moral call for the establishment of a Loss and Damage Fund to address extreme climate disasters, which richer nations reluctantly agreed to. This year, at COP28, nations must reach agreement on the details needed to operationalize the L&D Fund so that it can be resourced soon thereafter, and climate-vulnerable countries can start to receive the funding they desperately need and deserve. The advance work done by the Transitional Committee on Loss and Damage is critical to securing progress ahead of time so that there is a successful outcome at COP28.
Unfortunately, the floods in Pakistan are just one of many climate-driven disasters that have already harmed people around the world—and the world will see many more as the climate crisis worsens. The global community must do more to sharply cut the heat-trapping emissions fueling these extreme events, and also invest in closing the adaptation gap, especially in frontline communities struggling with poverty and the repeated onslaught of disasters.
The Biden-Harris administration should champion reforms to international financial institutions that will go a long way toward making their lending more aligned with climate and sustainable development goals. This is a critical piece of a broader agenda that includes the U.S. delivering on its pledge of $11.4 billion per year in climate finance by 2024. Later this month, President Macron of France will be hosting world leaders at a Summit for a New Global Financing Pact, another opportunity for reforming the global financial system to deliver on the stated goals of ‘fighting inequality, climate change and protecting biodiversity.’
Richer nations like the United States, which are responsible for a majority of global heat-trapping emissions to date, have a unique responsibility to contribute their fair share to sharp cuts in emissions and climate finance. Fossil fuel companies, whose products are directly responsible for climate change, must also be held accountable and contribute to funding to address climate damages. And the international financial system must undergo long-overdue deep reforms to be fit for purpose in a world where climate change is driving more people into poverty and more countries into unsustainable levels of debt.
Worsening Risks of Climate Change Expose the Need for—and Hard Limits of—Property Insurance
Climate change is putting more people and property in harm’s way—and also exposing hard limits to the protection that property insurance can offer. Far too many people don’t have insurance against damage caused by flooding, wildfires, and intensifying storms, either because they are not aware of the risk they face, or because they cannot afford insurance. More people need insurance, but increasingly the climate crisis is making many places too risky to insure at reasonable rates. As the summer Danger Season gets underway, here are some thoughts about insurance in a warming world. I will unpack more as the season progresses.
Insurance is a vital tool, but it is not a panacea

When major disasters strike, the toll on people, homes, and infrastructure can be immense. Having access to insurance can be a huge help for communities struggling to recover and get back on their feet. Even ahead of disasters, insurance rates can help signal the level of risk in a particular place (all else equal, higher rates would signal greater exposure to risk), so people can make more informed decisions based on their risk tolerance. And insurance programs can also create incentives, through discounts on insurance rates, for taking precautionary actions—such as investments in floodproofing or fireproofing—to lower one’s risks.
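To make that rate-signaling and incentive logic concrete, here is a minimal sketch of a risk-based premium. The probability, claim size, and loading factor are made-up illustrative assumptions, not any insurer’s actual rating formula.

```python
# Toy risk-based premium: expected annual loss plus a loading for
# expenses and profit. All numbers are hypothetical.

def annual_premium(p_loss: float, avg_claim: float, loading: float = 0.35) -> float:
    """Expected annual loss times (1 + loading)."""
    return p_loss * avg_claim * (1 + loading)

# A home with a 1-in-50 annual chance of a $200,000 flood loss...
base = annual_premium(p_loss=1 / 50, avg_claim=200_000)

# ...versus the same home after floodproofing that halves the expected damage.
mitigated = annual_premium(p_loss=1 / 50, avg_claim=100_000)

print(f"base premium:        ${base:,.0f}")       # $5,400
print(f"after floodproofing: ${mitigated:,.0f}")  # $2,700
```

In this stylized world, the premium gap is exactly the price signal: it tells the homeowner both how risky the location is and roughly how much a protective investment is worth each year.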
In the U.S., most homeowner flood insurance is purchased through the taxpayer-backed National Flood Insurance Program (NFIP) administered by the Federal Emergency Management Agency (FEMA), although in the last decade the private flood insurance market has also been growing. Wildfire insurance is delivered mainly through private homeowner’s insurance. Many states also offer an option of last resort, so-called ‘FAIR’ plans: insurance pools offering bare-bones coverage at a lower cost for homeowners who may not otherwise be able to purchase insurance on the open market. Reinsurance and innovative approaches like catastrophe bonds and parametric insurance can also help expand the reach of insurance and speed payouts after extreme events.
Of course, this is how things should work theoretically. But there are longstanding gaps and failures in the insurance market—including the fact that many low-income and fixed-income homeowners struggle to pay for insurance and are often forced to go without; that subsidizing insurance without adequate risk disclosure can create perverse incentives that actually increase exposure to risk; and that split incentives between federal and state actors can also create challenges.
The increasingly untenable risks of climate change in many places have been highlighted by the recent news that State Farm and Allstate have stopped offering new homeowner’s insurance policies in the California market, primarily due to the growing costs of wildfires, and by the news of skyrocketing costs of flood insurance in Louisiana. Similarly, Hurricane Ian has exposed the precariousness of the Florida insurance market. Our nation’s history of racist and discriminatory housing policy—including mortgage redlining—as well as the current crisis of affordable, safe housing also mean that some of the places most exposed to climate risks are the places where low-income communities and communities of color live.
Globally, the situation is even more dire as most people in the Global South on the frontlines of climate change—such as the millions of people affected by the devastating floods in Pakistan last year—simply do not have any access to insurance.
Insurance is the ‘canary in the coal mine’ for climate change

The risks of climate change have long been recognized by the reinsurance industry—including Swiss Re and Munich Re—but U.S.-based insurance companies have been slower to acknowledge them publicly. State Farm, which is the largest provider of homeowners insurance in California, attributed its decision to stop offering new policies in California to “historic increases in construction costs outpacing inflation, rapidly growing catastrophe exposure, and a challenging reinsurance market.” Even for major global reinsurers, the rising spate of extreme events worldwide is severely testing profitability. A recent report from Moody’s notes that reinsurers are reacting by raising rates for primary insurers and even exiting markets. Florida and California stand out as among the most exposed insurance markets.
There are important specifics to keep in mind when examining public versus private insurance—including access, equity, affordability, taxpayer exposure, and options for investing in science-informed rate-setting—but all types of insurance are suffering from a similar challenge when it comes to climate change. Heated public debates on this topic fail to acknowledge that, with the fundamental underlying physical risks growing worse because of human-caused climate change, simply shifting from one form of insurance to another will not help address the root of the problem.
The conundrum we face is that, clearly, more people need insurance because the areas exposed to risks are growing rapidly, but at the same time insurance is also becoming more expensive and harder to get. Climate change is fueling hotter, drier conditions in the Western U.S. and, coupled with a history of poor forest and fire management and growing development in wildfire-prone areas, this is contributing to catastrophic fire seasons. The warming climate is also contributing to increased flooding in many places, through the increased risk of heavy rainfall events, intensifying storms, and sea level rise. These risks are worsened by the continued increase in development in flood zones. Local zoning laws that are out of step with climate risks, the desire for increasing development as a way to reap property taxes to fund local amenities, and continued subsidization of real estate markets in risky areas all contribute to the challenge too.
Insurance administrators, both public and private, have been relying on outdated models, driven by past data, to predict risks and set rates. With climate change, the past is no longer a good predictor of the future. Newer models that take climate projections into account are coming into use, and more are being developed. NOAA and the NSF recently announced a new research center to help meet insurance industry data needs. FEMA, too, has implemented a new initiative called Risk Rating 2.0, using private sector data and tools, which it says will help it “deliver rates that are actuarially sound, equitable, easier to understand and better reflect a property’s flood risk.” However, the resulting large rate hikes have brought huge protests, and FEMA is now being sued by 10 states and dozens of municipalities.
The latest science shows a systematic increase in many types of climate risks, and these risks can coexist and compound in the same place. Instead of being randomly spread across a wide distribution, risks are increasingly shifting toward the high end, skewing the distribution. That makes it harder to apply older actuarial tables to calculate and spread the risk across a population or geography, and to sustain a well-functioning insurance market. Growing climate-driven risks are also making insurance much more expensive and putting it out of reach for many people.
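A stylized simulation can illustrate the problem. The distributions and parameters below are invented for illustration, not drawn from any insurer’s data: when losses shift and skew toward the high end, a rate calibrated on historical experience falls short of the new expected loss and, especially, of the new tail.

```python
# Stylized illustration of rate-setting on a shifting, skewing loss
# distribution. Parameters are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical annual losses (in $1,000s) for a book of policies.
historical = rng.lognormal(mean=2.0, sigma=0.6, size=n)  # past climate
shifted = rng.lognormal(mean=2.3, sigma=0.9, size=n)     # warmer, fatter-tailed

premium = historical.mean()  # a rate set purely from past experience
print(f"premium from past data: {premium:.1f}")
print(f"expected loss today:    {shifted.mean():.1f}")
print(f"99th-percentile loss, past vs. today: "
      f"{np.percentile(historical, 99):.0f} vs. {np.percentile(shifted, 99):.0f}")
```

In this toy example the mean loss rises by roughly two-thirds, but the 99th-percentile loss (the kind of year that breaks an insurer) grows far faster, which is one reason reinsurers reprice or exit markets long before average losses look alarming.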
For true climate resilience, science and equity must go hand-in-hand

Going forward, using the latest science to assess and project risks is critical to help protect people and property. Mid- to long-term projections can help people make choices about where to live, how much insurance coverage they need, and if and when to think about moving to a safer place. We also need seasonal and real-time maps and alerts, which are crucial for getting ready for Danger Season with protective measures and emergency plans that can also help lower insured losses. Insurance is one tool in the toolbox where we can aim to incorporate science and help improve outcomes. But the time horizons most insurers consider are usually no more than 2-5 years out, so insurance certainly has its limits.
A more long-standing challenge that is equally important to address: insurance does not necessarily serve those who need it most. Low- and fixed-income families may not be able to afford insurance—or to invest in the protective measures it would take to lower their insurance rates—and they have the fewest resources to recover from disasters. Adding well-targeted affordability provisions to the NFIP, for example, is critical, especially as Risk Rating 2.0 causes rates to rise rapidly in some places. The National Academies of Sciences (NAS) has developed recommendations for this, FEMA has put forward an affordability framework and a package of legislative proposals for NFIP reauthorization that include affordability provisions, and Congress must now act to legislate them.
Of course, better risk modeling will only serve to highlight that, whether insurance markets reflect it or not, climate change is dangerously altering physical risks to people and property. So, we also need rapidly scaled-up and equitable investments in climate resilience, not just disaster recovery, that go well beyond what insurance can deliver. And we’ve got to do all we can to sharply cut heat-trapping emissions to limit runaway risks from climate change.
As the 2023 hurricane and wildfire seasons get underway, it’s important for people to check on their insurance coverage and look at the latest flood and wildfire risk maps for where they live. (Remember that fire-denuded landscapes are prone to mudslides and flooding.) And please also raise your voice to call on policymakers to support affordable, accessible, science-informed insurance as part of a broader suite of equitable climate resilience investments.
Species on the Move: How Climate Change Is Re-Making Ecosystems
Human-caused climate change is redistributing species across the globe, re-ordering ecological communities, and even driving genetic changes in some populations. We need to better understand these changes, and to adapt biodiversity conservation strategies to take them into consideration.
To address these issues, the third international Species on the Move conference convened in Bonita Springs, Florida, in May 2023. Key ideas discussed at the meeting included increasing connectivity between protected areas, the need for anticipatory legal and regulatory planning for biodiversity conservation, and reinstituting and protecting Indigenous land and wildlife management practices.
Opening the conference, the director of the Leverhulme Centre for Anthropocene Biodiversity at the University of York, Chris Thomas, noted that humans have been modifying ecosystems for millennia, and that every species that survives today, even in the most remote areas, exists in an ecosystem that has been altered by humans, including through changes in atmospheric and ocean chemistry and through climate change. In a world where urbanization, forestry, agriculture, fisheries, transportation, water and energy infrastructure, and other human impacts have degraded and continue to alter natural ecosystems, climate change is causing rapid and often hard-to-predict impacts on species dynamics and distribution.
Deforestation and forest fragmentation are making it harder for species to move in response to climate change. Photo: T. R. Shankar Raman

A global shift in species distributions is underway

Hundreds of studies during the last two decades have tracked climate-driven changes in species distribution, including for trees, mammals, birds, amphibians, marine and freshwater fish, and insects. For example, many duck species in the Midwest have been moving further north in the winter and much more frequently over-wintering there rather than migrating south in the fall. The National Audubon Society predicts massive reductions in future climate suitability of habitat for hundreds of North American birds as the climate continues to warm.
In the Arctic, where the climate is warming exceptionally rapidly, permafrost thaw is causing slumping and subsidence, creating thermokarst lakes and rapidly changing the profile of peatlands that took thousands of years to develop. Shrubby vegetation is spreading into the tundra, allowing animals including moose, beaver, and snowshoe hares to live there for the first time.
Beavers are fast colonizing and changing habitat in lowland northwestern Alaska and are expected to spread throughout the area north of the Brooks Range within decades. Where beavers create ponds, permafrost thaw can accelerate locally and create new habitat for species shifting their ranges northward. The Arctic Beaver Observation Network, a group of scientists, Indigenous groups, and land managers, has been set up to monitor beaver expansion and better understand potential impacts on habitat, fish, and the subsistence lifestyles and health of Alaska Natives.
Beaver pond, Bering Land Bridge National Preserve, Alaska. Photo: National Park Service.

A study that analyzed over 30,000 range shifts for more than 12,000 terrestrial and marine species found that marine species are outpacing terrestrial species in the race to keep up with global warming. This is most likely partly related to physical obstacles, such as roads, cities, and agriculture, impeding terrestrial species, as well as to terrestrial species’ generally greater tolerance for wide temperature ranges.
A race to keep up with climate change

On land, microclimate (the very specific suite of conditions, such as temperature and humidity, that each individual organism experiences) can also play a major part in how quickly a species shifts. The huge variation in local microclimates probably also partly explains slower-than-expected shifts in terrestrial species’ distributions. Most species distribution modeling does not include fine-scale microclimate data. Efforts are underway to change this, for instance with the SoilTemp database, which has so far compiled 50,000 microclimate time series from across the globe.
Some marine species also experience boundaries to movement, including ocean currents, thermoclines, and shipping lanes, but they are generally more able to track changes in sea temperatures.
In the oceans, prolonged periods of unusually high sea surface temperatures are having a major impact. The annual number of marine heatwave (MHW) days has increased by 54% over the last century, and eight of the ten worst MHWs have occurred since 2010. MHWs are causing mass die-offs of foundation species such as corals, kelp, and seagrass, as well as fish, seabirds, and marine mammals. They’re also associated with the redistribution of commercially important fish species, causing severe economic losses to fisheries in some cases.
While research is still scarce on how marine species are changing the depths at which they live in response to warming water temperatures, many changes have been recorded on land, where plants and animals are being pushed up-slope as the climate changes. For example, in Peru’s Cerro de Pantiacolla range, at least 5 mountain-top bird species have gone locally extinct over a 30-year period. Benjamin Freeman, a biologist at Georgia Tech, coined the phrase “escalator to extinction” to describe the threat to mountaintop species which are adapted to cooler conditions and have nowhere to go as the climate becomes too warm.
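One way to quantify this race is “climate velocity”: the local warming rate divided by the local spatial temperature gradient gives the speed at which an organism must move to stay in the same thermal conditions. The sketch below uses invented but plausible numbers, not values from any particular study.

```python
# Climate velocity: how fast a temperature band moves across the landscape.
# Illustrative numbers only; real analyses use gridded climate data.

def climate_velocity_km_per_yr(warming_c_per_yr: float,
                               gradient_c_per_km: float) -> float:
    """Speed a species must shift to track its current temperature."""
    return warming_c_per_yr / gradient_c_per_km

warming = 0.03  # hypothetical local warming of 0.03 deg C per year

flat_lowland = climate_velocity_km_per_yr(warming, gradient_c_per_km=0.005)
mountain_slope = climate_velocity_km_per_yr(warming, gradient_c_per_km=5.0)

print(f"flat lowland:   ~{flat_lowland:.0f} km/yr toward the poles")  # ~6 km/yr
print(f"mountain slope: ~{mountain_slope * 1000:.0f} m/yr upslope")   # ~6 m/yr
```

Steep slopes mean a species can track its climate with short upslope moves, which is why mountains act as refuges right up until the summit, where the escalator ends.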
Protected areas and connectivity

Because natural habitats worldwide continue to be destroyed and degraded by urban and agricultural development and unsustainable resource extraction, increasing the amount of wild lands in protected areas is vital for maintaining biodiversity. But protected areas are themselves under threat from rapid climate change, and conservation planners are urgently working to understand and ameliorate the risks.
Protea flowering in Table Mountain National Park, South Africa. Photo: UNESCO/David G. F. Smith.

Scientists in Table Mountain National Park in South Africa, for example, have focused on identifying the ecological traits that make species particularly vulnerable to climate change, and then developing management strategies for priority species. This approach has found that South Africa’s iconic protea flowers are among the species most at risk, and that 85% of the park’s reptile species and 67% of its amphibians are highly vulnerable to climate change.
In Florida’s Everglades National Park, more than a century of drainage and land development have shrunk and degraded the wetlands, disrupting freshwater flows and destroying important habitat. Efforts to restore natural flows of freshwater into the Florida Everglades are helping to slow the incursion of mangroves into this important freshwater ecosystem as sea levels rise. National Park Service scientist Erik Stabenau calls this “fighting water with water”.
According to Venetia Briggs-Gonzalez, a wildlife biologist at the University of Florida, coastal development, increased salinity, and damage to coastal nesting habitat from sea level rise and storms have already shifted the center of gravity for threatened American crocodiles from the Atlantic coast to the Gulf coast of Florida.
Crocodiles are adaptable and resilient, but they need both freshwater and sandy coastal nesting sites, two resources that are becoming increasingly scarce. Sea level rise and the consequent loss of scarce habitats are also an existential threat to several endangered species in Everglades National Park and the Florida Keys that have no new habitat to move into, including the Cape Sable seaside sparrow, the Key deer, and the silver rice rat.
Protected areas tend to contain higher-quality habitat and greater biodiversity than areas outside, but they will not necessarily act as stepping-stones to allow species to move in response to climate change. If there is not enough connectivity between protected areas, they may no longer protect the species they were designed to harbor.
A study of almost 30,000 protected areas showed that 58% would experience a connectivity failure with 2°C of warming. Serengeti National Park in Tanzania, Sagarmatha National Park in Nepal, and the U.S.’s Cloud Peak Wilderness would all experience connectivity failures. Other research shows that deforestation and fragmentation have left 62% of tropical forests without enough connectivity to facilitate species shifts.
American crocodile, Florida Everglades. Photo: National Park Service.

The importance of Indigenous knowledge and values

Conservation managers and scientists often fail to take account of the cultural importance of nature and the fact that for many people, including most Indigenous peoples, the natural world and human culture are inextricably linked. Individual species often play a critically important part in traditions, religions, sacred and seasonal practices, and foodways.
One review identified 385 culturally important wild species from across the globe. Seventy-eight percent of these species were important for Indigenous or ethnic groups. Indigenous peoples manage or have tenure over 40% of the world’s intact terrestrial ecological landscapes, about a quarter of the globe, but their land rights are constantly under threat or being eroded, and their traditional cultural and governance practices have historically been marginalized or actively repressed. Lands that are actively managed by Indigenous people often have better conservation results than traditional protected areas.
First Nations in Canada, including the Kitasoo/Xai’xais Nation in British Columbia, are working in partnership with scientists to co-develop approaches to marine and coastal conservation that incorporate, or are led by, Indigenous knowledge, priorities, and governance strategies.
Indigenous knowledge holders have long, place-based historical contexts, a deep understanding of natural history, especially in relation to subsistence harvesting and spiritual practices, and fine-scale knowledge of local climate and seasonal variability.
Diverse Indigenous peoples have managed Pacific salmon for thousands of years, developing complex socio-cultural systems that include traditional laws, harvesting practices, and spiritual beliefs. European settler-colonists deliberately disrupted these systems in the territories now known as the United States and Canada. The ongoing effects of these histories of colonization now interface with modern climate change.
Today, there are efforts to revitalize traditional fisheries management practices to strengthen food system sustainability and increase resilience to environmental changes, including climate change. The Kitasoo/Xai’xais Nation embodies a conservation ethic in everyday life and worldview, reflected in tribal laws and the responsibilities of hereditary chiefs. In 2022, the Nation declared a first-of-its-kind Indigenous Marine Protected Area (MPA) centered on Kitasu Bay. The new MPA will be managed by the Nation, and Indigenous guardians will be recognized by the federal government as having park ranger authority.
What are the paths forward?

With species on the move and ecosystems being re-shaped, we need to start looking more critically at adaptation and resilience options. When species do shift distribution, they do so at different speeds and often in different directions, so species assemblages are changing. Greater connectivity between terrestrial protected areas and other intact ecological systems is vital to facilitating species movements.
Assisted migration or species translocations may become necessary in some situations, but what are the ethical and regulatory considerations of doing this, and what might the unintended consequences be? Are there mechanical adaptive strategies that can be adopted, for example manipulating microclimate by providing artificial shade and shelter for birds at risk of heat stress in places such as the Mojave Desert, or as is already being tested by WWF and South African National Parks in dry karoo habitats?
Kent Redford, in his book Strange Natures, has addressed the fast-developing field of synthetic biology and its potential to provide conservation solutions in a changing climate. Projects are already underway to use genetic engineering to eradicate rats on important seabird islands, prevent the up-slope spread of avian malaria to protect mountain birds in Hawaii, and increase genetic diversity in populations of endangered species, such as black-footed ferrets. These approaches come with significant legal and ethical questions that have not yet been fully examined.
The ‘i’iwi is threatened by avian malaria in Hawai’i. Photo: National Park Service.

At the conference, University of California, Los Angeles (UCLA) law professor Robin Kundis Craig called for anticipatory adaptation policy planning. For example, as threatened or endangered species move out of current protected areas, it will be important to put protective legislation or regulations in place in the areas they will likely move to, which sometimes will be in different countries.
To accommodate and facilitate climate-driven species shifts on land and at sea we will need to address fisheries regulations, designation of shipping lanes, aquaculture and invasive species controls, species translocation protocols, and many other regulatory and legal regimes.
We have already entered an ecological world that we haven’t known before. Human influence on ecosystems has been ongoing for millennia, but the changes over the last 500 years stemming from colonialism and empire-building, industrialization, agricultural expansion, species translocations and introductions, natural resource exploitation, infrastructure development, armed conflict, and tourism, have been completely transformative. Now we are dealing with rapid climate change too.
Species are moving and reorganizing across the globe’s disappearing and highly fragmented ecosystems at a time when the socio-ecological systems and the Indigenous and traditional knowledge and management practices that have underpinned ecological stability for millennia are endangered, under extreme pressure, or degrading. With species actively and unpredictably on the move, we need to anticipate and plan for change in order to conserve biodiversity, ecosystem services, and eco-cultural systems.
California Agriculture Could Use an Ancient History Lesson
When I was an agricultural engineering student, I took a class called History of Agriculture.
I loved that class, in part, because I love agriculture, but also because I love ancient history. I used to study ancient history just for fun, and when I had the opportunity to write a paper for that class, I decided to write about the origins of agriculture and civilization in Mesopotamia and Egypt.
Mesopotamia is one of the cradles of human civilization. Agriculture started there about 10,000 years ago, and about 5,500 years ago they invented the cuneiform writing system. The first documents ever written were about agricultural production!
Egypt is another fascinating civilization that started agriculture around the same time as Mesopotamia. Egypt began flourishing after the unification of the Upper and Lower Egypt with the pharaoh Menes about 5,000 years ago. They created their own written language, the hieroglyphs, and developed their agriculture well enough to feed a population of millions at their peak.
While ancient cultures were substantially different from our current reality and there is much more to say than what I write here, learning about the ancient Mesopotamian and Egyptian civilizations can give us some clarity to better understand California today.
Only if we know and understand the past can we hope to avoid repeating our ancestors’ mistakes and learn from their achievements.
This is a tale about the natural resilience of the land.
Map of the location of Ancient Egypt and Mesopotamia, indicating the main rivers: the Nile River in Egypt, and the Euphrates and Tigris Rivers in Mesopotamia.

A fertile land can be a fertile civilization

The swath of land between the Euphrates and Tigris Rivers in Mesopotamia was one of the most fertile lands known to humans, as was the riparian land of the Nile River in Ancient Egypt. Both regions had to grow wheat to sustain their armies and their populations of about 1.5 million citizens (around the second millennium BCE). Their rivers provided the much-needed water. Neither region received enough precipitation to irrigate its croplands, so both societies had to rely on surface water from the rivers for agriculture.
In that time, irrigation depended on the labor of enslaved people using a tool called a shaduf, which moved small amounts of water to higher elevations. This required massive amounts of labor, and the crops received only the water they needed to grow. As a result, all the water applied was taken up by the plants, and the natural salts carried by the water were left in the soil near the roots.
A shaduf. Source: “Toward the sunrise, being sketches of travel in Europe and the East, to which is added a Memorial sketch of the Rev. William Morley Punshon” (1883), Flickr: https://www.flickr.com/photos/internetarchivebookimages/14596206898

But there was a major difference between the two regions: the Nile had regular, predictable summer floods that covered large floodplains and washed the accumulated irrigation salts away from the croplands. Those floods “cleaned” the soil, preventing salt accumulation in Egypt.
The water of the Euphrates and Tigris Rivers was of high quality. However, flooding in Mesopotamia was unpredictable and destructive, so it did not help flush salts out of the soils, and water evaporating from the network of irrigation canals concentrated the natural salts even further. After decades of the same minimum-water irrigation techniques, combined with winds that carried saline dust over the area, the salts left behind by irrigation made the soils arid and unfit for agriculture.
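A back-of-the-envelope salt balance shows why the difference between flushing and no flushing is decisive. The water depth, salinity, and leaching fraction below are illustrative assumptions, not reconstructions of ancient field conditions.

```python
# Simple root-zone salt balance: irrigation water carries dissolved salts,
# and whatever is not leached below the roots stays in the soil.
# All numbers are illustrative assumptions.

water_applied_m = 1.0     # meters of irrigation water per year
salinity_g_per_m3 = 300   # grams of salt per cubic meter of river water

salt_in = water_applied_m * salinity_g_per_m3  # grams added per m2 per year

for leaching_fraction, label in ((0.0, "no flushing (Mesopotamia-like)"),
                                 (0.15, "annual flood flushing (Nile-like)")):
    stored = 0.0
    for year in range(100):
        stored += salt_in
        stored *= (1 - leaching_fraction)  # share washed below the root zone
    print(f"{label}: ~{stored / 1000:.1f} kg of salt per m2 after 100 years")
```

With no flushing, the root zone gains salt every single year without limit (about 30 kg per square meter after a century in this sketch); even a modest leaching fraction caps the accumulation at a steady state the soil can tolerate.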
Unhealthy soil can be the end of a civilization

Not being able to grow grain because of unhealthy soils translated into not being able to feed armies. This was a significant reason why large Mesopotamian civilizations fell, over and over again, to the relatively small armies of mountain tribes. Those smaller tribes would conquer the region and move the capital to another place with salt-free soils for agriculture. Then the tribes became a new civilization, used the same irrigation techniques, and the cycle started again.
This cycle repeated itself in Mesopotamia across multiple civilizations over millennia: Sumerians, Akkadians, Hittites, Assyrians, Babylonians…
Meanwhile in Egypt, there was a single civilization for three millennia: the Egyptians. Dynasties of pharaohs changed, but every new ruler either was or became an Egyptian, preserving their traditions, religion, and ways of life.
That was the power of the Nile: keeping the soils fertile by using the natural resilience of the land. Even with irrigation, the Nile would flood and wash away the salts and the negative anthropogenic effects of agriculture.
Preserving the natural resilience of the land made Egypt a very stable civilization for 3,000 years, from its inception until the Europeans conquered it in the 4th century BCE.
Sound familiar?

When I consider California, unfortunately, it reminds me more of Mesopotamia than of Ancient Egypt. California has the “mightiest” agriculture in the world, but the cost is too steep.
The arrival of the Europeans, especially during the Gold Rush, started the destruction of the natural resilience of California.
Unsustainable agriculture in the San Joaquin Valley, which is the most profitable agricultural region in the United States by far, has caused the destruction of 95% of its original wetlands, making it an arid land rather than the once-humid region it was. That destruction also ended most of the natural resilience of the San Joaquin Valley to cope with climate extremes, including the unpredictable and destructive floods we are seeing this year.
The intense use of pesticides in the San Joaquin Valley has contributed to air pollution so much that the region is among the top three places with the worst air quality in the United States.
Excessive groundwater mining by faceless corporations has dried thousands of domestic and community wells. In addition, the soil sinks about one inch per month in some places here. That aquifer storage cannot be recovered.
Excessive fertilization has polluted aquifers with nitrates. Toxic chemicals from pesticides are in the soil polluting community aquifers, and they are almost impossible to clean. That pollution combined with the increased soil salinity created by drip irrigation with low-quality groundwater is transforming the once fertile soils of the San Joaquin Valley into a desert.
On the left, Central Valley sub-basins are classified by their nitrate contamination; Priority 1 areas are the most contaminated. On the right, salt concentrations in water. According to the Salt Control Program of the Central Valley Salinity Alternatives for Long-Term Sustainability (CV-SALTS), “Salt accumulations have resulted in approximately 250,000 acres being taken out of production and 1.5 million acres have been declared salinity impaired in the Valley. If not addressed, the future economic impacts of salts on the Valley could exceed $3 billion per year.” Source: Central Valley Salinity Alternatives for Long-Term Sustainability (CV-SALTS).

All this contributes to water insecurity and serious health threats for millions of Californians.
But maybe our main concern should be future food and nutrition security. We are allowing our groundwater (which is our water savings account) to be depleted to grow cash crops for wealthy corporations that use agriculture as a source of quick money for investors, at the cost of California’s future food and nutrition security. It is sadly ironic that the wealthiest agricultural region in the United States already has one of the highest rates of food insecurity among farmworkers. And it is not because of a lack of food, but because of a lack of socioeconomic justice.
A call for action

We need to find the balance between keeping a reliable production of healthy and diverse food and preserving California’s resilience so that we can keep growing food in the future. History has shown us how the most powerful civilizations fell when bad agricultural practices and inadequate land use changes destroyed the natural resilience that allowed them to feed their populations. But now we have the knowledge to correct those mistakes of the past.
I feel for the farmworker communities and farmers who will have to retire their land from production as California faces the raw reality of its water supply. The question is, since the water that exists in California belongs to all Californians, how do we want it to be distributed? Do we want to give our water to corporations that use it to destroy our state? Or do we want to give it to local farmers who contribute to circular economies and care for their communities?
We must repurpose about one million acres of cropland to make agriculture’s water use sustainable if we want California to have agriculture in the future. How we repurpose that large area (larger than the state of Rhode Island) will determine the future of California and our agriculture.
If we transition cropland used for conventional agriculture into land uses with positive side effects, the change will have an outstanding beneficial ripple effect in the health, economy, and environment of millions of Californians, as well as for small, medium, and large farmers. But if we do it wrong, allowing polluters and opportunistic corporations to take advantage of our water and our agriculture for their private gain, we may destroy California’s Central Valley.
We need healthy food and a healthy environment. Why don’t we distribute water to those who are making California a better, more sustainable place?
We will need to choose, and we should choose wisely.
Si hacemos una transición de tierras de cultivo de agricultura convencional a usos de la tierra con efectos secundarios positivos, el cambio tendrá un efecto beneficioso sobresaliente en la salud, la economía y el medio ambiente de millones de californianos, así como para los agricultores pequeños, medianos y grandes. Pero si lo hacemos mal, permitiendo que los que más contaminan y las corporaciones oportunistas se aprovechen de nuestra agua y nuestra agricultura para su beneficio privado, podemos destruir el Valle Central de California.
Necesitamos alimentos sanos y un medio ambiente sano. ¿Por qué no distribuimos nuestra agua a quienes están haciendo de California un lugar mejor y más sostenible? Tendremos que elegir, y debemos elegir sabiamente.
OMB Proposes Science-Based Guidance for Modernizing Regulatory Review
In April, President Biden issued an Executive Order on Modernizing Regulatory Review with some critical steps to help update and improve guidance for agency rulemaking processes. As part of this action, the Office of Management and Budget (OMB) has proposed updates to Circular A-4, a bedrock guidance document, which will greatly improve how federal agencies account for the benefits and costs of their actions and therefore better serve the public interest.
What is Circular A-4?Dating back to the 1980s, cost-benefit analysis has been an integral part of how federal agencies evaluate the impacts of different regulatory options and inform decisions on which ones to move forward based on maximizing net benefits (unless there is a statutory requirement to use a different approach). OMB’s Circular A-4 is a critical guidance document for these regulatory cost-benefit analyses, and it has not been updated in twenty years. This document, which remains obscure to most of the public, is nevertheless crucial to how agencies evaluate the costs and benefits of decisions that affect all our lives. The Office of Information and Regulatory Affairs (OIRA) is the specific arm of OMB that is tasked with providing guidance and reviewing all agency regulations, including the cost-benefit analyses, to make sure they meet statutory and legal requirements.
Undoubtedly, cost-benefit analyses have shortcomings, including their inability to fully account for benefits and costs that are hard to monetize but nevertheless significant. Decisions about which benefits and costs get counted and the underlying assumptions used to estimate those costs and benefits can significantly affect the outcome of these analyses–and have important equity and justice implications. And, of course, on issues where public health is the overriding concern (e.g. for regulating toxic chemicals), health-based standards are the legal requirement and economic considerations must be secondary. Nevertheless, there are many applications of cost-benefit analyses where sound principles and guidance can help inform better decision-making to protect people and the environment.
OIRA’s proposal to update Circular A-4 makes significant strides in improving the methodology and data used to guide the evaluation of costs and benefits and ultimately the choices made by agencies. The proposed updates are especially critical to help ensure that environmental and public health priorities are better and more equitably accounted for in our policymaking processes, in line with the latest science and economics. There are also some areas for improvement that could make this guidance better when it is finalized.
OIRA has also proposed accompanying updates to Circular A-94, which provides general guidance on the use of cost-benefit analysis for certain categories of federal actions that are not regulatory in nature, including guidance on discount rates, treatment of uncertainty and inflation, and measurement of the economic incidence on households and businesses. This, too, has not been updated in twenty years.
OMB is taking comment on the Circular A-4 update and on the Circular A-94 update through June 6, 2023. Please consider adding your voice in support of these important changes.
Four significant improvements proposedThe new Circular A-4 guidance is a significant revamp and expansion of the previous guidance and includes several important improvements. In a preamble to the guidance, OMB lays out its rationale for these changes. Below are four major areas of improvement in the proposal:
- Changes to the discount rate, which will help ensure that agencies use a more appropriate rate and take better account of intergenerational equity (see the first sketch after this list). As the guidance notes, when benefits or costs occur in different time periods, they cannot simply be added together; an appropriate discount rate must be applied to translate future dollar values into their value today. Generally, impacts that occur closer in time to today are understood to have a higher value for people. But the choice of discount rate matters: the higher the rate, the lower the value of costs and benefits that occur far in the future relative to those that occur today. This can materially affect choices we make now that have impacts far into the future. OMB's current guidance is to apply a 3% to 7% discount rate, depending on whether society's perspective or private entities' perspectives are being considered in a particular circumstance. The proposed update lowers the discount rate to reflect current economic realities, the latest economic thinking, and the need to address intergenerational equity for impacts that cut across longer time periods. As it notes, the 'real (inflation-adjusted) rate of return on long-term U.S. government debt provides a fair approximation of the social rate of time preference.' Based on Treasury returns over the most recent 10-year period, which are much lower than for the periods used in the old guidance, the proposal pegs the social rate of time preference at 1.7%. The update also proposes to eliminate the capital discount rate (currently estimated at 7%) and replace it with the shadow price of capital, reflecting the latest economics. Finally, the proposal recognizes that special ethical considerations arise when costs and benefits occur across generations; to address intergenerational equity, it is taking comment on options including the use of a declining discount rate.
- Methodologies for conducting distributional analyses that will help ensure a more equitable assessment of costs and benefits to different populations (see the second sketch after this list). The old Circular A-4 guidance gives very short shrift to distributional analyses even though it recognizes that the costs and benefits of regulatory actions often are not borne equally by different segments of the population. The reality is that, to date, most agency rulemakings have not included such analyses. The proposed guidance recognizes the importance of breaking down impacts by group based on income, which is often the most tractable pathway, but it also notes that other approaches might be more appropriate in certain circumstances: "Other economic and demographic categories such as those based on race and ethnicity, sex, gender, geography, wealth, disability, sexual orientation, religion, national origin, age or birth cohort, family composition, or veteran status—among others—may be relevant to a particular regulation." Where possible and appropriate, an analysis of incidence could usefully be extended to include some of these categories. In addition, there is guidance on how to set baselines for distributional analysis and how to conduct a weighted cost-benefit analysis reflecting the marginal utility of income for populations at different income levels (recognizing that an extra dollar is more valuable to a person with lower income than to a richer person). This more substantive guidance gives agencies a much more solid basis for undertaking these kinds of distributional analyses and integrating them into regulatory cost-benefit analyses, although it stops short of requiring them.
- Much more thorough consideration of transboundary effects of regulations, which is a huge improvement and is especially critical in the context of global challenges like climate change. The updated proposal goes a long way toward providing greater context and justification for regulatory cost-benefit analyses to consider the full geographic scope of the impact of regulations beyond the boundaries of the U.S. It recognizes that U.S. citizens and residents can be affected by regulatory impacts beyond U.S. geographic borders, and that the U.S. might have strategic interests in recognizing transboundary effects, especially when seeking reciprocal action from other nations and in the context of regulations addressing global commons issues.
- Better treatment of uncertainty in evaluating the benefits, costs, and incidence of regulatory actions. The guidance recognizes the importance of including uncertain effects to ensure robust analysis, rather than simply eliminating or minimizing impacts that carry uncertainty. It calls for 'credible, objective, realistic, and scientifically balanced' analysis of uncertainty. This includes presenting a range of scenarios where appropriate and clearly noting when irreversibility is material to the analysis, such as when 'regulating an exhaustible resource or an endangered species.'
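To make the discounting arithmetic concrete, here is a minimal Python sketch (the $1 billion benefit and 50-year horizon are invented for illustration; the 7%, 3%, and 1.7% rates are the ones discussed above):

```python
def present_value(future_value: float, rate: float, years: int) -> float:
    """Translate a dollar value received `years` from now into today's
    dollars at a constant annual discount rate."""
    return future_value / (1 + rate) ** years

# A hypothetical $1 billion benefit realized 50 years from now, valued
# under the old guidance's 7% and 3% rates and the proposed 1.7% rate.
for rate in (0.07, 0.03, 0.017):
    print(f"{rate:.1%}: ${present_value(1e9, rate, 50):,.0f}")
# 7.0%: ~$34 million; 3.0%: ~$228 million; 1.7%: ~$430 million
```

The higher the rate, the smaller distant-future benefits look in today's terms, which is why the choice of discount rate matters so much for intergenerational policy.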
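And here is a toy sketch of the income-weighting idea; the groups, dollar figures, reference income, and unit elasticity are all hypothetical, chosen only to show how a dollar of benefit to a lower-income household can count for more than a dollar to a richer one:

```python
# Hypothetical income-weighted benefit aggregation: the weight
# (reference_income / group_income) ** elasticity grows as group income
# falls, reflecting the higher marginal utility of income.
reference_income = 75_000   # assumed reference income
elasticity = 1.0            # assumed marginal-utility elasticity

groups = {
    # group name: (mean income in $, unweighted net benefit in $)
    "lower-income":  (30_000,  40_000_000),
    "middle-income": (75_000,  60_000_000),
    "higher-income": (150_000, 100_000_000),
}

unweighted = sum(benefit for _, benefit in groups.values())
weighted = sum(
    (reference_income / income) ** elasticity * benefit
    for income, benefit in groups.values()
)
print(f"Unweighted total: ${unweighted:,.0f}")  # $200,000,000
print(f"Weighted total:   ${weighted:,.0f}")    # $210,000,000
```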
The OMB is also undertaking a careful process of external peer review and public comment on this proposal, which will help ensure that the final guidance is technically robust, serves the public interest, and is durable across future administrations. In a forthcoming action, OMB will also be issuing Guidance for Assessing Changes in Environmental and Ecosystem Services in Benefit-Cost Analysis and is now accepting nominations for independent experts to participate in scientific peer review of this guidance.
The President’s Executive Order also directs OIRA, working together with relevant agencies, to take steps to help ensure more inclusive public participation in regulatory processes, including for underserved communities.
Room for further improvementWhile the proposal to update Circular A-4 makes important strides forward, there are some areas where further improvements are needed. Four specific aspects that OMB should address in the final proposal are:
- Regular updates to the risk-free consumption discount rate as the 30-year average Treasury rate changes over time.
- Adoption of a declining discount rate schedule for longer time horizons, in keeping with the latest economic literature on this topic.
- Clearer guidance on conducting distributional analysis of the impacts of regulations, including specifying when to require it rather than leaving it optional, and providing more direction on how to incorporate distributional impacts for historically marginalized and disadvantaged communities. Distributional analyses should be a requirement for health- and safety-related regulations. Given the clear evidence of racial disparities in exposure to pollution and the consequent negative health outcomes in our nation, especially for Black, Latino, and Indigenous populations, it is critical that distributional analyses not be done simply on the basis of income.
- A specific call for a regular cycle of updates based on the best available science and economics.
Our nation's ability to advance policies that safeguard the broad interests of the public and improve the well-being of all people depends on the tools and frameworks we use to inform regulatory decisions.
Examples abound of how the current, outdated Circular A-4 guidance has fallen short, leading to less protective regulations and even regulations that exacerbate long-standing inequities and injustices. The cost-benefit analyses used by the US Army Corps of Engineers, which rely on the monetized value of property and infrastructure to calculate the costs and benefits of its investments, have left lower-income communities less well defended from flooding than richer communities. EPA regulations have not adequately protected the health of environmental justice (EJ) communities marked by a legacy of structural racism. And investments in climate action that will be critical for the well-being of our children and grandchildren are heavily discounted, made to seem less beneficial by inappropriately high discount rates.
OMB has a unique opportunity now to finalize robust, science-based, and equitable guidance through critical updates to Circular A-4. We look forward to these much-needed and long-overdue changes and improvements being finalized expeditiously so agencies can implement them, and we can all reap the benefits of better policies and regulations.
Atlantic Hurricane Season 2023: the Good, the Bad, the Ugly—and More
Another Atlantic hurricane season is upon us, starting June 1st. What will this season bring? It is hard to foretell, but some data can help us with the basics.
First of all, the goodColorado State University released its forecast (usually the first out every year), which projects an Atlantic hurricane season somewhat weaker than recent years, with 13 named storms (winds of 39 mph or higher), six hurricanes (winds of 74 mph or higher), and two major hurricanes (Category 3, 4 or 5, with winds of 111 mph or higher).
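As a quick reference for those thresholds, here is a minimal Python sketch (the cutoffs are the ones cited in the forecasts above; the sample wind speeds are arbitrary):

```python
def storm_category(wind_mph: float) -> str:
    """Classify a tropical cyclone by sustained wind speed, using the
    thresholds cited above: 39 mph for a named storm, 74 mph for a
    hurricane, and 111 mph for a major hurricane (Category 3+)."""
    if wind_mph >= 111:
        return "major hurricane"
    if wind_mph >= 74:
        return "hurricane"
    if wind_mph >= 39:
        return "named (tropical) storm"
    return "tropical depression"

for wind in (45, 80, 130):  # arbitrary sample speeds
    print(f"{wind} mph -> {storm_category(wind)}")
```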
The National Oceanic and Atmospheric Administration (NOAA) forecast, released on May 25, calls for similar numbers: 12 to 17 total named storms, of which 5 to 9 could become hurricanes, with 1 to 4 being major. NOAA has a 70% confidence in these ranges (but see more about uncertainty below).
This forecast is a bit different from (and note, better than) first forecasts for recent years:
- 2022: 14 to 21 named storms, with 6 to 10 hurricanes and 3 to 6 of those becoming major hurricanes;
- 2021: 15 to 21 named storms, with 7 to 10 becoming hurricanes, of which 3 to 5 could become major hurricanes;
- 2020: 19 to 25 named storms, with 7 to 11 turning into hurricanes, 3 to 6 of which would be major.
Another good fact: unlike in the seven straight years from 2015 to 2021, neither 2022 nor 2023 had a named storm form before the official start of the season.
Source: https://www.noaa.gov/news-release/2023-atlantic-hurricane-season-outlook The (maybe) badWe certainly know there is usually quite a difference between the forecast and the real numbers at the end of each season. 2022 fortunately kept to the lower end of the forecast range, with 14 named storms, eight hurricanes, and two major hurricanes. 2021 ended up being the third most active season on record, with 21 named storms and seven hurricanes, four of them major (two Category 3 and two Category 4). 2020 went well above the forecast, with a record 30 named storms, 14 of which turned into hurricanes, seven of those becoming major hurricanes. 2020 was also the second time in tropical storm naming history (after 2005) that the National Hurricane Center had to use Greek alphabet letters to name storms after the regular list of names was exhausted.
These numbers show us that forecasts are useful but not binding; they come with an associated probability that those numbers will actually happen. NOAA stated that there is a "40% chance of a near-normal season, a 30% chance of an above-normal season and a 30% chance of a below-normal season." Those odds tell us the actual season can go pretty much anywhere: near-normal, above-normal, or below-normal. No wonder this year's outlook is said to carry "larger-than-normal uncertainty," likely linked to uncertainty around El Niño formation and strength.
(A little bit of science behind the forecasts)
These forecasts are based on some pretty standard measurements, monitored throughout the year, plus some relevant factors. Conditions behind this year's forecasts are conducive to a somewhat below-average season, mainly due to the predicted development of an El Niño by late summer or early fall, which historically tamps down Atlantic tropical cyclone (i.e., tropical storm and hurricane) activity.
Depending on how early and how strongly an El Niño develops, updated forecasts (usually released during the season to account for changes in predictor variables) are likely to change substantially. However, if an El Niño does not develop soon enough or strongly enough, sea surface temperatures are already so high (similar to those of the record-breaking 2020 season) that 2023 may end up very busy.
The National Oceanic and Atmospheric Administration’s Climate Prediction Center said there is a high chance El Niño conditions will form between May and July, and a higher probability it will form by fall (80% to 90%). It also said there is a high chance that it will be a strong El Niño. However, overall, there is a large uncertainty in this year’s forecast. We wait.
The uglyWe know it takes only one hurricane making landfall to cause a huge impact. The fact that the mean hurricane intensification rate has increased significantly along the Atlantic coast over the last 40 years is a worrisome trend (this trend was not detected for the Gulf coast). Previous studies have also identified the trend of rapid intensification (see here and here).
A hurricane that intensifies faster makes forecasts trickier and shortens warning times, leaving coastal communities and emergency responders with less time to prepare and react. This is particularly worrisome for communities that lack the resources to properly prepare and, if needed, evacuate. As mentioned in a previous blog I co-authored with my colleague Juan Declet-Barreto, disaster relief and recovery measures have lagged for Black, Latinx, and low-income people, and communities of color receive less disaster relief and recovery funding than white communities.
The Colorado State University forecast report lists the following probabilities of major hurricanes making landfall:
• 44% for the entire US coastline
• 22% for the US East Coast including the Florida peninsula
• 28% for the Gulf Coast from the Florida panhandle westward to Brownsville
• 49% for the Caribbean
And there are other concernsIn addition to the numbers and probabilities themselves, there are various other issues related to hurricanes and their impacts. Yale Climate Connections reported that, based on analyses from the National Hurricane Center, most hurricane-related deaths are not directly caused by a hurricane's storm surge or wind, but by carbon monoxide poisoning, electrocution, vehicle incidents, heat-related issues, and other indirect causes, many of them far from the coast, such as what happened in 2021 with the remnants of Hurricane Ida. Something to keep in mind when preparing for a hurricane.
Also, a 2022 study of Florida residents showed that repeated exposure to hurricanes was positively associated with post-traumatic stress symptoms, generalized worries, global distress, and functional impairment. The study covered exposure to hurricanes Irma and Michael, strong hurricanes that struck in successive years, and included direct and indirect exposure as well as exposure to media coverage of hurricane damage and potential hurricane threats. It showed that people did not "acclimate" to the impacts; rather, the psychological symptoms accumulated and intensified, suggesting the potential for a mental health crisis.
All this shows that hurricane impacts go well beyond the physical and the local. It is important for those working in disaster preparedness, relief, and recovery, as well as decisionmakers, to account for all possible consequences: not only the socioeconomic and environmental effects on an impacted community, but also the effects on individual people's lives and livelihoods, which can linger for a long time, so that many may not have fully recovered before the next hurricane hits. This story has some examples of these lingering consequences, which many people don't realize are there and need to be addressed.
And of course, because hurricanes are becoming stronger, wetter, slower, and more destructive, and all these trends have been linked to anthropogenic global warming, it is essential and urgent that nations transition away from fossil fuels and invest in renewable, clean energy. As I have said before, nations must act now to reduce emissions and avoid the worst of climate change and its impacts, keeping communities safe, listening to the science, and safeguarding critical systems across the globe.
Biomethane Threatens to Upend the Clean Hydrogen Tax Credit
The Inflation Reduction Act's new hydrogen production tax credit, known as code 45V, is intended to incentivize a shift to low-carbon hydrogen production by offering producers a credit that increases in value as the carbon emissions associated with the produced hydrogen decline. With an outsized credit for the lowest-carbon tier, the incentive's aim is clear: drive deployment of hydrogen production technologies that will be needed by, and aligned with, the nation's overall clean energy transition.
But instead of meeting that straightforward aim, a series of implementation loopholes threaten to fully undermine it. This includes loopholes related to biomethane, whereby heavily polluting fossil fuel-fired hydrogen production facilities, the very facilities the tax credit is trying to incentivize a shift away from, can cloak themselves as "clean" and reap full tax credit rewards without having done anything but push paper around.
As a result, billions of dollars of public funds meant to drive the cleaning up of hydrogen could actually result in the bolstering of fossil fuel incumbents, not new clean facilities. Worse, by incentivizing hydrogen production pathways entirely misaligned with the true needs of the nation’s clean energy transition, the harmful consequences of allowing these loopholes could be felt long into the future.
The Treasury Department, in collaboration with the Department of Energy and the Environmental Protection Agency, is responsible for protecting against such outcomes by developing guidance for tax credit implementation to ensure qualifying “clean” hydrogen is truly clean.
To date, most stakeholder engagement has focused on the lifecycle greenhouse gas (GHG) accounting framework required to accurately report the carbon intensity of electrolytically produced hydrogen, or hydrogen generated by using electricity to split water into hydrogen and oxygen. As detailed in this accompanying blogpost, that’s because the process is simultaneously the likeliest to meet the highest-qualifying credit tier, as well as the likeliest to unintentionally cause significant increases in carbon emissions if there are insufficiently rigorous implementation guardrails in place.
But the issues demanding rigorous attention in tax credit implementation don’t stop there.
Critically, the treatment of biomethane (methane produced from the anaerobic digestion of organic matter such as animal manure and sewage, or captured from landfills) threatens to undermine the veracity of carbon intensity calculations for both the steam methane reforming and electrolytic pathways.
While it may seem obscure, the consequences are not. Simply on the basis of outdated biomethane assumptions and permissive paper accounting, heavily polluting, fossil fuel-fired hydrogen production projects could suddenly count as “clean.”
As it finalizes tax credit implementation guidance, the Treasury Department must cinch this loophole tight by disallowing: 1) fuels counting as "carbon negative," 2) the offsetting of pollution, and 3) the use of "book-and-claim" accounting by fossil fuel users.
How do biomethane, lifecycle carbon accounting, and the tax credit interact?Biogas is the direct product of anaerobic digestion, composed primarily of a mixture of methane and carbon dioxide plus much smaller amounts of other gases. When biogas is processed to remove the carbon dioxide and those other gases, it becomes biomethane, also sometimes referred to as "renewable natural gas."
Once biomethane has been processed, it is indistinguishable from the traditional fossil methane coursing through the gas grid. That means that when it's burned, the resulting carbon dioxide emissions are the same, as are the potent upstream climate impacts when it leaks out of gas infrastructure.
However, because of lifecycle GHG emissions accounting—the process by which the carbon intensity of produced hydrogen is evaluated for the tax credit—otherwise-equivalent hydrogen production pathways that run on fossil methane versus biomethane can end up at completely opposite ends of the hydrogen carbon intensity spectrum, with one heavily polluting and one, somehow, “clean.”
This split in outcomes derives from the fact that lifecycle GHG accounting considers the upstream emissions from fuel production and sourcing alongside the emissions that occur at the final point of fuel production or use. Such a holistic perspective is important. Think, for example, of how the climate benefit once ascribed to gas-fired power plants over coal-fired plants rapidly eroded as a growing understanding of the magnitude and impact of upstream methane leakage dramatically shifted the lifecycle impact of using gas, even though the smokestack comparison never changed.
However, decisions around where lifecycle GHG boundaries are drawn, and what assumptions underpin the included components, have consequences that can lead to entirely different conclusions.
These boundary and assumption questions are at the heart of the debate over implementation criteria for the electrolytic hydrogen production pathway, as referenced above. But they are also highly relevant to the treatment of biomethane and, similar to choices around lifecycle accounting for electrolytic pathways, are at great risk of undermining the whole point of the tax credit if the Treasury Department fails to get it right.
Here’s why: Under certain assumptions, biomethane can be calculated as “carbon negative”—and everything goes sideways from there.
How can biomethane be considered “carbon negative”?The fundamental question that biomethane lifecycle accounting turns on is: What would have happened to the original biogas if it had not been used for the production of hydrogen?
Under certain outdated accounting frameworks, the baseline assumption is that without voluntary intervention, methane from certain pollution sources would simply have been vented to the atmosphere. Capturing and using it, therefore, would swap potent methane emissions for comparatively lesser carbon dioxide emissions. When that framework is translated into lifecycle accounting, the GHG benefit of avoiding vented methane swamps the carbon dioxide subsequently generated from the gas’s use, which ultimately leads to eye-poppingly negative carbon intensity values. That means use of the fuel, even when it results in carbon dioxide emissions, would be calculated as a net positive for the climate.
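A stylized back-of-the-envelope version of that arithmetic (these are standard approximations, not numbers from the tax credit guidance): burning one tonne of methane produces about 2.75 tonnes of CO2, while "avoiding" the venting of that same tonne is credited at methane's 100-year global warming potential of roughly 28 tonnes of CO2-equivalent.

```python
# Stylized lifecycle balance for 1 tonne of biomethane assumed to have
# otherwise been vented. 44/16 is the CO2-to-CH4 mass ratio from
# combustion stoichiometry; 28 is an AR5-era 100-year GWP for methane.
combustion_co2 = 44 / 16        # +2.75 t CO2 from burning 1 t of CH4
avoided_venting_credit = -28.0  # -28 t CO2e booked as "avoided" emissions

net = combustion_co2 + avoided_venting_credit
print(f"Net accounted emissions: {net:+.2f} t CO2e per tonne of fuel")
# About -25 t CO2e: on paper, using the fuel is a big climate win.
```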
This assumption is a relic of outdated perspectives where climate pollution is costless, businesses undertaking operations that result in climate pollution have no responsibility to limit it, and ongoing pollution is taken as a given. Carbon-negative accounting associated with limiting that pollution therefore functions as a de facto offset program.
If this perspective ever held water, it unequivocally now does not.
Today, climate pollution is increasingly either directly regulated—see, for example, recent EPA standards for carbon pollution from vehicles, power plants, and oil and gas operations—or affixed with a fee. And for the pollution sources that have escaped actionable accountability to date, wherever they sit in the economy, it’s a question of when, not if.
As a result, the right baseline assumption for biomethane lifecycle GHG accounting should be one where the methane has been entirely captured or avoided to start, such that there is no assumed net climate benefit from its use.
Adopting this approach would also help to avoid the present perverse incentive of rewarding the production of otherwise-avoidable methane, leading to bad outcomes like increasing the size of concentrated animal feeding operations (CAFOs) or keeping organic waste in landfill streams instead of pursuing policies to limit and divert it. This is a particularly relevant concern when it comes to biomethane, as many of the sources from which it is derived bring with them towering environmental injustices and acute harms that reach far beyond their climate contributions.
What are the implications of carbon-negative fuels for the hydrogen tax credit?Because carbon-negative fuels effectively open the door to offsetting, their presence can wreak havoc in climate policies that were not explicitly designed with their inclusion in mind. This makes rigorous guardrails around their incorporation key. For the hydrogen production tax credit, there are three primary concerns.
First, by using even just a small share of carbon-negative methane, traditional fossil fuel-based resources can suddenly be calculated as “clean.”
This is clearly relevant for the steam methane reforming and autothermal reforming pathways, as these facilities could qualify for the highest credit tier simply on the basis of a minor share of the fuel being used, not on any ability to capture carbon, which is what the credit had ostensibly been designed to create space for.
However, the implications are also relevant to electrolytic hydrogen production pathways, as carbon-negative fuel accounting could also mean that gas-fired power plants running on a minor portion of biomethane could be considered “clean” as well—and could even net out some blend of coal-fired power. This would entirely undermine the otherwise-fought-for rigor around the electrolytic production path.
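To see how little paper biomethane this requires, consider a stylized blend (all numbers hypothetical): if the fossil pathway's lifecycle intensity is +10 units per unit of fuel and the biomethane is booked at -90, then a roughly 10% biomethane share nets the whole stream to zero.

```python
# Hypothetical blended carbon intensity: a small share of deeply
# "carbon-negative" biomethane averages away a fossil stream's emissions.
fossil_ci = 10.0       # illustrative lifecycle intensity, fossil methane
biomethane_ci = -90.0  # illustrative carbon-negative booking

for share in (0.0, 0.05, 0.10, 0.20):
    blended = (1 - share) * fossil_ci + share * biomethane_ci
    print(f"{share:.0%} biomethane -> blended intensity {blended:+.1f}")
# A 10% paper share already brings the blend to 0.0, i.e., "clean."
```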
Second, because of the above, the “clean hydrogen” tax credit will subsidize hydrogen production projects that are now, and will always be, heavily polluting, not clean.
These projects are at further risk because even if the fuel remains calculated as carbon-negative going forward—which it will not—biomethane is extremely resource-constrained and is of interest to many fossil fuel-dependent industries. This means that it is certain to increase in price over time, with priority going to those industries that do not have other means of decarbonizing their processes—which is very much not the case for hydrogen production.
The tax credit, then, will waste enormous amounts of money propping up and building out a fossil fuel-based hydrogen industry entirely incompatible with the clean energy transition, briefly “clean” on paper but, ultimately, heavily polluting to the end.
Finally, frameworks allowing carbon-negative fuels have traditionally treated the gas grid as a continuous whole and allowed for the decoupling of environmental attributes from physical deliverability, so these implications would be capable of stretching to reach any and every polluter across the country. This means, for example, that a hog farm in Missouri could produce biomethane claimed by a steam methane reformer in California, or a dairy farm in Minnesota could produce biomethane claimed by a gas-fired power plant in Florida.
Beyond the straightforward absurdity of this approach, as well as the challenging questions it introduces for appropriate accounting for highly consequential gas grid methane leakage, this nation-spanning “book-and-claim” framework would also undermine the deliverability requirements central to upholding the rigor of the electrolytic production pathway.
How can the Treasury Department close these loopholes?If Treasury allows carbon-negative accounting for biomethane, it will effectively be using taxpayer dollars intended for building out clean hydrogen infrastructure to instead offset the costs of methane pollution abatement from an entirely different set of sectors. Furthermore, the hydrogen production projects that are enabled as a result of carbon-negative accounting will be neither durable nor scalable contributors to climate progress, and they will come at the expense of supporting truly clean investments.
The implementation guidelines under development by the Treasury Department provide an opportunity to guard against these harmful and misguided outcomes. To start:
- Treasury must not allow carbon-negative accounting—in any part of the lifecycle GHG analysis framework. Such accounting would turn the tax credit into a de facto offset program and undermine the incentive’s ability to direct investments toward its true purpose: supporting the buildout of durably climate-aligned production projects.
- If, despite these risks, Treasury still allows carbon-negative values, it must not allow carbon-negative fuels to be used for offsetting or netting (see the sketch after this list). In practice, this would mean that a carbon-negative fuel would be cut off at "zero" and could not average out heavily polluting production, whether electrolytic production across hours, or steam methane reforming across fuel blends or varying carbon capture levels.
- In its treatment of biomethane in the hydrogen tax credit, Treasury should not allow use of book-and-claim accounting by fossil fuel users. This practice enables steam methane reformers and gas-fired power plants that are actually running on fossil methane to use paper accounting to declare their processes “clean” without any shift in technology or practice, fully counter to the intent of the program and unhelpfully diverting funds from clean hydrogen producers to biomethane polluters. This could also lead to pernicious ends in the electrolytic pathway if netting is allowed, as any electrolysis project coming in just above the GHG threshold could “source” negative carbon intensity electricity from the nearest gas plant to even out their numbers.
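Returning to the stylized blend above, here is what the second recommendation's zero cutoff would accomplish with the same hypothetical numbers: flooring the biomethane's bookable intensity at zero removes its power to average away fossil emissions.

```python
# Same hypothetical blend as before, with and without a zero floor on
# the carbon-negative fuel's bookable carbon intensity.
fossil_ci, biomethane_ci, share = 10.0, -90.0, 0.10

without_floor = (1 - share) * fossil_ci + share * biomethane_ci
with_floor = (1 - share) * fossil_ci + share * max(biomethane_ci, 0.0)
print(f"Without floor: {without_floor:+.1f}; with floor: {with_floor:+.1f}")
# Without floor: +0.0; with floor: +9.0 (the fossil share stays visible).
```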
Without question, methane leaked to the atmosphere from manure lagoons and other sources of biogas is deeply harmful to the climate. The top priority should be to avoid its creation in the first place—but the methane that remains should be captured and put to productive use. However, in evaluating the lifecycle GHG value of that resulting methane, the starting assumption should be that the methane is a source of pollution requiring direct action, not an optional offset.
This is all the more critical given that sources of biomethane are often heavily entangled with significant environmental and moral injustices, far more complex than a singularly climate-focused policy can address, and at great risk of being exacerbated, not alleviated, if the Treasury Department were to keep such an outdated and misguided assumption in place.
California Legislature Could Make Overdue Changes to Water Rights if These Three Bills Pass
For the first time in several decades, policymakers in Sacramento seem poised to actually do something about California's dysfunctional water rights system. Three promising bills are winding their way through the Legislature this session. All three just made it out of the committee review process and are slated to be voted on by June 2. These incremental changes are a long-overdue start toward addressing California's outdated and unjust water rights system.
The package of water rights bills before the Legislature offers critical updates to the State Water Resources Control Board’s (“Water Board”) ability to make informed and timely water management decisions and build climate resilience for the future for everyone in the state.
“Water is protected for the use and benefit of all Californians. California’s waters cannot be owned by individuals, groups, businesses, or governmental agencies.”
State of California Water BoardAs I recently explained, the origin of California's water rights system is steeped in the bigotry of the 19th century. The system privileges a class of senior water rights holders to the detriment of others. A lot must be done to change the water rights system in California; yet even communities that have been "systematically marginalized" by the system share an interest in improving its enforcement, including stopping water theft, as outlined in the 2022 California Curtailment Cases Amicus Brief.
These bills are simple reforms that do not represent a paradigm shift, but rather provide the state with common sense tools and oversight authority to ensure the current water rights system works. If passed, they will reduce uncertainty and facilitate more adaptive water management.
The three water rights modernizations under deliberation are:
- Assembly Bill 460 (Bauer-Kahan): Water Board can immediately stop illegal diversions with “interim relief” powers and higher penalties for water theft.
- Assembly Bill 1337 (Wicks): Water Board can curtail diversions regardless of basis of right (includes pre-1914 rights).
- Senate Bill 389 (Allen): Water Board can investigate and verify pre-1914 water rights.
Should we expect a political showdown and stormy fight on the floor? Or will California’s Assembly and Senate be willing to pass common sense modernizations to right-size the water rights system to the challenges posed by the 21st century?
Let’s dig into each bill and explore why California needs them to become law and how the bills could improve the water rights system.
Assembly Bill 460 clarifies the Board's emergency powers to hold illegal water users accountableAB 460 (Bauer-Kahan) protects farmers, ranchers, tribes, disadvantaged communities, wildlife, and other users from the bad actors who try to take advantage of the lax water rights enforcement process at the Water Board.
This legislation is needed to give the Water Board the ability to administratively provide "interim relief." In other words, the Board needs authority to issue temporary restraining orders to immediately stop water theft and prevent irreversible harm to other water users and/or the environment. The bill would only affect those who violate existing law and does not introduce new restrictions on legal water rights holders. It also increases financial penalties to better discourage water theft.
One of the motivations for AB 460 is to prevent another "Shasta River Rebellion," or "Standoff," which made headlines in August 2022. The Shasta River Water Association (SRWA) deliberately and illegally diverted flows of up to 30 cubic feet per second from the Shasta River east of Yreka for eight days, in direct violation of the Water Board's emergency drought curtailment orders. During this period, the river's flow dropped to about a third of the required minimum summer flow (18 of 50 cubic feet per second). That flow is necessary to protect habitat for species like juvenile salmon. The SRWA was fined only $4,000 for its water theft.
The SRWA delivers water to a group of around 110 farmers and ranchers and one lumber mill located between the towns of Grenada and Montague near the Oregon-California border. That fine amounted to roughly $36 apiece. The SRWA is a senior water rights holder with a single 1912 appropriative water right for irrigating 3,854.77 acres, decreed by a 1932 court decision:
Superior Court of California in and for the County of Siskiyou: Determination of Relative Rights, Based Upon Prior Appropriation, Judgment and Decree No. 7035. Source: Water Board, p. 172.The SRWA's actions last August caused irreparable harm to salmon and tribes in the watershed. The Shasta River is one of the last salmon producers for the Klamath River.
During the Assembly's February informational water rights hearing, Karuk Tribal Council Member Arron "Troy" Hockaday (Karuk) spoke. Council Member Hockaday's contribution runs from about 2:11 to 2:26, and it is worth listening to for yourself.
Driven by drought and water quality conditions in the Klamath River, the Karuk have committed since 2017 to harvesting only 200 salmon for their ceremonies. As he explained, the tribe continues the practice today: to protect their own future needs, they limit their current use and remind us that fish are the most senior water rights holders. Hockaday described how, just two weeks before the SRWA's illegal diversion from the Shasta River, a thunderstorm passed over the McKinney Fire, sending ash and debris into the river and creating a 50-mile stretch of the Klamath River with zero oxygen that killed tens of thousands of fish:
Then two weeks later the farmers tell us they’re gonna take water from the Shasta River. And they took it. I cried that day, I’m still emotional right now. Living here and seeing what they did is devastating. It’s our future. It’s our future for our children, our culture, our way of life. And to have us get kicked when we’re already down is a very sad day.
Council Member Arron “Troy” Hockaday (Karuk)Perhaps this is an extreme and flagrant example. But with climate change, more droughts, and periods of scarcity before us, might this standoff set an ugly precedent for other senior rights holders? AB 460 seeks to prevent just that.
Due to restrictions in current law, it can take weeks or longer for the Board to stop unauthorized water use. In the meantime, illegal diverters like the SRWA can drain rivers, as they did last August, while the Board goes through a complicated process to stop the theft.
In addition to giving the Water Board the authority to immediately halt water theft like that of August 2022, AB 460 increases the enforcement penalties for those who violate current law. If AB 460 is enacted as proposed, violators could be subject to fines of up to $10,000 per day plus $2,500 per acre-foot of water diverted.
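For a rough sense of scale, here is a back-of-the-envelope sketch assuming the 2022 Shasta diversion ran at the full 30 cubic feet per second for all eight days (one cfs flowing for a day moves about 1.98 acre-feet):

```python
# Hypothetical maximum penalty for the 2022 Shasta diversion under
# AB 460's proposed fines, assuming a constant 30 cfs over 8 days.
CFS_DAY_TO_ACRE_FEET = 1.9835  # acre-feet moved by 1 cfs in 24 hours

days, flow_cfs = 8, 30
acre_feet = flow_cfs * days * CFS_DAY_TO_ACRE_FEET  # about 476 acre-feet

penalty = days * 10_000 + acre_feet * 2_500
print(f"~{acre_feet:,.0f} acre-feet diverted; potential fine ${penalty:,.0f}")
# On the order of $1.3 million, versus the $4,000 actually levied.
```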
Assembly Bill 1337 levels the playing field for all rights holdersThe second bill before the Legislature, AB 1337 (Wicks), gives the Water Board the authority to curtail all water rights when there is a shortage. During California's frequent droughts, the Water Board needs to be able to balance residential, cultural, agricultural, industrial, and environmental water demands. Some senior water rights holders have sued the Water Board over its orders to stop (curtail) water uses during drought emergencies.
The 2022 California Curtailment Cases’ court of appeal ruling suggested that a legislative fix, which AB 1337 provides, is necessary to give the Water Board broader authority to enforce the current water rights system, including senior water rights holders. Recently, reporter Dan Walters summarized the legal history motivating this bill. As he explains it, AB 1337 would finally give the Water Board “the legal authority to curtail diversions from rivers—even by those who now hold the most senior water rights, those gained prior to the state asserting authority over water in 1914.”
I previously explained how unjust the current system is, in part because pre-1914 water rights were generally unavailable to minorities. Giving the Water Board jurisdiction over this group of senior water rights holders ensures that everyone participates fairly in the state's water system.
This irrigation canal in Yolo County is one of many throughout California that move surface water from rivers inland for use by farms and cities. Source: Amanda Fencl. Senate Bill 389 gives the Board authority to verify senior water rights holders' claimsShockingly, major classes of water rights still do not require any kind of permit or license from the Water Board. As the Water Board explained in February, these represent roughly 45% of all water rights in the state and account for 30-35% of all surface water diversions by volume. The bill gives the Board the ability to gather information to investigate these claims and determine whether they are accurate or, as is sometimes the case, inflated. Verifying senior water rights could make more water available for other rights holders during droughts.
SB 389 (Allen) would give the Water Board the ability to investigate and verify pre-1914 rights by determining the basis of each claimed diversion (riparian right, prior appropriation, or some other basis). A unified permitting system, as proposed by this bill, would "reduce uncertainty for all right-holders and enable more efficient and transparent real-time management of water," according to a 2015 PPIC report.
How can you help?For the first time in decades, our state legislators are taking on some of the most broken pieces of the current water rights system. Yet, there is fierce opposition from the water industry to stop them and preserve the status quo. UCS and our coalition partners are asking for public support to improve the outdated water rights system.
Together, if passed, these bills could give the Water Board the ability to regulate all water rights holders in the state equally; to immediately stop water theft that could cause irreparable harm to others; and to actually identify and verify the most senior water rights claims in the state.
These modest changes are a first step in retrofitting the water rights system for the 21st century challenges ahead.
Do you live in California? Click here to take action!
Western Wildfires are Burning Through Local and State Budgets
As a Californian, summer still holds the promise of family vacations and visits to favorite swimming holes, but it increasingly triggers concerns about drought, extreme heat, and wildfires—or what we at UCS first named “danger season.”
Both extreme heat and wildfires are directly linked to climate change. Previous research by UCS scientists actually quantified the contribution of major carbon producers (like Chevron and ExxonMobil) to increased temperatures, and now we’ve done the same for wildfire.
For years, fossil fuel companies have socialized the costs of their pollution while privatizing the benefits. Since local and state governments are on the frontlines of paying for worsening wildfires, they should also be on the leading edge of holding fossil fuel companies accountable.
Taxpayers are, largely, picking up the bill for worsening wildfiresIf you’ve been following our new research, The Fossil Fuels behind Forest Fires, you know that almost half of the rise in fire-danger conditions in western North America since 1901 can be traced to carbon pollution from 88 fossil fuel companies and cement manufacturers. This alarming finding clarifies the significant role and responsibility of fossil fuel companies to not only stop their harm moving forward, but also to address damage they have already done.
It is clear that fossil fuel companies need to take immediate measures to limit emissions, and it is also clear that they will continue to drag their feet and deceive the public for as long as possible.
Perhaps less obvious is the importance of state and local governments in holding the fossil fuel industry accountable.
Adapted from The Costs of Wildfire in California, An Independent Review of Scientific and Technical Information. Source: CCST 2020.Wildfires are expensive—before, during, and after they occur. In recent years, the costs of wildfires have skyrocketed. According to Statista, emergency firefighting costs have increased exponentially over the last ten years in California, from $140 million in 2012 to more than $1 billion in 2021 and 2022. And this is only a small portion of total costs. It does not include efforts to prevent and mitigate fires before they start, or the losses to buildings and infrastructure, human health, ecosystems, and economies after they are extinguished. To date, much of the costs associated with wildfires have been passed on to the public.
A recent report from The Pew Charitable Trusts found that, unlike the federal government, state and local governments must balance their spending and revenue every budget cycle while also navigating the direct impacts of fires on communities. Importantly, the study found that states most commonly draw on general fund revenue for wildfire management, meaning taxpayer dollars. And in years with tight budgets, those dollars can come at the expense of other critical services such as schools and health care.
Wildfire suppression expenditures in California from fiscal year 2012 to fiscal year 2022, in million U.S. dollars. Source: Statista 2023. Three ways local and state governments can hold the fossil fuel industry accountableWhile state and local governments must pass more proactive wildfire policies (such as emergency funds) to help manage rising costs, holding the fossil fuel industry accountable is a critical part of the solution. Here are three ways that state and local governments are ensuring that the fossil fuel industry, rather than taxpayers, pays for its harms.
1. LitigationLocal and state governments can preserve access to justice through the courts for those experiencing climate impacts.
The Guardian writes that 2023 will be a watershed year for climate litigation. In addition to lawsuits filed by individuals and groups harmed by climate change, there are pending climate change lawsuits filed by more than 40 municipalities and states across the U.S. and its territories against the fossil fuel industry for fraud and financial damages, among other charges.
This includes lawsuits filed by the state attorneys general of Connecticut, Delaware, Massachusetts, Minnesota, Rhode Island, Vermont, and the District of Columbia, along with cities like Charleston, South Carolina. While many of these cases have been stuck in procedural limbo, a recent Supreme Court ruling means they will finally advance in state courts. And, for the first time, fossil fuel company defendants will be forced to disclose internal company documents and correspondence.
In the fight against “Big Tobacco,” these required disclosures were a turning point, revealing the tobacco industry’s playbook of denial and deception—a playbook that has largely been emulated by the fossil fuel industry.
Source: YouTube. 2. InvestmentsState and local governments manage a lot of money, whether that be annual state, county, or local budgets or long-term investments for retirement programs. They should include the financial and economic risks posed by climate change when making investment decisions on behalf of constituents. They can also choose to stop the money pipeline to fossil fuel companies by not banking with the private-sector banks that finance the fossil fuel industry.
In addition, state and local entities can pursue “environmental, social and governance” (ESG) investing. At least two states have laws in force requiring public entities to develop ESG policies or standards that apply to state investments, particularly concerning public pension fund investment decisions. Lawmakers in at least four states have introduced bills to require or encourage the consideration of ESG factors in state investment decisions, though there has been a backlash in some conservative states.
3. OversightState and local governments should use every tool at their disposal to require greater transparency from the fossil fuel industry and pressure fossil fuel companies and their investors to
- stop engaging in greenwashing or funding the spread of climate disinformation.
- fully disclose the climate impacts and economic risks of their businesses.
- update their business models to enable sharp emissions reductions from their products and operations at a pace and scale consistent with the goals of the Paris Agreement on climate change.
One example of greater oversight is California’s response to unprecedented spikes in gasoline prices during the summer of 2022. Fossil fuel companies did not provide sufficient information to explain the increased gasoline prices and, subsequently, failed to attend a California Energy Commission (CEC) hearing to solicit additional information.
In response, the Legislature passed new legislation in March 2023 to create the Division of Petroleum Market Oversight, housed in the CEC. This division has the authority to gather additional information from the fossil fuel industry, including the ability to subpoena records from oil companies and to refer any failures to respond to the California Attorney General for prosecution.
Emissions from the products of fossil fuel companies and cement manufacturers have fundamentally reshaped the climate of western North America and left behind a scarred, charred landscape in which people, communities, and the ecosystems that enable their existence are suffering. While we are making progress in addressing wildfire risk, the resilience-building needed is vast and, to date, the general public has largely been footing the bill. UCS’s new analysis underscores the responsibility of fossil fuel companies for a portion of the impacts and costs of coping with wildfires, and state and local governments have the power to hold them accountable. And they should.
Cropland Repurposing Can Bring Environmental, Socioeconomic, and Water Justice to California
There is not enough water in California to sustain our current practices, and everyone knows it.
In normal years and dry years alike, California’s agriculture, industry, and households pump more groundwater than we should. And when we do get wet years with deep snowpack and full reservoirs, we lack the infrastructure to recharge the underground aquifers that much of the state depends on.
This deficit leaves California in a state of never-ending drought and at permanent risk of water insecurity, even in years like this one when it rains a lot.
Worse still, in some places the ground is sinking by more than an inch (2.5 cm) per month, which reduces California’s potential groundwater storage. That aquifer space cannot be recovered: because of residual compaction, land subsidence will continue for decades or even centuries after pumping stops. Slowing subsidence significantly would require a substantial rise in groundwater levels.
The volatility of wet and dry years, the lack of water infrastructure, and the continued depletion of groundwater resources add up to a loss of the resilience California needs to cope with future droughts and preserve future food security.
So far, every proposal for achieving water sustainability and closing this mismatch between water supply and demand has come with substantial economic and job losses for the agricultural sector. Agriculture is by far the largest water user in California, accounting for 80% of the state’s water use.
While many farmers have improved water efficiency with drip irrigation, the water savings have translated into larger plantings of heavily irrigation-dependent crops. There have also been notable state and federal cuts to surface water deliveries, but these have often been offset by more groundwater pumping.
In other words, farmers’ water conservation efforts have been used (especially by large corporations) as a justification for even more unsustainable water use in California.
The good news is that we know what the problem is. The bad news is that we are not doing enough to solve it.
But how can we solve water scarcity and overuse without creating new problems? In other words, is there a way to reduce water use and keep everyone happy?
I believe there is: strategic cropland repurposing.
We have to use cropland differently
Strategic cropland repurposing means changing land use from an economic activity that produces negative side effects (such as harming people’s health and the environment) to new land uses that produce positive side effects.
For the past four years, I have been studying in detail how to do cropland repurposing well, in a way that benefits all of the stakeholders involved. Our team at the Union of Concerned Scientists, the SocioEnvironmental and Education Network (SEEN), the University of California Merced (Water Systems Management and the Sierra Nevada Research Institute), and other colleagues have found a way to achieve it. Our main study, “Water, environment, and socioeconomic justice in California: A multi-benefit cropland repurposing framework,” was recently published in the journal Science of the Total Environment.
But the truth is that this work has been hard on me. Not technically: our research team is full of talented professionals with sharp minds and innovative ideas. What was emotionally difficult was looking at the results for the first time and realizing what the analysis would mean for people I know.
I live in a region that depends on agriculture, and I see its impact on local communities firsthand. I genuinely like agriculture, and objectively analyzing the effects of conventional agriculture has been particularly hard for me.
What did our study find?
The team estimated the environmental and socioeconomic costs and benefits of retiring and repurposing cropland in one-mile (1,600 m) buffer zones around 154 disadvantaged rural communities in the Central Valley. Of these, 123 communities are in the San Joaquin Valley (the southern region of the Central Valley) and are home to half a million people.
Map of the Central Valley’s rural communities classified as disadvantaged by the California Department of Water Resources, with <3,700 acres (15 km²) of area. The Sacramento Valley contains 31 of these communities, while the San Joaquin Valley contains 123. Tulare County has the most (37), followed by Fresno (24) and Kern (20). Source: https://www.sciencedirect.com/science/article/pii/S0048969722070632.
We estimated the potential of these buffer zones to contribute to reductions in water use, pesticide use, toxic nitrate leaching, and greenhouse gas emissions, as well as their potential for managed aquifer recharge, and the economic and employment impacts of retiring that farmland and repurposing it for clean industries and solar energy.
For example, current land use within one mile of the disadvantaged communities around the Pixley National Wildlife Refuge in Tulare County, roughly halfway between Bakersfield and Fresno, looks like this: a mix of alfalfa (the most water-demanding crop in the region, grown to feed cows), almonds and pistachios (also highly water-demanding), and table grapes.
Example of cropland within and one mile around several disadvantaged rural communities in Tulare County. Source: https://www.sciencedirect.com/science/article/pii/S0048969722070632.
For places like the Pixley region, strategically repurposing cropland could improve socioeconomic conditions for residents and workers by creating new opportunities for clean industries and for landowners, and over time it could reverse the disadvantaged status of some small rural communities.
Retiring the cropland within one mile of these 154 smaller disadvantaged rural communities of the Central Valley could reduce agricultural water use by 1.77 million acre-feet per year (576 billion gallons, or about 2.2 billion cubic meters, per year). That is roughly the amount of water that two-thirds of all Californians (26 million people) use inside their homes in a year. The annual reduction in water use would be equivalent to the current average estimate of the San Joaquin Valley’s groundwater overdraft: 1.8 million acre-feet per year.
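For readers who want to check the unit conversions, here is a minimal sketch in Python; the per-person figure in the last lines is my own back-of-the-envelope estimate, not a number from the study:

```python
# Sketch: verify the water-savings unit conversions reported above.
ACRE_FOOT_GALLONS = 325_851   # gallons in one acre-foot
ACRE_FOOT_M3 = 1_233.48       # cubic meters in one acre-foot

savings_af = 1.77e6           # acre-feet per year retired within the buffers

gallons = savings_af * ACRE_FOOT_GALLONS
cubic_m = savings_af * ACRE_FOOT_M3
print(f"{gallons / 1e9:.0f} billion gallons/yr")   # ~577 billion gallons
print(f"{cubic_m / 1e9:.1f} billion m^3/yr")       # ~2.2 billion cubic meters

# Implied indoor use for 26 million people (two-thirds of Californians),
# a back-of-the-envelope figure, not one from the study:
per_person_gpd = gallons / 26e6 / 365
print(f"~{per_person_gpd:.0f} gallons/person/day")  # ~61
```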
If we consider only the reduction in groundwater use within that one-mile buffer zone around these communities in the San Joaquin Valley, it would cut water use by more than 40%, even though the affected area represents only 11% of total farmland. If water use were reduced through cropland repurposing while surplus water was also brought in from the Sacramento Valley, that combination could recover, on average, up to 80% to 90% of the San Joaquin Valley’s currently estimated unsustainable water extraction (although, given current drought forecasts, that may not be enough in the drier future we expect).
Health and environmental benefits beyond water savings
Retiring cropland within one mile of the 154 smaller disadvantaged rural communities of the Central Valley could also reduce agricultural nitrate leaching into local aquifers by 105,500 tons per year (233 million pounds per year). That is roughly one pound of toxic nitrate per person per day seeping into these communities’ aquifers that could be stopped.
Agricultural nitrate comes from fertilizers and is linked to several health conditions, including “blue baby syndrome,” miscarriages, and other conditions. Synthetic fertilizer use is also remarkably inefficient: more than 50% of the fertilizer applied in California’s Central Valley leaches into local aquifers, while 10% becomes potent greenhouse gases and 5% is lost as runoff. Only about a third of the nitrate applied as agricultural fertilizer remains in the soil and is used by crops.
Cropland repurposing would cut greenhouse gas emissions (mainly nitrous oxide) by the equivalent of 2.2 million metric tons of carbon dioxide per year. That is equivalent to the annual emissions of half a million fossil-fueled cars.
The study’s results are even more striking the deeper you dig: cropland repurposing could also have a large positive impact on the health of local residents and workers.
As I have written before, it is hard to hear stories of elderly people whose noses bleed after neighboring fields are sprayed with pesticides, and of children who systematically suffer from asthma in these disadvantaged rural communities surrounded by industrial agribusiness. Pesticide drift exposes people nearby to alarming amounts of pesticides that they inhale or that settle on their skin, their clothes, and the insides of their homes.
Retiring cropland within one mile of the 154 smaller disadvantaged rural communities of the Central Valley could eliminate the use of 5,388 tons of pesticides per year (about 12 million pounds per year).
Cropland repurposing could mean water security
In the Central Valley, 64 small disadvantaged communities (42% of those studied) are crossed by a river or canal, and 48 of those have excellent aquifer recharge potential.
Of the 154 communities, 139 have areas with moderately good or better recharge potential. Of those, 99 communities are within one mile of a canal or river.
In the southern part of the Central Valley (the San Joaquin Valley), 73 communities are within one mile of a river or canal and also have moderately good or better recharge potential.
For example, Teviston in Tulare County has excellent groundwater storage potential, is crossed by a river, and sits about 0.6 miles from a canal. Yet Teviston needed assistance during the 2012-2016 drought, and its wells failed again in 2021. Teviston is one of many clear examples of how land repurposing can create water security in California.
Aquifer recharge also has the potential to increase groundwater storage, reduce groundwater overdraft, and increase hydropower generation without substantially affecting environmental flows.
Teviston, Tulare County, and nearby disadvantaged communities, with their Soil Agricultural Groundwater Banking Index (SAGBI, an indicator of soil suitability for recharge). Teviston has excellent soil groundwater banking potential (SAGBI between 85 and 100), is crossed by a river, and sits about 0.6 miles from a canal; nevertheless, Teviston needed state assistance during the 2012-2016 drought, and its wells failed again in 2021.
Economic improvements for disadvantaged rural communities
Retiring farmland is controversial because, done in isolation, it can cause income and job losses. But done intelligently (that is, with a good strategy), our study revealed a very positive outcome worth considering. The potential aggregate losses from retiring cropland within one mile of each community could be as high as $4.2 billion per year and 25,682 jobs across the Central Valley. Those jobs pay an average of about $45,000 per year per position, but one agricultural job is often shared between two (or more) people, which means farmworkers’ actual wages are typically lower than that. Retiring this cropland alone averages out to a loss of $27.3 million per year in agricultural revenue and 167 jobs per community.
But the potential benefits of investing $27 million per community over ten years through cropland repurposing are up to $15.8 billion per year and 62,697 new jobs for the region. That averages out to $103 million in new revenue per year per community and 407 jobs per community.
In total, this scenario yields an annual equivalent value of $11.4 billion per year and 37,014 jobs that pay 67% more on average, an economic win for areas that have historically been disadvantaged. This is why cropland repurposing must be done strategically: we need to make sure it is carried out in a way that offsets the potential cost with more revenue and better job opportunities (more on our methodology here).
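The aggregate arithmetic is easy to verify from the figures quoted above; here is a minimal sketch, where the closing comment is my own reading of why the net value is $11.4 billion rather than a simple subtraction:

```python
# Sketch: reproduce the aggregate cost-benefit arithmetic quoted above.
communities = 154

loss_revenue = 4.2e9    # $/yr lost across the Central Valley if land is only retired
loss_jobs = 25_682

gain_revenue = 15.8e9   # $/yr from the repurposing investment scenario
gain_jobs = 62_697

print(f"avg loss/community: ${loss_revenue / communities / 1e6:.1f}M, "
      f"{loss_jobs / communities:.0f} jobs")        # ~$27.3M and ~167 jobs
print(f"avg gain/community: ${gain_revenue / communities / 1e6:.0f}M, "
      f"{gain_jobs / communities:.0f} jobs")        # ~$103M and ~407 jobs
print(f"net new jobs: {gain_jobs - loss_jobs:,}")   # 37,015 (study reports 37,014)

# The study's $11.4B/yr figure is an annual-equivalent value that also nets
# out the upfront investment, so it is below the simple 15.8 - 4.2 = 11.6.
```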
Will we seize the opportunity before us?
Imagine what the future of these 154 Central Valley communities could look like: instead of an exclusively agricultural economy, the state and private interests invest $27 million per disadvantaged community per year for ten years in clean industries and renewable energy, such as solar generation and storage. Revenues over the following 30 years would be $15.8 billion per year, with some 63,000 added jobs paying on average 67% more than the region’s typical agricultural jobs. There would still be plenty of land left over for green areas, aquifer recharge, wildlife corridors, and other opportunities, saving water and improving public health.
I often recommend combining solar panels with wildlife corridors and even with aquifer recharge projects. That is the perfect example of a multi-benefit project: it saves water, generates new revenue and better jobs, improves the environment, and protects neighbors’ health.
The hard reality is that agriculture is using more water than we have. Deciding what to incentivize to solve California’s agricultural problem is a major challenge ahead of us, but we have the opportunity to make it work for everyone.
By prioritizing sustainability and justice, and by combining smart agriculture with clean industries, we can increase the social, economic, and environmental resilience of rural communities and of agriculture itself. By incentivizing the creation of positive side effects, we can foster thriving local economies and communities where everyone, locally and in society at large, stands to benefit.
Realizing Maine’s Tremendous Offshore Wind Potential
A proposed offshore wind procurement bill in Maine would go a long way to enable the state to meet its climate and clean energy targets and become a national leader in floating offshore wind technology.
The newly updated legislation (LD 1895), which the legislature is considering this week, builds on recommendations from Maine’s Offshore Wind Roadmap and policies adopted by other leading states. It would require the Maine Public Utilities Commission (PUC) to conduct a competitive bidding process to procure 1,000 megawatts (MW) of offshore wind capacity by 2030 and 2,800 MW by 2035, enough to generate more than half of Maine’s electricity demand. Meeting these targets would strengthen Maine’s economy by creating high-quality jobs and helping stabilize energy costs. LD 1895 also would ensure that offshore wind is developed responsibly and equitably in the Gulf of Maine.
Here are five reasons why LD 1895 is so important for Maine:
Offshore wind is critical to achieving Maine’s climate and energy goals
Maine’s climate and clean energy laws are well-aligned with science-based targets for reducing heat-trapping emissions, and a recent Union of Concerned Scientists (UCS) report noted that Maine has one of the strongest climate action plans in New England.
That said, offshore wind development will be critical to meet state and regional climate and clean energy requirements. Recent studies project that offshore wind development in Maine could range from 500 MW to 1,000 MW in 2030 to 5,000 MW to 8,000 MW by 2050 (see reports by DNV, Evolved Energy Research, Synapse, E3 and Silkman).
With 2,800 MW installed and operating by 2040, offshore wind would provide more than half of Maine’s electricity demand, even if the state’s demand more than doubled as the state moves to replace fossil fuels with clean electricity for transportation and heating.
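A rough sanity check of that claim; the capacity factor and current-demand figures in this sketch are illustrative assumptions, not numbers from LD 1895 or the roadmap:

```python
# Sketch: why 2,800 MW plausibly covers "more than half" of demand.
# Assumed (illustrative): ~50% capacity factor for Gulf of Maine floating
# wind and ~12 TWh/yr of current Maine electricity demand.
capacity_mw = 2_800
capacity_factor = 0.50
hours_per_year = 8_760

generation_twh = capacity_mw * capacity_factor * hours_per_year / 1e6
print(f"~{generation_twh:.1f} TWh/yr generated")    # ~12.3 TWh

doubled_demand_twh = 12 * 2   # demand roughly doubles with electrification
print(f"share of doubled demand: {generation_twh / doubled_demand_twh:.0%}")  # ~51%
```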
Procurement targets are an effective tool
More than two decades of adopting and implementing renewable electricity standards in 30 states shows that procurement targets have been a key driver for deploying land-based wind and solar, achieving economies of scale, increasing technology innovation, and lowering costs.
At least 10 other states, including six in the Northeast, have adopted offshore wind procurement targets totaling 81,000 MW over the next 20 years, according to the American Clean Power Association (ACP). These targets range from 1,400 MW by 2030 in Rhode Island to 11,000 MW by 2040 in New Jersey to 25,000 MW by 2045 in California. While LD 1895’s 2,800 MW by 2035 procurement target is more modest, most of the other states with higher targets have much larger populations and greater energy demand. Most states also have interim targets that ramp up over time, which gives the industry more investment certainty and will be necessary to build out the offshore wind supply chain.
Adopting a procurement requirement would give Maine a much better chance to compete in the race with other states to develop offshore wind and related supply chain infrastructure. According to ACP, more than 51,000 MW of offshore wind is currently under development in the United States, 84 percent of which is on the East Coast. This includes 938 MW currently under construction, 18 projects in advanced development representing 16,564 MW, and 18 projects in early development totaling 33,875 MW. While only a few floating offshore wind projects are currently operating globally, the development pipeline more than doubled over the past year from 91,000 MW from 120 projects to 185,000 MW from 230 projects.
Developing floating offshore wind projects in the Gulf of Maine will be important to meet state and regional targets and build on the University of Maine’s leadership in developing the technology. While floating offshore wind is more challenging to develop than fixed-bottom projects, the Gulf of Maine has the highest, most consistent wind speeds on the East Coast. According to the National Renewable Energy Laboratory (NREL), floating offshore wind represents 63 percent of the total technical potential for offshore wind in the North Atlantic and 65 percent at the national level.
Maine can capitalize on falling costs and federal incentives
Like land-based wind and solar photovoltaics, the cost of offshore wind is projected to fall rapidly over time as the technology matures and the supply chain grows. The cost of fixed-bottom offshore wind projects has already fallen by 48 percent, from $162 per megawatt-hour (MWh) in 2010 to $84/MWh in 2021, due to development in other countries.
While the costs of floating offshore wind projects are currently higher than fixed-bottom projects, they are projected to be similar by 2035. NREL and DNV expect floating offshore wind costs to dip to $60 to $80/MWh by 2030 and $45 to $50/MWh by 2035. The Biden administration, which set a goal of 15,000 MW of floating offshore wind by 2035, also has set a price target for floating offshore wind power at $45/MWh by 2035, which is less than the cost of electricity from new natural gas plants.
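Expressed as simple percent changes (the second comparison is my own framing of the numbers above, not one made by NREL or DNV):

```python
# Sketch: the cost figures above as simple percent changes.
fixed_2010, fixed_2021 = 162, 84   # $/MWh, fixed-bottom offshore wind
decline = (fixed_2010 - fixed_2021) / fixed_2010
print(f"fixed-bottom decline, 2010-2021: {decline:.0%}")   # 48%

floating_2035_target = 45          # $/MWh, federal floating-wind price target
vs_2021 = (fixed_2021 - floating_2035_target) / fixed_2021
print(f"2035 floating target vs 2021 fixed-bottom: {vs_2021:.0%} lower")  # ~46%
```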
Incentives in the historic federal climate bill passed last fall will help lower the costs even further. In fact, the timing of Maine’s offshore wind procurement targets in LD 1895 is designed to take advantage of federal tax credits, which could lower the capital costs of projects built in the Gulf of Maine by at least 30 percent. Funding also is available to help build out Maine’s supply chain and offshore wind component manufacturing, spur investments in port infrastructure, and encourage transmission planning.
While the cost of offshore wind projects has increased in the past two years due to inflation, rising commodity prices, and supply chain pressures, UCS expects these effects to be temporary. Land-based wind similarly experienced temporary cost increases during the 2008 economic recession. But since 2009, the cost of land-based wind has dropped by more than two-thirds as inflationary pressures eased, the technology advanced, and the US-based supply chain matured.
Offshore wind can help stabilize energy costs
In addition to becoming increasingly cost-competitive over time, offshore wind can help stabilize energy costs for households and businesses by reducing regional reliance on imported gas and oil. Dependence on gas for about half of the power generation in New England resulted in an 83-percent supply-rate increase in Maine last year and a 49-percent supply-rate increase this year, representing a $32 jump in a typical household’s monthly electricity bill. Offshore wind power has no fuel costs, so power costs are more stable and predictable over time than fossil fuel-fired power. Wind also can help protect ratepayers from price volatility caused by such events as the Russian war in Ukraine or extreme weather. Moreover, offshore wind would produce more electricity during the winter heating months, when New England energy demand is greatest.
Offshore wind development resulting from LD 1895 also could put downward pressure on electricity and gas prices—and save ratepayers money. Electricity generation is dispatched at the regional level according to increasing costs. Since wind and solar have no fuel costs and low operating costs, utilities typically dispatch them first. Gas generators, on the other hand, have relatively high fuel and operating costs, so utilities typically dispatch them last to meet demand, thereby setting the market price of electricity. Thus, by reducing the need for more expensive gas generators that are on the margin, offshore wind could lower wholesale electricity prices.
- For example, a 2020 NREL study found that deploying 7,000 MW of offshore wind in grid operator ISO-New England’s (ISO-NE) transmission area would reduce regional wholesale electricity prices by 13 percent and result in total electricity production cost savings of as much as 18 percent.
- An analysis by ISO-NE found that if 1,600 MW of offshore wind had been online during the 2018 cold snap, New England consumers would have saved more than $80 million, carbon dioxide emissions would have been 11 percent lower, and more reserves would have been available to maintain a stronger reliability margin.
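To make the merit-order logic above concrete, here is a toy dispatch sketch; the generator fleet, costs, and demand are hypothetical, chosen only to illustrate the mechanism:

```python
# Sketch: a toy merit-order dispatch showing how zero-fuel-cost wind can
# lower the market-clearing price. All generators and prices are hypothetical.
def clearing_price(generators, demand_mw):
    """Dispatch cheapest-first; the last unit needed sets the price."""
    met, price = 0, 0.0
    for cost, capacity in sorted(generators):   # ascending marginal cost
        take = min(capacity, demand_mw - met)
        if take > 0:
            met += take
            price = cost
        if met >= demand_mw:
            break
    return price

fleet = [(0, 1_200), (30, 2_000), (60, 3_000), (120, 2_000)]  # ($/MWh, MW)
print(clearing_price(fleet, 4_500))             # 60: mid-cost gas sets the price

fleet_with_wind = fleet + [(0, 1_600)]          # add 1,600 MW of offshore wind
print(clearing_price(fleet_with_wind, 4_500))   # 30: pricier gas pushed off the margin
```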
As someone who lives in a small coastal fishing community near Maine’s Acadia National Park, I can see up close the importance of developing offshore wind in a responsible and equitable way that includes strong labor and environmental standards and protections for the fishing industry, local communities, and tribes. The project labor agreements and labor “peace” agreements in LD 1895 are supported by the BlueGreen Alliance (of which UCS is a founding member) and have been adopted by other states to ensure offshore wind projects are built with strong labor standards and are able to maximize federal incentives available through the Inflation Reduction Act and infrastructure laws.
LD 1895 wisely includes tax incentives that would encourage wind development outside of Lobster Management Area 1 and offset the modest additional costs of locating projects further offshore. In addition, the bill would provide funding for local fishing communities, which would create new jobs and tax revenue, and for independent scientific research to determine the best way for Maine to embrace the vast benefits of wind power while protecting wildlife, fisheries and the environment.
LD 1895 also guarantees a place at the table for federally recognized and state acknowledged tribes and would require developers to consult with tribes every step of the way. Further, the bill includes workforce development, employment, and contracting opportunities, as well as financial and technical assistance to support robust monitoring of the fisheries that local tribes care about most.
While these provisions are a good start, more engagement with the tribes is clearly needed to avoid, minimize, and compensate for any negative impacts from offshore wind development. Tribal nations must be included in the development, permitting, and management of offshore wind projects. Likewise, more data and research are needed to ensure impacts to cultural resources are adequately considered in the US Bureau of Ocean Energy Management’s suitability analysis to identify potential offshore wind leasing areas.
Pass this bill!
It’s time to make offshore wind a reality in Maine, and LD 1895 offers a path forward for responsible and equitable development of this key resource. If you’re a Maine resident, you can tell your state senator or representative to support LD 1895 by clicking here.
California’s Water Rights System is Inequitable, Inadequate, and Possibly, About to Change
During a California State Assembly informational hearing earlier this year, there seemed to be consensus that California’s 19th century water rights system is not well suited to the social context and climate of the 21st century. Change is necessary and may be coming.
This outdated water rights system is based on historic and continued disenfranchisement and dispossession. It has persisted for more than a century, despite known inequities and increasing inadequacies in the face of climate change. It persists because powerful actors benefit from the current system and its haphazard enforcement, and they vehemently resist any proposed changes.
They can be convincing. After all, water rights seem overwhelming and complex. I’ve studied water policy in California for more than a decade, and water rights was something I had always left for the lawyers—people like Doug Obegi, senior attorney at the Natural Resources Defense Council, who explains, “…we’ve made water policy more complicated than it really is, and that has made it harder for the public to engage on these issues, because it gets cloaked in this kind of mythical sense that it’s too complicated.”
Indeed, “it’s complicated” is not the end of this story; it’s only the beginning.
Note: As a California transplant and a white person of settler-origin, it’s important to reflect on whose narratives have dominated the discussion of water rights. The history of the water rights system is one of state-sponsored violence against Native Americans and the structural racism encoded into our legal system. It would be irresponsible to not acknowledge that. I draw heavily from several episodes of the podcast, West Coast Water Justice, which is produced by the Indigenous-led non-profit Save California Salmon. In this post, I’m using “Native American” or “Indigenous” throughout based on the Union of Concerned Scientists’ style guide, but these terms are imperfect and not always preferred. Where possible I use an individual’s specific Indigenous community name. Trigger warning below for settler violence and genocide.
A primer on California’s water rights system and its settler roots
To understand why today’s water rights are so convoluted and steeped in bigotry, we must go back more than a century.
First, the water rights system is designed to distribute surface water: the water that flows through the state’s 189,454 miles of rivers and its countless streams and creeks. Surface water rights determine who uses surface water, where, and for what purpose. California’s current water rights system is tied to the United States’ conquest of California from Mexico (1848) and the enormous influx of settlers during the Gold Rush (1848-1855). A more thorough history would include the settling and colonization of California by the Spanish (1500s) and Mexican (early 1800s) governments, but for brevity, I focus here on the intentional and duplicitous actions of the California state government (established in 1850) and its co-conspirator: the United States federal government.
California has both types of water rights systems used in the United States. Riparian rights were inherited and adapted from British common law and are primarily used in rainy eastern states. A riparian area is the place where land and water meet; riparian water rights are linked to owning the land that borders water. Riparian landowners may only use the water on the land that is adjacent to the water source.
White settlers of the west invented a second system called appropriative rights during the 19th century. Appropriative rights do not require the holder to own land adjoining a water source, the way riparian rights do. They allow the holder to divert water from one point, and use (or “appropriate”) it at a different point if beneficially used. Up until 1914, appropriative water rights could be claimed in California by simply posting notice of a claim–known as “staking”–and then putting water to use. (Literally, this often meant just hammering a sign onto a post by the river.)
An appropriative right originates when the holder can physically control water for beneficial use. Beneficial use was originally defined as agricultural, industrial, or household use, and more recently has been modified to include recreation and ecosystems. The doctrine of prior appropriation means that an appropriative right is based on a ‘first in time, first in right’ priority system. Unlike riparian rights, which stand whether or not water is used, appropriative rights holders must show continual water use. An appropriative right can be lost after five years of non-use if it is not contracted out (transferred) to another entity.
The 1886 California Supreme Court ruling Lux v. Haggin formally blended the two systems with riparian rights treated as (mostly) more senior than appropriative rights, although not in all cases. In practice, this means “water generally went to those with the most creative lawyers and engineers” as journalist Mark Arax writes in The King of California.
Both of these systems were designed in concert with federal and state policies to benefit white settlers and strengthen the state and federal governments’ hold on newly claimed but unceded territories. The 1862 Homestead Act, for example, allowed adult citizen heads of households to gain ownership of a piece of land by occupying and “improving” it for a period of five years. Of course, Native Americans were not eligible for US citizenship until 60 years later, in 1924. By incentivizing the occupation of land, ownership by force, and exploitation of natural resources, the state and federal governments promoted the violent dispossession and genocide of Native Americans from the land.
The system was designed to keep water out of Indigenous hands
The system’s origin in dispossessing Native Californians of their land and reallocating it to settler colonizers is a historic and ongoing violence. The state considers pre-1914 appropriative claims to be among the most senior water rights. From the 1850s through today, those rights have enabled settlers to mine and farm farther and farther inland, away from sources of water. The time-based nature of appropriative rights leads to a system of prioritization in which the oldest and most senior rights are held almost exclusively by white men.
Native Americans in California, despite living there since time immemorial (quite literally ‘first in time’), have been systematically dispossessed of their land and water. One of the first pieces of legislation passed by the inaugural California legislature was the Act for the Government and Protection of Indians (1850). Among other heinous things, the Act facilitated land dispossession and the indentured servitude of minors and adults to white settlers, and it denied Native Americans basic legal rights such as due process. The subsequent decades, marked by the California Indian Wars (1850-1880), saw a state-sponsored genocidal campaign to further terrorize and dispossess tribes, including the payment of bounties for murdered Native Americans.
Between 1851 and 1852, 18 “peace and friendship” treaties were “negotiated” by the federal government with Native Americans in California. These treaties traded the federal government’s –often undesirable– proposed reservation lands for any claim Native Americans might hold to their land in California.
Map of California’s Unratified Indian Reservations 1851-52, on display in Washington, DC, as part of the Nation to Nation: Treaties Between the United States and American Indian Nations exhibit at the National Museum of the American Indian, open until January 2025. Photo by Amanda Fencl.Why would any Indigenous community sign onto such a bad deal? UC Davis Professor Beth-Rose Middleton Manning explains here that tribes were under a near-constant threat of violence, including in the treaty “negotiation” process. Chief Caleen Sisk of the Winnemem Wintu Tribe describes, for example, how her relatives narrowly escaped a treaty signing meeting turned mass murder by jumping into a river. The Indigenous treaty signers understood that they would at least secure a place to live away from settler violence, even if on smaller, less desirable, pieces of land far from their ancestral homes.
However, the state of California actually lobbied the federal government against ratifying these 18 treaties because it thought the tribes were going to get too much land, especially in the Central Valley. Thus, the treaties were never ratified by the US Senate; the land was never legally taken from the tribes, yet they remained landless in practice. The treaties’ existence was then kept secret until 1905. Only after decades of legal battles did any of the tribes receive any kind of compensation for their unceded territory.
One inequity resulting from this era, including from the unratified “secret” treaties, is that the state does not adequately recognize Native American water rights. Countless individuals and businesses have since gained title to stolen land and secured water rights without recognizing Indigenous sovereignty. For anyone who wants to learn more, California Indian Legal Services has a helpful three-part overview of tribal water rights in California, and Save California Salmon developed an “Advocacy and Water Protection in Native California” curriculum that covers much more than water rights.
Native American scholars like Dr. Cutcha Risling Baldy (Hupa, Yurok, Karuk), Dr. Brittani Orona (Hupa, Hoopa Valley Tribe), and Dr. Andrew Curley (Diné) have illustrated how Indigenous people are in “complicated and fundamentally entangled political landscapes” by having to assert sovereignty and seek recognition and water rights within the United States’ settler-colonial legal system.
California formally apologized in 2019 for genocide, but the question of making water rights right was not considered then.
The system continues to privilege white settlers
How much water gets diverted from rivers, and who gets to use it, depends on the State Water Resources Control Board. I will call them “the Board” from here on out. They implement and enforce water rights prioritizations.
The 1913 Water Commission Act established a Water Commission with the authority to permit new appropriative rights as of 1914, the Act’s effective date. This essentially created two classes of appropriative water rights holders: 1) pre-1914 (“seniors”) and 2) post-1914 (“juniors”). The two classes face different rules, different levels of oversight, and most importantly different levels of curtailments (or restrictions on water use) during periods of shortage; the most junior right holders are curtailed first in California. If you follow water issues in this state, you may have read about severe curtailments in recent years.
When there is not enough water to go around, the Board has to implement and enforce the water rights priority system, and curtailments are one of its tools. The Board can only curtail water rights holders after a Governor-declared drought emergency or a “critically dry” year (which only happens after two previous dry years). Courts have ruled that the Board cannot curtail senior water rights solely on a finding that there is not enough water in the system to meet those rights. They can, however, curtail those rights once they have enacted an emergency regulation that sets a minimum flow in the river.
In other words, declaring a drought emergency like Governor Gavin Newsom did in 2021, while a prerequisite, is not always enough to stop senior rights holders (again, held almost exclusively by white men) from continuing to withdraw water even during extremely dry years. A new report last month summarizes how “limited and cumbersome options for enforcing curtailments allow bad actors to cause irreparable harm while risking only modest financial repercussions”.
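The priority system itself is simple enough to sketch in a few lines of code; the rights, dates, and supply below are hypothetical, and real curtailments involve far more process than this:

```python
# Sketch: "first in time, first in right" as a toy allocation. All rights
# and supply figures are hypothetical placeholders.
from typing import NamedTuple

class Right(NamedTuple):
    holder: str
    priority_year: int   # earlier year = more senior
    claim_af: float      # claimed diversion, acre-feet

def allocate(rights, supply_af):
    """Fill senior claims first; juniors are curtailed when supply runs out."""
    allocations = {}
    remaining = supply_af
    for r in sorted(rights, key=lambda r: r.priority_year):
        granted = min(r.claim_af, remaining)
        remaining -= granted
        allocations[r.holder] = granted
    return allocations

rights = [
    Right("pre-1914 senior", 1905, 50_000),
    Right("post-1914 junior A", 1930, 40_000),
    Right("post-1914 junior B", 1975, 30_000),
]
# Claims total 120,000 AF against only 70,000 AF of supply (over-allocation):
print(allocate(rights, supply_af=70_000))
# {'pre-1914 senior': 50000, 'post-1914 junior A': 20000, 'post-1914 junior B': 0}
```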
Another challenge: the Water Commission did not have the authority to decide on water rights established prior to 1914, and the Board, its current incarnation, still lacks the power to adequately regulate pre-1914 senior water rights. The most senior rights holders do not have to get a permit or license from the state of California to withdraw water. While many of their water diversions have been vetted through claims filed with the courts, not all have been verified by the Board. The Board generally knows who the pre-1914 senior water rights holders are, but most of the essential water rights claims documentation sits in inaccessible paper records. “So we simply don’t know who can legally use water at a given time and place,” as one LA Times opinion writer cautions. Furthermore, the Board relies most heavily on annual information self-reported by the water rights holders themselves to inform its calculations of how much water is available for use.
Still with me? This was a lot to process, and it is probably why water rights remain a thorny mess that even experts and water enthusiasts hesitate to engage with. For me, knowing the history is all the more motivation to improve the water rights system.
California has made some of its biggest water policy changes during times of crisis. What changes are possible during a year with record-breaking rainfall and an impending “big melt”?
California’s antiquated water rights system is not ready for climate change
A 2015 study estimated that post-1914 appropriative water right allocations were approximately five times the state’s average annual runoff. In the state’s major basins, the allocations account for up to 1,000% of natural surface water supplies. This means that what water rights holders can divert (on paper) in California is far, far more than how much water actually exists.
Given the increased whiplash between drought and flood extremes, only rarely does any rights holder get 100% of their allocation. This over-allocation, combined with the climate-driven uncertainty of water availability, is no match for “a state [water rights] regulatory system dramatically unprepared to address chronic water shortages and an ecosystem collapse,” according to a Sacramento Bee investigation.
In theory, the water rights prioritization system is intended to prevent conflicts by laying out a clear order of priority for whose water use gets cut first in times of scarcity. In practice, sometimes there is not enough water to fulfill even the most senior water rights. In an increasingly extreme climate, California state officials will face increasingly extreme challenges to implementing even the current, unjust prioritization system.
There are several commonsense changes already suggested by others that could help fix this system. None are nearly as bold as the call for outright abolition and starting over, but they are long overdue modernizations.
The proposals currently before the California state legislature to modernize the water rights system do not begin to address the historic injustices and current inequities wrought by the water rights system, but they are critical updates to make informed water management decisions and build climate resilience into our future.
Ask a Scientist: Calling Out the Companies Responsible for Western Wildfires
The US wildfire season used to last about four months, beginning in late summer or early autumn. These days, it stretches six to eight months, according to the US Forest Service, and in some places it’s now a year-round affair.
In just five years, from 2018 through 2022, wildfires scorched 38.3 million acres across the country. That’s nearly 60,000 square miles, slightly bigger than the state of Georgia. Last year alone, nearly 69,000 wildfires burned 7.6 million acres, more than 40 percent of which were in Alaska.
Not only is the fire season longer, wildfires are burning larger areas more severely and at higher elevations. The average acreage that has burned every year since 2000—7 million—is more than double the annual average of 3.3 million acres in the 1990s, even though the annual average of 70,025 wildfires a year since 2000 is 12 percent less than in the 1990s.
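Those figures are straightforward to check; in the sketch below, only the 1990s fire-count average is derived from the 12 percent figure rather than sourced directly:

```python
# Sketch: checking the acreage statistics quoted above.
SQ_MILES_PER_ACRE = 1 / 640   # 640 acres per square mile

burned_2018_2022 = 38.3e6     # acres burned nationwide, 2018-2022
print(f"{burned_2018_2022 * SQ_MILES_PER_ACRE:,.0f} sq mi")  # ~59,844, about Georgia-sized

avg_since_2000, avg_1990s = 7.0e6, 3.3e6   # average acres burned per year
print(f"burned-area ratio: {avg_since_2000 / avg_1990s:.1f}x")  # ~2.1x

# Implied 1990s fire-count average, derived from "12 percent less":
fires_since_2000 = 70_025
implied_1990s = fires_since_2000 / (1 - 0.12)
print(f"implied 1990s average: ~{implied_1990s:,.0f} fires/yr")  # ~79,574
```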
There are a number of reasons why there has been so much more wildfire destruction this century, particularly in the western United States and Canadian southwest. Encroaching development in fire-prone areas and widespread fire suppression are among them. But another major culprit is climate change, which has intensified the heat and drought that have always been factors in western North America.
That climate change obviously didn’t just happen on its own. It mainly comes from burning fossil fuels, and a new Union of Concerned Scientists (UCS) peer-reviewed study—published on May 16 by Environmental Research Letters—calculates just how much of the acreage burned in forest fires in the western United States and southwestern Canada can be attributed to the carbon emissions from the world’s largest fossil fuel companies and cement manufacturers and their products.
The new report is the latest in a series based on a 2014 peer-reviewed study by Richard Heede, director of the Climate Accountability Institute, which identified 90 companies—83 fossil fuel producers and seven cement manufacturers—that were responsible for nearly two-thirds of all industrial carbon dioxide and methane emissions between 1854 and 2010. Just seven private and state-owned companies—BP, Chevron, ExxonMobil, Gazprom, the National Iranian Oil Company, Saudi Aramco, and Shell—accounted for a whopping 18.7 percent of total emissions.
Since that 2014 study, which laid the foundation of what is called climate source attribution science, UCS scientists have collaborated with Heede on two other studies that pinpointed the major carbon producers’ culpability for specific climate change-related trends.
In 2017, UCS climate scientist Brenda Ekwurzel led a study that found that the 90 companies’ emissions contributed roughly 57 percent of the increase in atmospheric carbon dioxide, as much as half of the jump in global mean surface temperature (the average of the sea surface temperature and the air temperature over land), and as much as a third of global sea level rise since the mid-1850s. Ekwurzel was joined by Heede, then-UCS climate scientist Peter Frumhoff, and four other scientists.
Two years later, UCS climate scientist Rachel Licker conducted a study with Ekwurzel, Frumhoff, Heede, and two other scientists to determine the major carbon producers’ contribution to ocean acidification, caused when oceans absorb increasingly more carbon dioxide from the atmosphere, disrupting an ocean environment that had been relatively stable for tens of millions of years. Licker et al. found that the companies—down to 88 due to mergers—were responsible for about 55 percent of ocean acidification between 1880 and 2015.
The new study, focusing on the 88 carbon producers’ role in western North American wildfires, was conducted by six UCS Climate and Energy Program staff members and University of California Merced climatologist John Abatzoglou. The UCS scientists included Kristina Dahl—who led the study—Licker, Delta Merner, Pablo Ortiz, and Carly Phillips. They found that 48 percent of the increase in the region’s fire-friendly conditions since 1901—specifically drier land and vegetation—can be traced to the 88 companies’ carbon emissions. They also calculated that the companies’ emissions have been responsible for 37 percent of the burned forest area in the region since 1986.
Dahl, Ortiz, and Phillips—along with their colleagues Alicia Race and Shana Udvardy—also published a brief report based on the Environmental Research Letters study. Titled The Fossil Fuels Behind Forest Fires, it provides a concise overview of the peer-reviewed study and makes policy recommendations for the Biden administration and Congress.
Just before the Environmental Research Letters study came out, I had a chance to ask Phillips some questions to provide more context. Phillips, who has a doctorate in ecology from the University of Georgia, was a Kendall fellow at UCS from 2018 to 2020 and then a researcher at the University of Victoria in British Columbia. She rejoined UCS earlier this year as a member of the Science Hub for Climate Litigation, just in time to contribute to the study.
EN: Welcome back to UCS. Why don’t you start by explaining why it is so important for scientists to figure out the role these companies and their products have played in creating the climate crisis.
CP: Major fossil fuel companies and their trade groups have known since at least the 1960s that burning fossil fuels would dramatically reshape our climate. In the 1970s and ’80s, Exxon’s own scientists predicted—with startling accuracy—how global temperatures would increase, as well as the consequences for humanity. Despite having this knowledge and the opportunity to change course to avoid catastrophic warming, fossil fuel companies took a page from the tobacco industry’s playbook and orchestrated a decades-long campaign to deceive the public, cast doubt on climate science, and delay climate action at all levels. But just like Big Tobacco in the 1990s, the fossil fuel industry must be held accountable for the damage it has caused. Dozens of cities, counties, and states across the country have filed lawsuits to do just that.
At UCS, our Science Hub for Climate Litigation supports a broad range of climate accountability lawsuits by fostering litigation-relevant research, connecting litigators to the scientists and scientific research that can best inform their cases, and providing training for scientists to engage with the legal system. Studies such as our new one, which directly link climate change impacts to the industry’s emissions, provide a critical piece of the accountability puzzle and may help to inform existing and future cases.
EN: OK. But how was Richard Heede able to calculate that, at the time, 90 companies were responsible for nearly two-thirds of the carbon emissions between 1854 and 2010? What went into that calculation?
CP: Great question. These 90 entities, now 88 due to mergers, are oil and gas producers, coal companies, and cement manufacturers. Rick Heede combined publicly available records about extracting and producing carbon-intensive materials with scientific data about the amount of carbon dioxide and other heat-trapping gases released into the atmosphere from manufacturing and burning each of these products. His calculations account for emissions from such processes as venting as well as the fact that creating some products, such as steel, can actually store some carbon. As a result, he was able to nail down the magnitude of what has been going into the atmosphere and its origin.
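In outline, that accounting reduces to production volumes multiplied by per-fuel emission factors; every number in this sketch is a hypothetical placeholder, not Heede's data:

```python
# Sketch: the shape of carbon source attribution. All values below are
# hypothetical placeholders; Heede's analysis uses decades of real
# production records and published per-fuel emission factors.
EMISSION_FACTORS = {      # tCO2e per unit produced (illustrative)
    "oil_bbl": 0.43,      # per barrel, including combustion of the product
    "gas_mcf": 0.055,     # per thousand cubic feet
    "coal_t": 2.4,        # per metric ton
}

def company_emissions(production):
    """Sum production volumes times per-fuel emission factors."""
    return sum(EMISSION_FACTORS[fuel] * qty for fuel, qty in production.items())

company_a = {"oil_bbl": 1.0e9, "gas_mcf": 5.0e9}   # cumulative production (placeholder)
emissions = company_emissions(company_a)
global_industrial = 1.5e12                          # cumulative tCO2e (placeholder)
print(f"{emissions:.2e} tCO2e, {emissions / global_industrial:.2%} of global")
```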
EN: You and your colleagues point out that the increase in burned acreage across the western United States and southwestern Canada over the last few decades is partly due to a rise in what’s called the “vapor pressure deficit.” It’s a bit counterintuitive. Climate change heats up the air, which then can hold more moisture, but the fact that the air can hold more moisture dries out soil and vegetation, making them more flammable.
CP: Right. Vapor pressure deficit (VPD) is not super-intuitive. It measures the difference between the amount of moisture in the air and the amount of moisture air could hold if it were totally saturated. The relationship between VPD and wildfire can be explained by three key concepts. One you already mentioned: Warmer air can hold more moisture. Another is “diffusion,” when molecules move from areas of high concentration to areas of lower concentration, similar to how food coloring disperses through a glass of water. And the third is “gas exchange.” When plants open their pores to exchange oxygen for carbon dioxide during photosynthesis, they lose water to the atmosphere because the concentration of moisture in a leaf is usually greater than that of the surrounding air. When VPD is high, meaning the air could hold a lot more moisture, water moves more quickly from the plant to the atmosphere, which ultimately dries out the plant.
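For readers who want to see VPD in numbers, here is a minimal sketch using the standard Tetens approximation for saturation vapor pressure (a textbook formula, not necessarily the method used in the study):

```python
# Sketch: computing vapor pressure deficit (VPD) with the Tetens approximation.
import math

def saturation_vp_kpa(temp_c: float) -> float:
    """Saturation vapor pressure (kPa) via the Tetens approximation."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vpd_kpa(temp_c: float, rel_humidity_pct: float) -> float:
    """VPD = what the air could hold minus what it actually holds."""
    e_sat = saturation_vp_kpa(temp_c)
    e_actual = e_sat * rel_humidity_pct / 100
    return e_sat - e_actual

# Same 30% relative humidity, but warmer air leaves a much bigger deficit,
# pulling moisture out of plants and soil faster:
print(f"{vpd_kpa(20, 30):.2f} kPa")   # ~1.64 kPa at 20 C
print(f"{vpd_kpa(35, 30):.2f} kPa")   # ~3.94 kPa at 35 C
```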
EN: The Fossil Fuels Behind Forest Fires report briefly mentions the fact that wildfires threaten the health and well-being of the folks who live in the region. Wildfires obviously can destroy homes and businesses, but even when fire engulfs unpopulated land, wildfire smoke can have a profound impact on public health miles away.
CP: Exactly. Wildfire smoke is extremely dangerous and particularly harmful for vulnerable populations, including seniors, children, low-income community residents, and outdoor workers. A day of exposure to intense wildfire smoke is roughly equivalent to smoking seven cigarettes.
The smoke is full of tiny particulates that can pass through the lungs and into the bloodstream, which can lead to a higher incidence of asthma, heart disease, and premature death from such respiratory diseases as COVID-19. For pregnant people, smoke exposure increases the risk of pre-term birth.
Public health experts say that there’s no safe amount of exposure to wildfire smoke. For people who work outdoors, that can pose a huge challenge. They may have to choose between protecting their health and protecting their livelihood. Staying inside, meanwhile, is not necessarily much safer. Smoke has a way of getting into homes and businesses, so it is very difficult to avoid. Some people can afford an air purifier, but people who can’t have to breathe smoke-filled air. And, like you said, the impact may be most acute in the area surrounding a fire, but the full effect is definitely global. Wildfire smoke from the West Coast in 2020 degraded air quality in your hometown of Washington, D.C., and in cities as far away as Western Europe. Particulates from Western wildfire smoke have even been recorded on Greenland ice sheets.
EN: The Fossil Fuels Behind Forest Fires report offers recommendations for the Biden administration and Congress to hold carbon producers accountable. What should they do? What are your top recommendations?
CP: There are lots of things that the federal government could do to hold corporations accountable. For starters, the Securities and Exchange Commission should finalize, implement, and enforce strong rules mandating standardized corporate climate disclosure. These rules would require companies to report carbon emissions from not just their operations, but also from the use of their products. Second, Congress and the Justice Department should investigate the fossil fuel industry’s ongoing climate disinformation campaigns like the House Oversight Committee began to do last Congress, so that the general public can better understand exactly how the industry deceived everyone. Third, communities and litigators should pursue legal routes to hold fossil fuel polluters accountable for their disinformation campaigns and their scientifically proven contributions to climate damage. Just last month, the US Supreme Court cleared the way for cases filed in California, Delaware, Hawai’i, Maryland, New Jersey, and Rhode Island to move forward in state courts after more than five years of the fossil fuel industry defendants’ procedural delay tactics.
EN: Last January, the US Department of Agriculture announced that $490 million from the Inflation Reduction Act (IRA) will be dedicated to projects to reduce fire risks in seven western states. That funding is on top of the $440 million the Bipartisan Infrastructure Law earmarked for wildfire mitigation efforts. What’s your take on those efforts, and what else should the administration, Congress and state governments do to prevent wildfires?
CP: Given how western forests evolved alongside fire, wildfires will never be fully preventable—nor should they be. But we can do a lot to reduce the risks to communities.
The funding in the IRA and infrastructure law is a critical first step to reduce the risk of catastrophic fire in some areas of the US west by employing a range of strategies like removing vegetation, thinning dense stands of trees, and conducting prescribed burns, which is when a fire is deliberately set under specific weather conditions to manage a forest. But it’s important to stress that these initiatives are just a first step. Reducing the incidence and risk of devastating wildfires and learning to coexist with fire will require a yearly and multigenerational commitment to forest management.
Beyond the IRA and infrastructure law projects, federal agencies could promote improvements in at-risk areas by, among other things, establishing “defensible space” around property and requiring developers to use fire-proof and fire-resistant building materials. They also should prioritize these kinds of efforts in the highest-risk, lowest-resourced communities.
Congress, meanwhile, could provide and maintain funding for at-risk forests and scale up risk-reduction treatments, including forest thinning followed by prescribed burning. Congress can also remove obstacles to Indigenous-led land management, especially cultural burning, which has shaped western forests for millennia.
Beyond these forest-centric strategies, Congress should pass legislation to rapidly reduce heat-trapping emissions and ensure an equitable and just future. But that should go without saying.