Eos
Science News by AGU

Seaweed Surges May Alter Arctic Fjord Carbon Dynamics

Fri, 05/16/2025 - 13:22
Source: Journal of Geophysical Research: Oceans

In high-latitude Arctic fjords, warming seas and reduced sea ice are boosting seaweed growth. This expansion of seaweed “forests” could alter the storage and cycling of carbon in coastal Arctic ecosystems, but few studies have explored these potential effects.

Roy et al. present a snapshot of the carbon dynamics of seaweed in a fjord in Svalbard, a Norwegian archipelago in the High Arctic, highlighting key comparisons between different seaweed types and between various fjord zones. The findings suggest that warming-driven seaweed growth could lead to the expansion of oxygen-deficient areas in fjords, potentially disrupting local ecosystems.

A team from the National Centre for Polar and Ocean Research in Goa, India, led the Indian Arctic expeditions in 2017, 2022, and 2023. On these expeditions, researchers collected 20 seaweed samples and 13 sediment samples from a variety of locations across Kongsfjorden, a nearly 20-kilometer-long fjord in Svalbard. Then they analyzed the signatures of stable carbon isotopes and lipids (biomolecules made mostly of long hydrocarbon chains) in the seaweed samples.

They found that red, green, and brown seaweeds had different stable carbon isotope fingerprints, reflecting their distinct ways of obtaining carbon from their surroundings. However, the different seaweeds had similar lipid signatures, suggesting that they developed similar lipid synthesis processes in their shared Arctic fjord environment.

The researchers also detected differences in carbon isotope and lipid signatures in sediments from different parts of the fjord. These data suggested that inner-fjord sediments may contain organic matter from a variety of sources, including seaweed, fossilized carbon, and land plants imported by melting glaciers or surface runoff, whereas organic matter in outer-fjord sediments has a larger proportion of seaweed lipids.

Notably, sediment samples collected beneath areas of high seaweed growth showed chemical evidence of low-oxygen conditions, possibly because of microbes consuming oxygen while feeding on seaweed. If these microbes are the cause of the low-oxygen conditions, continued warming-driven growth of seaweed forests could lead to expansion of oxygen-starved zones in Kongsfjorden and other High Arctic fjords, potentially destabilizing these ecosystems, the researchers say. (Journal of Geophysical Research: Oceans, https://doi.org/10.1029/2024JC021900, 2025)

—Sarah Stanley, Science Writer

Citation: Stanley, S. (2025), Seaweed surges may alter arctic fjord carbon dynamics, Eos, 106, https://doi.org/10.1029/2025EO250187. Published on 16 May 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Revised Emissions Show Higher Cooling in 10th Century Eruption

Fri, 05/16/2025 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Geophysical Research Letters

Using recent improvements in our understanding of volcanic emissions, as well as comparisons to ice core measurements of non-sea-salt sulfur, Fuglestvedt et al. [2025] developed revised estimates of the emissions of the Eldgjá eruption. These sulfur and halogen emission estimates were incorporated in an atmosphere/climate model simulation of the 10th century.

The resulting simulations show higher aerosol optical depth and more cooling during the eruption than predicted previously, and the simulated effects on the ozone layer show depletions related to halogen emissions. The larger amount of cooling improves the match to tree ring temperature proxies. The work demonstrates that improved emission estimates resolve past disagreements between the cooling simulated by an atmosphere/climate model and tree-ring-based temperature records, providing new insight into the consequences of a volcanic eruption 1,000 years ago.

Citation: Fuglestvedt, H. F., Gabriel, I., Sigl, M., Thordarson, T., & Krüger, K. (2025). Revisiting the 10th-century Eldgjá eruption: Modeling the climatic and environmental impacts. Geophysical Research Letters, 52, e2024GL110507. https://doi.org/10.1029/2024GL110507

—Lynn Russell, Editor, Geophysical Research Letters

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

New Global River Map Is the First to Include River Bifurcations and Canals

Thu, 05/15/2025 - 13:01
Source: Water Resources Research

Global river datasets typically represent rivers as single downstream paths that follow surface elevation, so they often miss branching river systems such as those found on floodplains, in deltas, and along canals. Forked, or bifurcated, rivers also often flow through densely populated areas, so mapping them at scale is crucial as climate change makes flooding more severe.

Wortmann et al. aimed to fill the gaps in existing global river maps with their new Global River Topology (GRIT) network, the first branching global river network that includes bifurcations, multithreaded channels, river distributaries, and large canals. GRIT uses a new digital elevation model with improved horizontal resolution of 30 meters, 3 times finer than the resolution of previous datasets, and incorporates high-resolution satellite imagery.

The GRIT network focuses on waterways with drainage areas greater than 50 square kilometers and bifurcations on rivers wider than 30 meters. GRIT consists of both vector maps, which use vertices and pathways to display features such as river segments and catchment boundaries, and raster layers, which are made up of pixels and capture continuously varying information, such as flow accumulation and height above the river.
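For readers who handle such datasets, the sketch below shows how a branching network’s vector and raster components might be queried side by side. The file names and attribute columns are placeholders for illustration, not GRIT’s actual schema.

```python
# Sketch: querying a branching river network's vector and raster layers.
# File, layer, and column names are hypothetical placeholders.
import geopandas as gpd
import rasterio

# Vector component: river segments with node-type attributes.
segments = gpd.read_file("grit_segments.gpkg")

# Bifurcations are nodes where one upstream channel feeds two
# downstream channels; here assumed to be flagged in an attribute.
bifurcations = segments[segments["node_type"] == "bifurcation"]
print(f"{len(bifurcations)} bifurcations mapped")

# Raster component: continuously varying fields such as flow accumulation.
with rasterio.open("grit_flow_accumulation.tif") as src:
    acc = src.read(1)  # pixel values represent upstream drainage area
    print("max flow accumulation:", acc.max())
```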

In total, the effort maps approximately 19.6 million kilometers of waterways, including 818,000 confluences, 67,000 bifurcations, and 31,000 outlets—6,500 of which flow into closed basins. Most of the mapped bifurcations are on inland rivers, with nearly 30,000 in Asia, more than 12,000 in North and Central America, nearly 10,000 in South America, and nearly 4,000 in Europe.

GRIT provides a more precise and comprehensive view of the shape and connectivity of river systems than did previous reference datasets, the authors say, offering potential to improve hydrological and riverine habitat modeling, flood forecasting, and water management efforts globally. (Water Resources Research, https://doi.org/10.1029/2024WR038308, 2025)

—Rebecca Owen (@beccapox.bsky.social), Science Writer

Citation: Owen, R. (2025), New global river map is the first to include river bifurcations and canals, Eos, 106, https://doi.org/10.1029/2025EO250173. Published on 15 May 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

An Ancient Warming Event May Have Lasted Longer Than We Thought

Thu, 05/15/2025 - 12:44
Source: Geophysical Research Letters

Fifty-six million years ago, during the Paleocene-Eocene Thermal Maximum (PETM), global temperatures rose by more than 5°C over 100,000 or more years. Between 3,000 and 20,000 petagrams of carbon were released into the atmosphere during this time, severely disrupting ecosystems and ocean life globally and creating a prolonged hothouse state.

Modern anthropogenic global warming is also expected to upend Earth’s carbon cycle for thousands of years. Between 1850 and 2019, approximately 2,390 petagrams of carbon dioxide (CO2) were released into the atmosphere, and the release of another 5,000 petagrams in the coming centuries is possible with continued fossil fuel consumption. However, estimates of how long the disruption will last range widely, from about 3,000 to 165,000 years.

Understanding how long the carbon cycle was disrupted during the PETM could offer researchers insights into how severe and how long-lasting disruptions stemming from anthropogenic climate change may be. Previous research used carbon isotope records to estimate that the PETM lasted 120,000–230,000 years. Piedrahita et al. now suggest that the warming event lasted almost 269,000 years.

The PETM appears in the geological record as a substantial drop in stable carbon isotope ratios. This drop is split into three phases, each representing a different part of the carbon cycle’s disruption and recovery. Previous estimates of when the isotopic drop ended have varied widely because of noise in the data on which they’re based.

In the new research, scientists studied six sedimentary records whose ages have been reliably estimated in previous work: one terrestrial record from Wyoming’s Bighorn Basin and five marine sedimentary records from various locations. Rather than using only raw data, as in previous studies, they used a probabilistic detection limit to account for analytical and chronological uncertainties and constrain the time frame of the PETM.
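The flavor of a probabilistic detection limit can be conveyed with a toy Monte Carlo sketch: perturb a synthetic isotope record within assumed analytical and age uncertainties many times, and record when the excursion detectably ends in each realization. Every number below is invented for illustration; this is not the authors’ method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy carbon isotope record: sharp drop followed by slow recovery.
t = np.linspace(0, 400, 401)          # kyr relative to PETM onset
true_d13c = -3.0 * np.exp(-t / 80.0)  # excursion decaying back to baseline

n_trials = 5000
end_times = np.empty(n_trials)
for i in range(n_trials):
    # Analytical noise on each measurement plus a whole-record age shift.
    noisy = true_d13c + rng.normal(0, 0.15, t.size)
    age_shift = rng.normal(0, 10)     # kyr of chronological uncertainty
    # Toy detection rule: the excursion "ends" once 10 consecutive
    # values sit within 2-sigma analytical noise of the baseline.
    recovered = np.abs(noisy) < 2 * 0.15
    idx = np.argmax(np.convolve(recovered, np.ones(10), "valid") == 10)
    end_times[i] = t[idx] + age_shift

print(f"median end: {np.median(end_times):.0f} kyr, "
      f"95% range: {np.percentile(end_times, [2.5, 97.5]).round(0)}")
```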

The recovery period in particular, this new study suggests, took much longer than previous estimates indicated—more than 145,000 years. The extended recovery time during the PETM likely means that future climate change scenarios will influence the carbon cycle for longer than most carbon cycle models predict, according to the researchers. (Geophysical Research Letters, https://doi.org/10.1029/2024GL113117, 2025)

—Rebecca Owen (@beccapox.bsky.social), Science Writer

Citation: Owen, R. (2025), An ancient warming event may have lasted longer than we thought, Eos, 106, https://doi.org/10.1029/2025EO250188. Published on 15 May 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Climate Warming Is Changing Drought Conditions in Eurasia

Thu, 05/15/2025 - 12:42
Source: AGU Advances

This is an authorized translation of an Eos article.

Determining to what extent changes in global drought conditions are attributable to natural hydroclimate variability and to what extent they are caused by climate change is a complex task. Scientists often use sophisticated computer models to simulate past climate variability and identify unprecedented drought conditions. These models can also help identify the factors driving those conditions, such as temperature, precipitation, and land use change. However, the models can also contain biases, which may affect the credibility of drought estimates for some regions.

Because tree rings grow wider in warmer, wetter years and thinner in drier, colder years, they serve as records of natural climate variability and offer a complementary approach to model-based hydroclimate reconstructions. To study drought conditions in Europe and Asia, Marvel et al. used tree ring measurements from the newly published Great Eurasian Drought Atlas (GEDA), which contains records from thousands of trees that grew between 1000 and 2020 CE.

The team divided the GEDA data according to the land regions defined in the Intergovernmental Panel on Climate Change’s Sixth Assessment Report. Using tree ring measurements from 1000 to 1849, they estimated preindustrial variability in each region’s average Palmer Drought Severity Index (PDSI), a commonly used measure of drought risk. They then assessed whether this preindustrial variability could explain modern (1850–2020) PDSI values.
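The logic of that comparison can be illustrated with a minimal sketch (not the authors’ actual statistical machinery): build an envelope of preindustrial variability and ask how often modern values fall outside it. The series below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder series: annual regional-mean PDSI reconstructed from tree rings.
preindustrial = rng.normal(0.0, 1.0, size=850)   # 1000-1849 CE
modern = rng.normal(-0.8, 1.0, size=171)         # 1850-2020 CE

# Preindustrial variability defines the envelope of "natural" drought swings.
lo, hi = np.percentile(preindustrial, [2.5, 97.5])

# Fraction of modern years that natural variability alone cannot explain.
unexplained = np.mean((modern < lo) | (modern > hi))
print(f"modern years outside the preindustrial envelope: {unexplained:.0%}")
```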

The researchers found that in many regions, modern changes in PDSI are more accurately explained by rising global temperatures, indicating that 21st century drought conditions are unlikely to be the result of natural variability alone. The results show that as the climate warms, eastern Europe, the Mediterranean, and the Russian Arctic are becoming drier, while northern Europe, eastern Central Asia, and Tibet are becoming wetter.

The researchers note that tree rings can be influenced by factors other than climate change. However, those factors are unlikely to substantially affect the results, because databases such as GEDA generally include data from selectively sampled sites and tree species for which climate is the dominant influence on ring growth. (AGU Advances, https://doi.org/10.1029/2024AV001289, 2025)

—Sarah Derouin (@sarahderouin.bsky.social), Science Writer

This translation was made by Wiley.

Read this article on WeChat.

Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Can Desalination Quench Agriculture’s Thirst?

Thu, 05/15/2025 - 12:42

This story was originally published by Knowable Magazine.

Ralph Loya was pretty sure he was going to lose the corn. His farm had been scorched by El Paso’s hottest-ever June and second-hottest August; the West Texas county saw 53 days soar over 100 degrees Fahrenheit in the summer of 2024. The region was also experiencing an ongoing drought, which meant that crops on Loya’s eight-plus acres of melons, okra, cucumbers and other produce had to be watered more often than normal.

Loya had been irrigating his corn with somewhat salty, or brackish, water pumped from his well, as much as the salt-sensitive crop could tolerate. It wasn’t enough, and the municipal water was expensive; he was using it in moderation and the corn ears were desiccating where they stood.

Ensuring the survival of agriculture under an increasingly erratic climate is approaching a crisis in the sere and sweltering Western and Southwestern United States, an area that supplies much of our beef and dairy, alfalfa, tree nuts and produce. Contending with too little water to support their plants and animals, farmers have tilled under crops, pulled out trees, fallowed fields and sold off herds. They’ve also used drip irrigation to inject smaller doses of water closer to a plant’s roots, and installed sensors in soil that tell more precisely when and how much to water.

In the last five years, researchers have begun to puzzle out how brackish water, pulled from underground aquifers, might be de-salted cheaply enough to offer farmers another water resilience tool. Loya’s property, which draws its slightly salty water from the Hueco Bolson aquifer, is about to become a pilot site to test how efficiently desalinated groundwater can be used to grow crops in otherwise water-scarce places.

Desalination renders salty water less so. It’s usually applied to water sucked from the ocean, generally in arid lands with few options; some Gulf, African and island countries rely heavily or entirely on desalinated seawater. Inland desalination happens away from coasts, with aquifer waters that are brackish—containing between 1,000 and 10,000 milligrams of salt per liter, versus around 35,000 milligrams per liter for seawater. Texas has more than three dozen centralized brackish groundwater desalination plants, California more than 20.

Such technology has long been considered too costly for farming. Some experts still think it’s a pipe dream. “We see it as a nice solution that’s appropriate in some contexts, but for agriculture it’s hard to justify, frankly,” says Brad Franklin, an agricultural and environmental economist at the Public Policy Institute of California. Desalting an acre-foot (almost 326,000 gallons) of brackish groundwater for crops now costs about $800, while farmers can pay a lot less—as little as $3 an acre-foot for some senior rights holders in some places—for fresh municipal water. As a result, desalination has largely been reserved to make liquid that’s fit for people to drink. In some instances, too, inland desalination can be environmentally risky, endangering nearby plants and animals and reducing stream flows.

Brackish (slightly salty) groundwater is found mostly in the Western United States. Credit: J.S. Stanton et al. / Brackish Groundwater in the United States: USGS professional paper 1833, 2017

But the US Bureau of Reclamation and the National Alliance for Water Innovation (NAWI), a research operation granted $185 million by the Department of Energy, have recently invested in projects that could turn that paradigm on its head. Recognizing the urgent need for fresh water for farms—which in the US are mostly inland—combined with the ample if salty water beneath our feet, these entities have funded projects that could help advance small, decentralized desalination systems that can be placed right on farms where they’re needed. Loya’s is one of them.

US farms consume over 83 million acre-feet (more than 27 trillion gallons) of irrigation water every year—the second most water-intensive industry in the country, after thermoelectric power. Not all aquifers are brackish, but most that are exist in the country’s West, and they’re usually more saline the deeper you dig. With fresh water everywhere in the world becoming saltier due to human activity, “we have to solve inland desal for ag…in order to grow as much food as we need,” says Susan Amrose, a research scientist at MIT who studies inland desalination in the Middle East and North Africa.

That means lowering energy and other operational costs; making systems simple for farmers to run; and figuring out how to slash residual brine, which requires disposal and is considered the process’s “Achilles’ heel,” according to one researcher.

The last half-decade of scientific tinkering is now yielding tangible results, says Peter Fiske, NAWI’s executive director. “We think we have a clear line of sight for agricultural-quality water.”

Swallowing the High Cost

Fiske believes farm-based mini-plants can be cost-effective for producing high-value crops like broccoli, berries and nuts, some of which need a lot of irrigation. That $800 per acre-foot has been achieved by cutting energy use, reducing brine and revolutionizing certain parts and materials. It’s still expensive but arguably worth it for a farmer growing almonds or pistachios in California—as opposed to farmers growing lesser-value commodity crops like wheat and soybeans, for whom desalination will likely never prove affordable. As a nut farmer, “I would sign up to 800 bucks per acre-foot of water till the cows come home,” Fiske says.

Loya’s pilot is being built with Bureau of Reclamation funding and will use a common process called reverse osmosis. Pressure pushes salty water through a semi-permeable membrane; fresh water comes out the other side, leaving salts behind as concentrated brine. Loya figures he can make good money using desalinated water to grow not just fussy corn, but even fussier grapes he might be able to sell at a premium to local wineries.

Such a tiny system shares some of the problems of its large-scale cousins—chiefly, brine disposal. El Paso, for example, boasts the biggest inland desalination plant in the world, which makes 27.5 million gallons of fresh drinking water a day. There, every gallon of brackish water gets split into two streams: fresh water and residual brine, at a ratio of 83 percent to 17 percent. Since there’s no ocean to dump brine into, as with seawater desalination, this plant injects it into deep, porous rock formations—a process too pricey and complicated for farmers.

But what if desalination could create 90 or 95 percent fresh water and 5 to 10 percent brine? What if you could get 100 percent fresh water, with just a bag of dry salts left over? Handling those solids is a lot safer and easier, “because super-salty water brine is really corrosive…so you have to truck it around in stainless steel trucks,” Fiske says.
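The brine stakes behind those recovery figures are simple arithmetic, sketched below using the El Paso plant’s quoted daily output; the higher recovery values are the hypothetical targets mentioned above.

```python
# How much brine a plant produces per day at different freshwater recoveries.
FRESH_OUT_GAL = 27_500_000  # El Paso plant's daily freshwater output

for recovery in (0.83, 0.90, 0.95):
    feed = FRESH_OUT_GAL / recovery   # brackish water pumped in
    brine = feed - FRESH_OUT_GAL      # concentrate left over
    print(f"{recovery:.0%} recovery -> {brine / 1e6:5.2f} M gal of brine/day")

# At 83% recovery the plant leaves ~5.6 M gal of brine per day;
# raising recovery to 95% cuts that to ~1.4 M gal per day.
```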

Finally, what if those salts could be broken into components—lithium, essential for batteries; magnesium, used to create alloys; gypsum, turned into drywall; as well as gold, platinum, and rare earth elements that can be sold to manufacturers? Already, the El Paso plant participates in “mining” gypsum and hydrochloric acid for industrial customers.

Loya’s brine will be piped into an evaporation pond. Eventually, he’ll have to pay to landfill the dried-out solids, says Quantum Wei, founder and CEO of Harmony Desalting, which is building Loya’s plant. There are other expenses: drilling a well (Loya, fortuitously, already has one to serve the project); building the physical plant; and supplying the electricity to pump water up day after day. These are bitter financial pills for a farmer. “We’re not getting rich; by no means,” Loya says.

Rows of reverse osmosis membranes at the Kay Bailey Hutchison Desalination Plant in El Paso. Credit: Ada Cowan

More cost comes from the desalination itself. The energy needed for reverse osmosis is a lot, and the saltier the water, the higher the need. Additionally, the membranes that catch salt are gossamer-thin, and all that pressure destroys them; they also get gunked up and need to be treated with chemicals.

Reverse osmosis presents another problem for farmers. It doesn’t just remove salt ions from water but the ions of beneficial minerals, too, such as calcium, magnesium and sulfate. According to Amrose, this means farmers have to add fertilizer or mix in pretreated water to replace essential ions that the process took out.

To circumvent such challenges, one NAWI-funded team is experimenting with ultra-high-pressure membranes, fashioned out of stiffer plastic, that can withstand a much harder push. The results so far look “quite encouraging,” Fiske says. Another is looking into a system in which a chemical solvent dropped into water isolates the salt without a membrane, like the polymer inside a diaper absorbs urine. The solvent, in this case the common food-processing compound dimethyl ether, would be used over and over to avoid potentially toxic waste. It has proved cheap enough to be considered for agricultural use.

Amrose is testing a system that uses electrodialysis instead of reverse osmosis. This sends a steady surge of voltage across water to pull salt ions through an alternating stack of positively charged and negatively charged membranes. Explains Amrose, “You get the negative ions going toward their respective electrode until they can’t pass through the membranes and get stuck,” and the same happens with the positive ions. The process gets much higher fresh water recovery in small systems than reverse osmosis, and is twice as energy efficient at lower salinities. The membranes last longer, too—10 years versus three to five years, Amrose says—and can allow essential minerals to pass through.

Data-Based Design

At Loya’s farm, Wei paces the property on a sweltering summer morning with a local engineering company he’s tapped to design the brine storage pond. Loya is anxious that the pond be as small as possible to keep arable land in production; Wei is more concerned that it be big and deep enough. To size it, he’ll look at average weather conditions since 1954 as well as worst-case data from the last 25 years pertaining to monthly evaporation and rainfall rates. He’ll also divide the space into two sections so one can be cleaned while the other is in use. Loya’s pond will likely be one-tenth of an acre, dug three to six feet deep.

(Left to right) West Texas farmer Ralph Loya, Quantum Wei of Harmony Desalting, and engineer Johanes Makahaube discuss where a desalination plant and brine pond might be placed on Loya’s farm. Credit: Ada Cowan

The desalination plant will pair reverse osmosis membranes with a “batch” process, pushing water through multiple times instead of once and gradually amping up the pressure. Regular reverse osmosis is energy-intensive because it constantly applies the highest pressures, Wei says, but Harmony’s process saves energy by using lower pressures to start with. A backwash between cycles prevents scaling by dissolving mineral crystals and washing them away. “You really get the benefit of the farmer not having to deal with dosing chemicals or replacing membranes,” Wei says. “Our goal is to make it as painless as possible.”
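A back-of-the-envelope calculation suggests why ramping the pressure saves energy: the minimum pressure needed at any instant is the feed’s osmotic pressure, which climbs as the batch concentrates, while a constant-pressure design must apply the final, highest pressure throughout. The sketch below uses the van ’t Hoff approximation for an NaCl-like brackish feed and ignores pump losses; it is an idealization, not Harmony’s design math.

```python
import numpy as np

# van 't Hoff osmotic pressure: pi = i * C * R * T
R, T, i = 0.08314, 298.0, 2   # L*bar/(mol*K), K, ions per NaCl unit
c0 = 3.0 / 58.4               # 3 g/L brackish feed, in mol/L

recovery = np.linspace(0.0, 0.90, 200)
conc = c0 / (1.0 - recovery)  # remaining feed concentrates as water leaves
pi = i * conc * R * T         # bar; minimum pressure to keep permeating

# A constant-pressure system must apply the endpoint pressure the whole
# time; an ideal batch system tracks pi, so its average pressure is lower.
print(f"final pressure needed:  {pi[-1]:.1f} bar")
print(f"batch average pressure: {pi.mean():.1f} bar")
```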

Another Harmony innovation concentrates leftover brine by running it through a nanofiltration membrane in their batch system; such membranes are usually used to pretreat water to cut back on scaling or to recover minerals, but Wei believes his system is the first to combine them with batch reverse osmosis. “That’s what’s really going to slash brine volumes,” he says. The whole system will be hooked up to solar panels, keeping Loya’s energy off-grid and essentially free. If all goes to plan, the system will be operational by early 2025 and produce seven gallons of fresh water a minute during the strongest sun of the day, with a goal of 90 to 95 percent fresh water recovery. Any water not immediately used for irrigation will be stored in a tank.

Spreading Out the Research

Ninety-eight miles north of Loya’s farm, along a dead flat and endlessly beige expanse of road that skirts the White Sands Missile Range, more desalination projects burble away at the Brackish Groundwater National Desalination Research Facility in Alamogordo, New Mexico. The facility, run by the Bureau of Reclamation, offers scientists a lab and four wells of differing salinities to fiddle with.

On some parched acreage at the foot of the Sacramento Mountains, a longstanding farming pilot project bakes in relentless sunlight. After some preemptive words about the three brine ponds on the property—“They have an interesting smell, in between zoo and ocean”—facility manager Malynda Cappelle drives a golf cart full of visitors past solar arrays and water tanks to a fenced-in parcel of dust and plants. Here, since 2019, a team from the University of North Texas, New Mexico State University and Colorado State University has tested sunflowers, fava beans and, currently, 16 plots of pinto beans. Some plots are bare dirt; others are topped with compost that boosts nutrients, keeps soil moist and provides a salt barrier. Some plots are drip-irrigated with brackish water straight from a well; some get a desalinated/brackish water mix.

Even from a distance, the plants in the freshest-water plots look large and healthy. But those with compost are almost as vigorous, even when irrigated with brackish water. This could have significant implications for cash-conscious farmers. “Maybe we do a lesser level of desalination, more blending, and this will reduce the cost,” says Cappelle.

Pei Xu has been a co-investigator on this project since its start. She’s also the progenitor of a NAWI-funded pilot at the El Paso desalination plant. Later in the day, in a high-ceilinged space next to the plant’s treatment room, she shows off its consequential bits. Like Amrose’s system, hers uses electrodialysis. In this instance, though, Xu is aiming to squeeze a bit of additional fresh—at least freshish—water from the plant’s leftover brine. With suitably low levels of salinity, the plant could pipe it to farmers through the county’s existing canal system, turning a waste product into a valuable resource.

Xu’s pinto bean and El Paso work, and Amrose’s in the Middle East, are all relevant to Harmony’s pilot and future projects. “Ideally we can improve desalination to the point where it’s an option which is seriously considered,” Wei says. “But more importantly, I think our role now and in the future is as water stewards—to work with each farm to understand their situation and then to recommend their best path forward…whether or not desalting is involved.”

Indeed, as water scarcity becomes ever more acute, desalination advances will help agriculture only so much; even researchers who’ve devoted years to solving its challenges say it’s no panacea. “What we’re trying to do is deliver as much water as cheaply as possible, but that doesn’t really encourage smart water use,” says NAWI’s Fiske. “In some cases, it encourages even the reverse. Why are we growing alfalfa in the middle of the desert?”

Franklin, of the California policy institute, highlights another extreme: Twenty-one of the state’s groundwater basins are already critically depleted, some due to agricultural overdrafting. Pumping brackish aquifers for desalination could aggravate environmental risks.

There is an array of measures, researchers say, that farmers themselves must take in order to survive, with rainwater capture and the fixing of leaky infrastructure at the top of the list. “Desalination is not the best, only or first solution,” Wei says. But he believes that when used wisely in tandem with other smart partial fixes, it could prevent some of the worst water-related catastrophes for our food system.

—Lela Nargi, Knowable Magazine

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter. Read the original article here.

Old Forests in a New Climate

Thu, 05/15/2025 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: AGU Advances

The shading and evapotranspiration provided by forest vegetation buffers the understory climate, making it cooler than the surrounding non-forest. But does that buffering help prevent the forest from warming as much as its surroundings due to climate change?

Using a 45-year record in the H.J. Andrews Forest, Oregon, USA, Jones et al. [2025] compare changes in climate along a 1,000-meter elevation gradient with changes at nearby non-forested weather stations. The understory air temperature at every elevation within the forest increased at rates similar to, and in some cases greater than, those measured at meteorological stations throughout Oregon and Washington, indicating that the forest is not decoupled or protected from the effects of climate change.

Furthermore, the increase in summer maximum air temperature has been as large as 5 degrees Celsius throughout the forest. For some summer months, the temperature at the top of the elevation gradient is now about the same as it was at the lowest elevation 45 years ago. These findings are important because they indicate that, while forests confer cooler environments compared to non-forest, they are not protected from climate change.

Comparison of maximum air temperature in July from 1979 to 2023 in the Andrews Forest at 1,310 meters elevation (site RS04) and at 683 meters (site RS20) and the statewide average air temperature for Oregon. The high elevation site is consistently cooler than the low elevation site, and both are cooler than the average meteorological stations of Oregon, which includes non-forest sites. Hence, the forest vegetation does buffer (cool) the air temperature, but the slopes of the increase in temperature over time are similar, with the forest perhaps warming a bit faster than the statewide mean, indicating that the forests are not decoupled from the effects of climate change. Credit: Jones et al. [2025], Figure 4a

Citation: Jones, J. A., Daly, C., Schulze, M., & Still, C. J. (2025). Microclimate refugia are transient in stable old forests, Pacific Northwest, USA. AGU Advances, 6, e2024AV001492. https://doi.org/10.1029/2024AV001492

—Eric Davidson, Editor, AGU Advances

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Geological complexity as a way to understand the distribution of landslides

Thu, 05/15/2025 - 06:37

The Landslide Blog is written by Dave Petley, who is widely recognized as a world leader in the study and management of landslides.

Over the course of my career, I have read many papers (and indeed, written a few) that have tried to explain the distribution of landslides based upon combinations of factors that we consider might be important in their causation (for example, slope angle and lithology). There is utility in this type of approach, and it has informed planning guidelines in some countries, for example. However, it also has severe limitations and, even with the advent of artificial intelligence, there have been few major advances in this area for a while.

However, there is a very interesting and thought-provoking paper (Zhang et al. 2025) in the Bulletin of Engineering Geology and the Environment that might stimulate considerable interest. One reason for highlighting it here is that it might drop below the radar – this is not a well-read journal in my experience, and the paper is behind a paywall. That would be a shame, but the link in this post should allow you to read the paper.

The authors argue that we tend to treat geological factors in a rather over-simplified way in susceptibility analyses:-

“The types, triggers, and spatial distribution of landslides are closely related to the spatial complexity of geological conditions, which are indispensable factors in landslide susceptibility assessment. However, geological conditions often consider only a single index, leading to under-utilisation of geological information in assessing landslide hazards.”

Instead, they propose the use of an index of “geological complexity” (see the sketch after this list). This index combines four major geological components:

  • Structural complexity – capturing dip direction, dip angle, slope and aspect;
  • Lithologic complexity – this essentially uses a geological map to capture the number of lithologic types per unit area;
  • Tectonic complexity – this represents the density of mapped faults;
  • Seismicity – this captures the distribution of the probability of peak ground accelerations.
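A minimal sketch of how four such component layers might be fused into one index appears below; the grids, normalization, and weights are placeholders for illustration, not the values Zhang et al. (2025) derive.

```python
import numpy as np

def normalize(layer):
    """Rescale a raster layer to [0, 1] so components are comparable."""
    return (layer - layer.min()) / (layer.max() - layer.min())

# Placeholder component rasters over the same grid (values invented).
rng = np.random.default_rng(1)
structural = rng.random((100, 100))  # dip direction/angle vs slope/aspect
lithologic = rng.random((100, 100))  # lithologic types per unit area
tectonic = rng.random((100, 100))    # density of mapped faults
seismicity = rng.random((100, 100))  # probability of peak ground acceleration

# Placeholder weights; Zhang et al. (2025) derive theirs analytically.
w = {"structural": 0.40, "lithologic": 0.20,
     "tectonic": 0.25, "seismicity": 0.15}

complexity = (w["structural"] * normalize(structural)
              + w["lithologic"] * normalize(lithologic)
              + w["tectonic"] * normalize(tectonic)
              + w["seismicity"] * normalize(seismicity))
print("geological complexity range:",
      complexity.min().round(2), "-", complexity.max().round(2))
```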

Zhang et al. (2025) use an analytical approach to weight each of these factors to produce an index of geological complexity across the landscape. In this piece of work, they then compare the results with the distribution of mapped landslides in a study area in the Eastern Himalayan Syntaxis in Tibet (centred on about [29.5, 95.25]). This is the broad area studied:-

Google Earth map of the area studied by Zhang et al. (2025) to examine the role of geological complexity in landslide distribution.

Now this is a fascinating study area – the Google Earth image below shows a small part of it – note the many landslides:-

Google Earth image of a part of the area studied by Zhang et al. (2025) to examine the role of geological complexity in landslide distribution.

Zhang et al. (2025) are able to show that, for this area at least, the spatial distribution of their index of geological complexity correlates well with the mapped distribution of landslides (there are 366 mapped landslides in the 16,606 km2 of the study area).

The authors are clear that this is not the final word on this approach. There is little doubt that this part of Tibet is a highly dynamic area in terms of both climate and tectonics, which probably favours structurally controlled landslides. To what degree would this approach work in a different setting? In addition, acquiring reliable data that represents the components could be a real challenge (e.g. structural data and reliable estimates of probability of peak ground accelerations), and of course the relative weighting of the different components of the index is an open question.

But, it introduces a fresh and really interesting approach that is worth exploring more widely. Zhang et al. (2025) note that there is the potential to combine this index with other indices that measure factors in landslide causation (e.g. topography, climate and human activity) to produce an enhanced susceptibility assessment.

And finally, of course, this approach is providing insights into the ways in which different geological factors aggregate at a landscape scale to generate landslides. That feels like a fundamental insight that is also worth developing.

Thus, this work potentially forms the basis of a range of new studies, which is tremendously exciting.

Reference

Zhang, Y., et al. 2025. Geological complexity: A novel index for measuring the relationship between landslide occurrences and geological conditions. Bulletin of Engineering Geology and the Environment, 84, 301. https://doi.org/10.1007/s10064-025-04333-9.

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

EPA to Rescind Rules on Four Forever Chemicals

Wed, 05/14/2025 - 13:51
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

The EPA plans to reconsider drinking water limits for four different PFAS chemicals and extend deadlines for public water systems to comply, according to The Washington Post.

PFAS, or per- and polyfluoroalkyl substances, are a group of chemicals that are widely used for their water- and stain-resistant properties. Exposure to PFAS is linked to higher risks of certain cancers, reproductive health issues, developmental delays and immune system problems. The so-called “forever chemicals” are ubiquitous in the environment and widely contaminate drinking water.

A rule finalized last year under President Joe Biden set drinking water limits for five common PFAS chemicals: PFOA, PFOS, PFHxS, PFNA, and GenX. Limits for PFOA and PFOS were set at 4 parts per trillion, and limits for PFHxS, PFNA, and GenX were set at 10 parts per trillion. The rule also set limits for mixtures of these chemicals and a sixth, PFBS.

Documents reviewed by The Washington Post show that the EPA plans to rescind and reconsider the limits for PFHxS, PFNA, GenX, and PFBS. Though the documents did not indicate a plan to reconsider limits for PFOA and PFOS, the agency does plan to extend the compliance deadlines for PFOA and PFOS limits from 2029 to 2031.

In the documents, Lee Zeldin, the agency’s administrator, said the plan will “protect Americans from PFOA and PFOS in their drinking water” and provide “common-sense flexibility in the form of additional time for compliance.”

 
PFOA is a known carcinogen and PFOS is classified as a possible carcinogen by the National Cancer Institute.

The EPA plan comes after multiple lawsuits against the EPA in which trade associations representing water utilities challenged the science behind Biden’s drinking water standard. 

Experts expressed concern that rescinding and reconsidering limits for the four chemicals may not be legal because the Safe Drinking Water Act requires each revision to EPA drinking water standards to be at least as strict as the former regulation. 

“The law is very clear that the EPA can’t repeal or weaken the drinking water standard. Any effort to do so will clearly violate what Congress has required for decades,” Erik Olson, the senior strategic director for health at the Natural Resources Defense Council, an advocacy group, told The Washington Post.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Resilient Solutions Involve Input and Data from the Community

Wed, 05/14/2025 - 13:36
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Community Science Exchange

Climate Safe Neighborhoods (CSN), a national effort by Groundwork USA, is a program that supports local communities in understanding their climate risk and providing input about vulnerabilities and solutions. Working with students, local universities, and organizations, the CSN program (which began in Cincinnati) was extended to northern Kentucky.

A GIS-based dashboard was created to give communities access to data related to climate change and other social issues, from health to demographics, in one place. A climate vulnerability model within the dashboard helped identify the Kentucky communities most in need; these neighborhoods were the focus of community workshops where residents learned about climate impacts and collaborated on potential solutions. Community partners helped plan and run the workshops, which included opportunities for residents to provide feedback through mapping activities. These data were added to the dashboard and later used to support climate solutions such as climate advisory groups and tree plantings.

In their project report, Robles et al. [2025] outline the process and outcomes of the program which can serve as inspiration to others looking to support and collaborate with communities in becoming more resilient to climate impacts.

Citation: Robles, Z., et al. (2025), Climate Safe Neighborhoods: A community collaboration for a more climate-resilient future, Community Science Exchange, https://doi.org/10.1029/2024CSE000101. Published 7 February 2025.  

—Kathryn Semmens, Deputy Editor, Community Science Exchange

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Have We Finally Found the Source of the “Sargassum Surge”?

Wed, 05/14/2025 - 13:11

Since 2011, massive mats of golden-brown seaweed—pelagic Sargassum—have repeatedly swamped the shores of the Caribbean, West Africa, and parts of Central and South America. These sprawling blooms have suffocated coral reefs, crippled tourism, and disrupted coastal life.

What caused this sudden explosion of seaweed in regions that had rarely experienced it before?

A modeling study published earlier this year in Nature Communications Earth & Environment offers one possible explanation. It links the start of this phenomenon to the extreme 2009–2010 phase of the North Atlantic Oscillation (NAO)—a rare climatic event involving stronger-than-usual westerlies and altered ocean currents. According to the study, NAO conditions transported Sargassum from its historic home in the Sargasso Sea in the western North Atlantic into tropical waters farther south, where nutrient-rich upwellings and warm temperatures triggered the algae’s explosive growth.

Migrating Macroalgae

Julien Jouanno, senior scientist at the Institut de Recherche pour le Développement and head of the Dynamics of Tropical Oceans team at Laboratoire d’Etudes en Géophysique et Océanographie Spatiales in Toulouse, France, led the modeling work behind the study.

“Our simulations, which combine satellite observations with a coupled ocean-biogeochemical model, suggest that ocean mixing—not river discharge—is the main nutrient source fueling this proliferation,” Jouanno explained. The model incorporates both ocean circulation and biological processes like growth and decay, enabling the team to test various scenarios involving inputs such as ocean fertilization by rivers (such as the Amazon) or influxes of nutrients from the atmosphere (such as dust from the Sahara).

“Turning off river nutrients in the model only reduced biomass by around 15%,” said Jouanno. “But eliminating deep-ocean mixing caused the blooms to collapse completely. That’s a clear indicator of what’s actually driving the system.”

“When we exclude the ocean current anomaly linked to the NAO, Sargassum stays mostly confined to the Sargasso Sea,” Jouanno said. “But once it’s included, we start to see the early formation of what is now known as the Great Atlantic Sargassum Belt.”

The Great Atlantic Sargassum Belt, first identified in 2011, is the largest macroalgae bloom in the world. The massive blooms sit below the Sargasso Sea and currents of the North Atlantic Ocean. Credit: López Miranda et al., 2021, https://doi.org/10.3389/fmars.2021.768470, CC BY 4.0

But not all scientists are convinced by the study. Some argue the truth is more complex, and more grounded in historic ecological patterns.

Was the Seaweed Already There?

Amy N. S. Siuda, an associate professor of marine science at Eckerd College in Florida and an expert in Sargassum ecology, critiqued the study’s core assumptions. “The idea that the 2011 bloom was seeded from the Sargasso Sea doesn’t hold up under scrutiny,” she said.

The dominant form of Sargassum present in the early blooms in the Caribbean and elsewhere (Sargassum natans var. wingei), she explained, “hasn’t been documented in the north Sargasso Sea at all, and only scarcely in the south.”

Historical records suggest, however, that the variety had long existed in the Caribbean and tropical Atlantic—just at such low concentrations that it was easily missed, Siuda said. She also cited population genetics research showing little physical mixing between S. natans var. wingei and other morphotypes through at least 2018.

“We were simply not looking closely enough,” she noted. “Early blooms on Caribbean beaches were misidentified. What we thought was S. fluitans var. fluitans, another common morphotype, turned out to be something else entirely.”

A Sargassum bloom can be difficult to model, Siuda explained. Models “can’t distinguish whether Sargassum is blooming or simply aggregating due to currents. Field data, shipboard observations, and genetic studies tell a much more complex story,” she said.

Donald Johnson, a senior research scientist at the University of Southern Mississippi, offered a different perspective. While he agreed that Sargassum has long existed in the tropical Atlantic, he believes the NAO may have also played a catalytic role in the blooms—just not in the way the original study claims.

“Holopelagic Sargassum has always been in the region—from the Gulf of Guinea to Dakar—as evidenced by earlier observations stretching back to Gabon,” Johnson explained. “What changed in 2010 was the strength of the Westerlies. Drifting buoys without drogues showed unusual eastward movement, possibly carrying Sargassum from the North Atlantic toward West Africa.”

He offered a crucial caveat, however: “There was never any clear satellite or coastal evidence of a massive influx [of Sargassum]. If the NAO did contribute, it may have done so gradually—adding to existing Sargassum in the region and pushing it over the threshold into a full-scale bloom.”

In this view, the 2011 event was less about transport and more about amplification—an environmental tipping point triggered by a convergence of factors already present in the system.

More Than Just Climate

Both Siuda and Johnson agreed that multiple nutrient sources in the tropical Atlantic are likely playing a major role in the ongoing blooms:

  • Riverine discharge from the Amazon, Congo, and Niger basins
  • Saharan dust, rich in iron and phosphates, blown westward each year
  • Seasonal upwelling and wind-driven mixing, particularly off West Africa and along the equator.

And, Johnson pointed out, persistent gaps in satellite coverage—due to cloud cover and the South Atlantic Anomaly—mean we’re still missing key pieces of the puzzle. “Modeling surface currents in the tropical Atlantic is extremely difficult,” he said. “First-mode long waves and incomplete data make it impossible to fully visualize how Sargassum is moving and growing.”

Ultimately, both researchers said that understanding these golden tides requires reconciling models with fieldwork, as well as recognizing the distinct morphotypes of Sargassum. “Each variety reacts differently to environmental conditions,” Siuda explained. “If we don’t account for that, we risk oversimplifying the entire phenomenon.”

“There’s a danger in leaning too heavily on satellite models,” Johnson cautioned. “They measure aggregation, not growth. Without field validation, assumptions about bloom dynamics could mislead management efforts.”

Jouanno, too, acknowledged the study’s limitations. The model does not differentiate between Sargassum morphotypes and struggles with interannual variability, particularly in peak bloom years like 2016 and 2019. “This was likely a regime shift—possibly amplified by climate change—and while we can simulate broad patterns, there’s still much we don’t know about how each bloom evolves year to year.”

“We’re still learning,” Jouanno said. “Our understanding of vertical mixing, surface stratification, and nutrient cycling in the tropics is incomplete—and the biology of different Sargassum types is another critical gap.”

Ultimately, Jouanno said, “This is climate-driven. The NAO was a catalyst, and ongoing warming may be sustaining it. But without better field data and biological detail, we can’t fully predict what comes next.”

—Sarah Nelson (@SarahPeter3), Science Writer

Citation: Nelson, S. (2025), Have we finally found the source of the “Sargassum surge”?, Eos, 106, https://doi.org/10.1029/2025EO250189. Published on 14 May 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Heat and Pollution Events Are Deadly, Especially in the Global South

Wed, 05/14/2025 - 13:10
Source: GeoHealth

Small particulate matter (PM2.5) in air pollution raises the risks of respiratory problems, cardiovascular disease, and even cognitive decline. Heat waves, which are occurring more often with climate change, can cause heatstroke and exacerbate conditions such as asthma and diabetes. When heat and pollution coincide, they can create a deadly combination.

Existing studies on hot and polluted episodes (HPEs) have often focused on local, urban settings, so their findings are not necessarily representative of HPEs around the world. To better understand premature mortality associated with pollution exposure during HPEs at multiple scales and settings, Huang et al. looked at a global record of climate and PM2.5 levels from 1990 to 2019.

The team used data from the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), which included hourly concentration measurements of PM2.5 in the form of dust, sea salt, black carbon, organic carbon, and sulfate particles. Daily maximum temperatures were obtained from ERA5, the fifth-generation European Centre for Medium-Range Weather Forecasts atmospheric reanalysis.

The researchers also conducted a meta-analysis of health literature, identifying relevant research using the search terms “PM2.5,” “high temperature,” “heatwaves,” and “all-cause mortality” in the PubMed, Scopus, and Web of Science databases. Then, they conducted a statistical analysis to estimate PM2.5-associated premature mortality events during HPEs.
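Estimates of this kind typically rest on attributable-fraction arithmetic: a relative risk drawn from the epidemiological literature converts baseline deaths in the exposed population into deaths attributable to the exposure. The sketch below shows that arithmetic with invented inputs; it is not the study’s actual model.

```python
# Attributable-mortality arithmetic with placeholder inputs.
rr_per_10ug = 1.08         # hypothetical relative risk per 10 ug/m3 of PM2.5
delta_pm = 35.0            # hypothetical PM2.5 excess during an episode, ug/m3
baseline_deaths = 5_000.0  # hypothetical all-cause deaths in exposed population

# Scale the relative risk to the observed exposure increment.
rr = rr_per_10ug ** (delta_pm / 10.0)

# Attributable fraction: share of deaths that would not occur without exposure.
af = 1.0 - 1.0 / rr
premature = baseline_deaths * af
print(f"RR = {rr:.2f}, AF = {af:.1%}, attributable deaths ~ {premature:.0f}")
```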

They found that both the frequency of HPEs and maximum PM2.5 levels during HPEs have increased significantly over the past 30 years. The team estimated that exposure to PM2.5 during HPEs caused 694,440 premature deaths globally between 1990 and 2019, 80% of which occurred in the Global South. With an estimated 142,765 deaths, India had the highest mortality burden by far, surpassing the combined total of China and Nigeria, which had the second- and third-highest burdens. The United States was the most vulnerable of the Global North countries, with an estimated 32,227 deaths.

The work also revealed that PM2.5 pollution during HPEs has steadily increased in the Global North despite years of emissions control efforts, and that the frequency of HPEs in the Global North surpassed that of the Global South in 2010. The researchers say the study underscores the importance of global collaboration on climate change policies and pollution mitigation to address environmental inequalities. (GeoHealth, https://doi.org/10.1029/2024GH001290, 2025)

—Sarah Derouin (@sarahderouin.com), Science Writer

Citation: Derouin, S. (2025), Heat and pollution events are deadly, especially in the Global South, Eos, 106, https://doi.org/10.1029/2025EO250151. Published on 14 May 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Denver’s Stinkiest Air Is Concentrated in Less Privileged Neighborhoods

Tue, 05/13/2025 - 13:42

The skunky smell of pot smoke. Burning stenches from a pet food factory. Smoke from construction sites. These are the smells that communities of color and lower income people in Denver are disproportionately exposed to at home and at work, according to a new study.

The study, published in the Journal of Exposure Science and Environmental Epidemiology, is one of the first to examine the environmental justice dimensions of bad odors in an urban setting.

There’s been a wealth of research in recent years showing that people of color and those with lower incomes are exposed to more air pollution, including nitrogen oxides and particulate matter. Exposure to air pollution causes or exacerbates cardiovascular and respiratory illnesses, among other health problems, and increases the overall risk of death.

Odors are more challenging to measure than other kinds of air pollution because they are chemically complex mixtures that dissipate quickly. “Odors are often ignored because they’re difficult to study and regulate,” said Arbor Quist, an environmental epidemiologist at the Ohio State University who was not involved with the research.

Though other kinds of air pollution in the United States are limited by federal laws and regulated at the state level, smells are typically regulated under local nuisance laws. Though somewhat subjective—some folks don’t mind a neighbor toking up—odors can have a big impact on how people experience their environment, and whether they feel safe. Bad smells can limit people’s enjoyment of their homes and yards, and reduce property values.

Odors are more than a nuisance—they pose real health risks. Exposure to foul smells is associated with headache, elevated blood pressure, irritated eyes and throat, nausea, and stress, among other ills.

University of Colorado Denver urban planning researcher Priyanka deSouza said local regulation of odors gives municipalities an opportunity to intervene in environmental health. “Odor is one of the ways municipalities can take action on air pollution,” she said.

Previous research on ambient odor air pollution has focused on point sources, including chemical spills and concentrated animal-feeding operations such as industrial hog farms. DeSouza said Denver’s unusually robust odor enforcement system made it possible to study the environmental justice dimensions of smelly air over a large geographical area.

Making a Stink

The city maintains a database of odor complaints that includes a description of the smell and the address of the complaint. DeSouza’s team used machine learning to identify themes in complaints made from 2014 to 2023. They found four major clusters: smells related to a Purina pet food factory, smells from a neighbor’s property, reports of smoke from construction and other work, and complaints about marijuana and other small industrial sources.

They used the text of the odor complaints and the locations of the complaints to deduce the likely source of the odor. For instance, complaints about the pet food factory often included the words night, dog, bad, and burn. Marijuana-related complaints frequently used the words strong and fume.
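A standard way to extract themes from free-text complaints is to vectorize the text and cluster it. The snippet below is a generic sketch of that approach with invented complaints; it is not the team’s pipeline.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented complaint texts standing in for the city's database.
complaints = [
    "burning dog food smell at night",
    "bad burn smell from the pet food plant at night",
    "strong marijuana fume from the warehouse",
    "strong skunky fume near the grow facility",
    "smoke from construction work next door",
    "smoke and dust from a demolition crew",
]

# TF-IDF turns each complaint into a weighted word-frequency vector.
X = TfidfVectorizer(stop_words="english").fit_transform(complaints)

# k-means groups complaints with similar vocabulary; the study found
# four themes, but this toy corpus only supports three.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for text, label in zip(complaints, labels):
    print(label, text)
```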

They also matched complaint locations against the addresses of 265 facilities that have been required by the city to come up with odor control plans for reasons including the nature of their business, or because five or more complaints have been filed about them within 30 days. (Growing, processing, or manufacturing marijuana occurs in 257 of these facilities.)

Less privileged people in Denver are more likely to live or work near businesses cited for creating bad smells, including marijuana facilities. Credit: Elsa Olofsson at cbdoracle.com/Flickr, CC BY 2.0

Less privileged census blocks—those with higher percentages of non-white workers and residents, residents with less formal education, lower median incomes, and lower property values—were more likely to contain a potentially smelly facility, according to the analysis. DeSouza said this is likely due to structural racism and historical redlining in Denver.

The facilities were concentrated in a part of the city that is isolated by two major freeways. Previous research has shown that people in these neighborhoods are exposed to more traffic-related air pollution, and that people of color, particularly Hispanic and Latino populations, are more likely to live there.

Yet people living and working in those areas weren’t more likely to register a complaint about bad smells than people in other parts of the city. In fact, most of the complaints came from parts of the city that are gentrifying. DeSouza said it’s not clear why people who live or work near a stinky facility aren’t more likely to complain than people who live farther away from one.

It may be that wind is carrying smells to more affluent neighborhoods, where more privileged people are more aware of Denver’s laws and feel empowered to complain. The research team, which includes researchers from the city’s public health department, is continuing to study odors in the city. Their next step is to integrate information about wind speed and direction with the odor complaints.

Quist said the study is unique in that it factors in potential workplace exposures, where people spend a large part of their day. Workplace exposures can also have health effects that aren’t captured in research that looks only at where people live. “A lot of research has focused on residential disparities,” she said, adding that the inclusion in the analysis of facilities that have had to submit odor-monitoring plans is also significant. “This is an important paper,” she said.

DeSouza said she suspects that people who live and work near smelly facilities may not be complaining because they feel disenfranchised. “People are resigned to odors, they have been living there a long time, and they don’t feel they have a voice.” If residents in less privileged neighborhoods were able to successfully lodge an odor complaint and get results, it may make them feel more connected in general to the city government, she added.

“I’m really interested in supporting policy action,” she said. “We’re trying to get residents to be aware that they can complain.”

—Katherine Bourzac, Science Writer

Citation: Bourzac, K. (2025), Denver’s stinkiest air is concentrated in less privileged neighborhoods, Eos, 106, https://doi.org/10.1029/2025EO250183. Published on 13 May 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The Uncertain Fate of the Beaufort Gyre

Tue, 05/13/2025 - 13:40
Source: Journal of Geophysical Research: Oceans

As freshwater from glacier melt, river runoff, and precipitation enters the Arctic Ocean, a circular current called the Beaufort Gyre traps it near the surface, slowly releasing the water into the Atlantic Ocean over decades. Warming global temperatures may weaken the wind patterns that keep the gyre turning, which could slow or even stop the current and release a flood of freshwater with a volume comparable to that of the Great Lakes. This deluge would cool and freshen the surrounding Arctic and North Atlantic oceans, affecting sea life and fisheries and possibly disrupting weather patterns in Europe.

Athanase et al. analyzed the Beaufort Gyre’s circulation patterns using 27 climate models from the Coupled Model Intercomparison Project Phase 6 (CMIP6), which informed the most recent Intergovernmental Panel on Climate Change (IPCC) report.

Before trying to predict the future behavior of the gyre, the researchers turned to the past. To assess how well CMIP6 models capture the gyre’s behavior, they compared records of how the gyre actually behaved to CMIP6 simulations of how it behaved, given known conditions in the ocean and the atmosphere.

It turns out that most CMIP6 models do not capture the gyre’s behavior very well. Some simulated no circulation at all, even though circulation clearly occurred. Others overestimated the area or strength of the gyre, shifted it too far north, or inaccurately estimated sea ice thickness within the gyre. Eleven of the models produced sea ice thickness estimates the researchers called “unacceptable.”

Despite these problems, the researchers pushed ahead, using the 18 CMIP6 models that most closely reflected the gyre’s true behavior to predict how circulation could change under two future emissions scenarios: intermediate and high. Most of the tested models showed that the gyre’s circulation will decline significantly by the end of this century, but their predictions for exactly when varied from the 2030s to the 2070s. Three models predicted that the gyre will not stop turning at all.
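
The paper's evaluation metrics are specific to the gyre, but the screening logic, in which each model is scored against the observed record and only the skillful subset is kept, can be sketched generically. Everything below (the skill metric, data, and threshold) is invented for illustration:

```python
import numpy as np

# Sketch of screening climate models against an observed record.
rng = np.random.default_rng(0)
years = np.arange(1980, 2015)
obs = rng.normal(15.0, 1.0, years.size)  # "observed" gyre strength, Sv

# Invented models, with increasingly noisy departures from observations.
models = {
    f"model_{i:02d}": obs + rng.normal(0, 0.1 * i, years.size)
    for i in range(1, 28)
}

def rmse(sim, ref):
    """Root-mean-square error of a simulated series against a reference."""
    return float(np.sqrt(np.mean((sim - ref) ** 2)))

threshold = 2.0  # assumed skill cutoff
kept = [name for name, series in models.items() if rmse(series, obs) < threshold]
print(f"{len(kept)} of {len(models)} models pass the screen")
```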

The gyre is most likely to disappear if emissions remain high, but it may stabilize as a smaller gyre if emissions are only moderate, the researchers found. Despite substantial uncertainty, the results are a reminder that when it comes to preventing the most disruptive effects of climate change, “every fraction of a degree matters,” they write. (Journal of Geophysical Research: Oceans, https://doi.org/10.1029/2024JC021873, 2025)

—Saima May Sidik (@saimamay.bsky.social), Science Writer

Citation: Sidik, S. M. (2025), The uncertain fate of the Beaufort Gyre, Eos, 106, https://doi.org/10.1029/2025EO250186. Published on 13 May 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Beyond Up and Down: How Arctic Ponds Stir Sideways

Tue, 05/13/2025 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Geophysical Research Letters

Arctic ponds play a key role in permafrost thaw and greenhouse gas emissions; however, their physical mixing processes remain poorly characterized. Most conceptual models assume that vertical, one-dimensional mixing—in which surface cooling makes water denser so that it sinks, stirring the water mass from the top down—is the primary mechanism for deep water renewal.

Henderson and MacIntyre [2025] challenge that model by showing that two-dimensional thermal overturning circulation dominates in a shallow permafrost pond. Specifically, nighttime surface cooling in shallow areas generates cold, dense water that flows downslope along the pond bed, displacing and renewing deeper waters. Using high-resolution velocity, temperature, and other related measurements, the authors demonstrate that these gravity currents ventilate the pond bottom at night despite persistent stable stratification. These findings reveal that lateral thermal flows can drive vertical exchange in small water bodies. The results have important implications for biogeochemical modeling and upscaling greenhouse gas fluxes across Arctic landscapes.
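
To get a rough feel for the mechanism, the speed of such a downslope current can be estimated from the reduced gravity g' = g(Δρ/ρ) ≈ g α ΔT and the classic gravity-current scale u ≈ sqrt(g'h). A back-of-envelope sketch with assumed pond values, not the paper's measurements:

```python
import math

# Illustrative scaling for a cold downslope gravity current in a shallow
# pond. All numbers are assumptions for this sketch.
g = 9.81       # gravitational acceleration, m/s^2
alpha = 1e-4   # thermal expansion of fresh water near 10 C, 1/K (approx.)
dT = 2.0       # assumed nighttime cooling of the shallow margin, K
h = 0.05       # assumed thickness of the cold near-bed layer, m

g_prime = g * alpha * dT      # reduced gravity from the density contrast
u = math.sqrt(g_prime * h)    # standard gravity-current speed scale
print(f"reduced gravity: {g_prime:.1e} m/s^2")
print(f"downslope current speed: ~{u * 100:.1f} cm/s")
```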

Diagram of nighttime water movement in a pond. At night, the shallow parts of the pond (near the right edge) cool down faster than the deeper parts. This creates thin layers of cold, dense water near the shore. Because this water is denser (heavier), it sinks and flows sideways along the sloped pond bottom toward the deepest part of the pond—like a slow, underwater landslide of cold water. As this cold water flows downhill, it pushes the existing bottom water upward, creating a gentle circulation loop: surface water cools and sinks at the edges, flows along the bottom, and pushes older deep water upward toward the middle. Credit: Henderson and MacIntyre, Figure 3a

Citation: Henderson, S. M., & MacIntyre, S. (2025). Thermal overturning circulation in an Arctic pond. Geophysical Research Letters, 52, e2024GL114541. https://doi.org/10.1029/2024GL114541

—Valeriy Ivanov, Editor, Geophysical Research Letters

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Some Tropical Trees Benefit from Lightning Strikes

Mon, 05/12/2025 - 13:10

Every now and then, some trees apparently just need a jolt. When struck by lightning, the large-crowned Dipteryx oleifera sustains minimal damage, whereas the trees and parasitic vines in its immediate vicinity often wither away or die altogether. That clearing out of competing vegetation results in a fourteenfold boost in lifetime seed production for D. oleifera, researchers estimated.

An Instrumented Forest

“This is the only place on Earth where we have consistent lightning tracking data with the precision needed to know [whether a strike] hit a patch of forest.”

Panama is often known for its eponymous canal. But Barro Colorado Island, in central Panama, is also home to what researchers who work in the area call “one of the best-studied patches of tropical forest on earth.” That’s because cameras and devices to measure electric fields are constantly surveying the forest from atop a series of towers, each about 40 meters high. Those instruments can reveal, among other information, the precise locations of lightning strikes. “This is the only place on Earth where we have consistent lightning tracking data with the precision needed to know [whether a strike] hit a patch of forest,” said Evan Gora, an ecologist at the Cary Institute of Ecosystem Studies and the Smithsonian Tropical Research Institute.

Such infrastructure is key to locating trees that have been struck by lightning, said Gabriel Arellano, a forest ecologist at the University of Michigan in Ann Arbor who was not involved in the research. “It’s very difficult to monitor lightning strikes and find the specific trees that were affected.”

That’s because a lightning strike to a tropical tree rarely leads to a fire, said Gora. More commonly, tropical trees hit by lightning look largely undamaged but die off slowly over a period of months.

Follow the Flashes

To better understand how large tropical trees are affected by lightning strikes, Gora and his colleagues examined 94 lightning strikes to 93 unique trees on Barro Colorado Island between 2014 and 2019. In 2021, the team traveled to the island to collect both ground- and drone-based imagery of each directly struck tree and its environs.

Gora and his colleagues recorded six metrics about the condition of each directly struck tree and its cadre of parasitic woody vines known as lianas—including crown loss, trunk damage, and percent of the crown infested with lianas. Lianas colonize the crowns of many tropical trees, using them for structure and competing with them for light. Think of someone sitting next to you and picking off half of every bite of food you take, Gora said. “That’s effectively what these lianas are doing.”

The team also surveyed the trees surrounding each directly struck tree. The electrical current of a lightning strike can travel through the air and pass through nearby trees as well, explained Gora. Where a struck tree’s branches are closest to its neighbors, “the ends of its branches and its neighbors’ will die,” Gora said. “You’ll see dozens of those locations.”

Thriving After Lightning

On average, the researchers found that about a quarter of trees directly struck by lightning died. But when the team divided up their sample by tree species, D. oleifera (more commonly known as the almendro or tonka bean tree) stood out for its uncanny ability to survive lightning strikes. The nine D. oleifera trees in the team’s sample consistently survived lightning strikes, whereas their lianas and immediate neighbors did not fare so well. “There was a pretty substantial amount of damage in the area, but not to the directly struck tree,” said Gora of the species. “This one never died.”

(Ten other species in the researchers’ cohort of trees also exhibited no mortality after being struck by lightning, but those samples were all too small—one or two individuals—to draw any robust conclusions from.)

A D. oleifera tree in Panama is shown just after being struck by lightning in 2019 (left) and 2 years later (right). The tree survived the strike, but its parasitic vines and some of its neighbors did not. Credit: Evan Gora

Gora and his collaborators estimated that large D. oleifera trees are struck by lightning an average of five times during their roughly 300-year lifespan. This species’ ability to survive those events while lianas and neighboring trees often died back should result in overall reduced competition for nutrients and sunlight, the team reasoned. Using models of tree growth and reproductive capacity, the researchers estimated that D. oleifera reaped substantial benefits from being struck by lightning—particularly in regard to fecundity, or the number of seeds produced over a tree’s lifetime. “The ability to survive lightning increases their fecundity by fourteenfold,” Gora said.
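
A back-of-envelope calculation using two figures from this article (roughly a quarter of struck trees die, and about five strikes per lifetime) gives a sense of the compounding survival advantage; the fourteenfold fecundity figure itself comes from the team's full growth and reproduction models:

```python
# Rough survival comparison from figures quoted in this article,
# not the authors' demographic model.
p_die_per_strike = 0.25  # ~25% of directly struck trees die
n_strikes = 5            # strikes over a ~300-year lifespan

p_typical = (1 - p_die_per_strike) ** n_strikes
print(f"typical tree survives all {n_strikes} strikes: {p_typical:.2f}")  # ~0.24
# Every sampled D. oleifera survived its strike, so its odds of reaching
# old age are roughly four times higher, before counting the fecundity
# gains from cleared lianas and neighbors.
```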

D. oleifera may be essentially evolving to be better lightning rods.

The researchers furthermore showed that D. oleifera tended to be both taller and wider at its crown than many other tropical tree species on Barro Colorado Island. Previous work by Gora and his colleagues has shown that taller trees are particularly at risk for getting struck by lightning. It’s therefore conceivable that D. oleifera are essentially evolving to be better lightning rods, said Gora. “Maybe lightning is shaping not just the dynamics of our forests but also the evolution.”

These results were published in New Phytologist.

Gora and his collaborators hypothesized that the physiology of D. oleifera must be conferring some protection against the massive amount of current imparted by a lightning strike. Previous work by Gora and other researchers has suggested that D. oleifera is more conductive than average; higher levels of conductivity mean less resistance and therefore less internal heating. “We think that how conductive a tree is is really important to whether it dies,” said Gora.
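
The underlying physics is ohmic heating: for a given strike current I, the power dissipated in the trunk is P = I²R, so a more conductive (lower resistance) trunk heats less. A toy comparison, with both resistances invented for illustration:

```python
# Joule heating during a strike scales with effective resistance: P = I^2 * R.
# The current is a typical order of magnitude for lightning; the resistances
# are assumptions, since real values vary enormously between trees.
I = 30_000            # peak stroke current, amperes
R_average_tree = 1e4  # assumed effective trunk resistance, ohms
R_conductive = 1e3    # assumed resistance of a highly conductive trunk, ohms

for label, R in [("average tree", R_average_tree), ("conductive tree", R_conductive)]:
    print(f"{label}: {I**2 * R:.1e} W dissipated at peak")
# Tenfold lower resistance means tenfold less heat deposited in the trunk.
```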

Continuing to ferret out other lightning-hardy tree species will be important for understanding how forests evolve over time. And that’s where more data will be useful, said Arellano. “I wouldn’t be surprised if we find many other species.”

—Katherine Kornei (@KatherineKornei), Science Writer

Citation: Kornei, K. (2025), Some tropical trees benefit from lightning strikes, Eos, 106, https://doi.org/10.1029/2025EO250181. Published on 12 May 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Proposed Experiment Could Clarify Origin of Martian Methane

Mon, 05/12/2025 - 13:08
Source: Journal of Geophysical Research: Planets

Over the past decade, the Curiosity rover has repeatedly detected methane on the surface of Mars. This gas is often produced by microbes, so it could herald the presence of life on the Red Planet. But skeptics have postulated that the gas detected by Curiosity could have a much more pedestrian origin. Viscardy et al. suggest the methane could be coming from inside the Curiosity rover itself rather than from the atmosphere of Mars. They propose an experiment that could distinguish a Martian origin from an instrumental one.

There’s ample reason to believe something is going awry, the researchers say. Each methane measurement that Curiosity’s spectrometer reports is actually the average of three individual measurements. Though those averages tend to suggest the presence of methane, the individual measurements are far more variable, bringing the results into question.
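
The concern is easy to illustrate: an average of three noisy readings can look like a detection even when the individual values scatter around zero. A toy example with invented numbers:

```python
import statistics

# Hypothetical triplet of methane retrievals, in ppbv; not actual rover data.
runs = [1.2, -0.4, 0.7]

mean = statistics.mean(runs)  # 0.50 ppbv: looks like a detection
sd = statistics.stdev(runs)   # ~0.82 ppbv: larger than the mean itself
print(f"mean {mean:.2f} ppbv, spread {sd:.2f} ppbv")
# A positive average with a spread this large is weak evidence on its own.
```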

Another issue concerns the instability of gas pressures inside the spectrometer. The two main compartments—the foreoptics chamber that holds the laser source and the cell that holds the Martian air sample—are designed to remain sealed from each other and from the outside environment. However, significant pressure variations observed in both compartments, even during individual measurement runs, suggest this isn’t the case. These pressure changes raise doubts about the hermetic sealing of the system and the integrity of the analyzed air samples.

It’s clear, however, that at least some of the methane traveled to Mars from Earth. Before the rover launched from Cape Canaveral in 2011, Florida air is known to have leaked into the foreoptics chamber. This contamination has persisted despite multiple gas evacuations, pointing to unidentified methane reservoirs or production mechanisms within the instrument. As a result, methane levels in this compartment are more than 1,000 times higher than those measured in the cell storing the Martian air sample for analysis. Even an “imperceptible” leak between the chambers could cause Curiosity to report erroneous methane levels, the researchers write.
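
That arithmetic is worth making concrete: if a fraction f of the gas in the sample cell were replaced by chamber gas at 1,000 times the concentration, the measured value would be inflated by a factor of roughly (1 - f) + 1,000f. A quick sketch:

```python
# Why an "imperceptible" leak matters: per the article, the foreoptics
# chamber holds roughly 1,000 times more methane than the sample cell,
# so mixing in even a tiny fraction f of chamber gas skews the reading.
ratio = 1000  # chamber-to-cell methane concentration ratio
for f in (0.0001, 0.001, 0.01):
    inflation = (1 - f) + f * ratio  # measured / true cell concentration
    print(f"leak fraction {f:.4f}: reading inflated {inflation:.1f}x")
```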

To put the issue to rest, the researchers suggest analyzing the methane content of the same sample of Martian air on two consecutive nights. A concentration of methane that is higher on the second night than on the first night would suggest that methane is leaking into the sample from elsewhere in the rover rather than coming from the planet itself. (Journal of Geophysical Research: Planets, https://doi.org/10.1029/2024JE008441, 2025)
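
In essence, the proposal treats the held sample as its own control. A minimal sketch of the logic, with all values invented:

```python
# Toy model of the proposed two-night test (illustrative values only).
# A sealed cell should read the same both nights; a slow leak from the
# methane-rich foreoptics chamber would make the reading grow with time.
c_sample = 0.4   # assumed true sample concentration, ppbv
k_leak = 0.05    # assumed leak-driven increase, ppbv per hour
hold_hours = 24  # time the same sample is held between measurements

night1 = c_sample
night2 = c_sample + k_leak * hold_hours
print(f"night 1: {night1:.2f} ppbv, night 2: {night2:.2f} ppbv")
# A clear overnight rise (here +1.20 ppbv) would implicate the instrument.
```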

—Saima May Sidik (@saimamay.bsky.social), Science Writer

Citation: Sidik, S. M. (2025), Proposed experiment could clarify origin of Martian methane, Eos, 106, https://doi.org/10.1029/2025EO250182. Published on 12 May 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Seismic analysis to understand the 13 February 2024 Çöpler Gold Mine Landslide, Erzincan, Türkiye 

Mon, 05/12/2025 - 06:44

The Landslide Blog is written by Dave Petley, who is widely recognized as a world leader in the study and management of landslides.

On 13 February 2024, the enormous Çöpler Gold Mine Landslide occurred in Erzincan, Türkiye (Turkey), killing nine miners. This was the first of two massive and immensely damaging heap leach mine failures last year (the other occurred in Canada). That such an event could occur came as something of a surprise to many people, so there is intense interest in understanding the circumstances of the failure.

I posted about the landslide at the time, and in several follow-up pieces since.

At the time, Capella Space captured this amazing radar image of the aftermath of the landslide (which is highlighted):

A radar image of the 13 February 2024 landslide at Çöpler Mine in Türkiye (Turkey), courtesy of Capella Space.

Analysis of this landslide is ongoing, and information is emerging on a regular basis. The latest is an open access paper (Büyükakpınar et al. 2025; the PDF is here) in The Seismic Record that combines analysis of the seismic data from the landslide with remote sensing data to try to understand the failure.

The use of seismic data for landslide analysis often causes confusion, with people interpreting it to mean that the landslide was triggered by an earthquake. This is not the case – the scale of this landslide meant that it generated a seismic signal that was detected up to 400 km from the source. This data can be analysed to provide information about the landslide itself.

Büyükakpınar et al. (2025) provide three really interesting insights into the Çöpler Gold Mine Landslide, confirming initial observations. The first is that there are two distinct seismic signals, 48 seconds apart. Thus, there were two landslide events. The first detached to the west, representing a collapse of a steep slope into the deep excavation. The second moved to the north-northeast, on a gentler slope. It is the second that was caught on video, and that is highlighted in the Capella Space image. In fact the first landslide can also be seen in the image – in particular the landslide deposit at the bottom of the deep excavation. The analysis also suggests that the combined landslide volume was about 1.2 million m3, of which the second landslide was about 1 million m3.
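
As a loose illustration of how two events separated by tens of seconds stand out in a seismic trace, here is a minimal short-term/long-term average (STA/LTA) picker on synthetic data; the study's actual waveform analysis is far more sophisticated:

```python
import numpy as np

# Synthetic trace with two transient "events" 48 s apart in background noise.
fs = 20                               # samples per second
t = np.arange(0.0, 120.0, 1.0 / fs)
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 0.1, t.size)  # background noise
for onset in (30.0, 78.0):            # two events, 48 s apart
    m = (t >= onset) & (t < onset + 10.0)
    trace[m] += np.exp(-(t[m] - onset)) * np.sin(40.0 * t[m])

def sta_lta(x, nsta, nlta):
    """Ratio of short-term to long-term average signal energy."""
    e = x**2
    sta = np.convolve(e, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(e, np.ones(nlta) / nlta, mode="same") + 1e-12
    return sta / lta

ratio = sta_lta(trace, nsta=fs, nlta=20 * fs)
cross = (ratio[1:] > 5.0) & (ratio[:-1] <= 5.0)  # upward threshold crossings
print("picked onsets (s):", np.round(t[1:][cross], 1))
```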

I would note that soon after the landslide, Tolga Gorum correctly identified that the image shows that the landslide moved in two directions.

Second, Büyükakpınar et al. (2025) have used an InSAR analysis to examine precursory deformation of the heap leach pad before the failure. This suggests that the mass was moving at up to 60 mm per year over the four years prior to the failure. The trend is quite linear, so it is not obvious that it would have provided an indication that failure was imminent, but this level of movement would be quite surprising in a well managed site.
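
Deformation rates like these are typically obtained by fitting a straight line to an InSAR displacement time series at each point; a minimal sketch with synthetic data standing in for the real measurements:

```python
import numpy as np

# Fit a deformation velocity to a synthetic InSAR displacement series.
rng = np.random.default_rng(2)
t_years = np.linspace(0.0, 4.0, 48)  # four years, roughly monthly epochs
disp_mm = 60.0 * t_years + rng.normal(0.0, 3.0, t_years.size)

velocity, intercept = np.polyfit(t_years, disp_mm, 1)  # least-squares line
print(f"estimated line-of-sight velocity: {velocity:.1f} mm/yr")
```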

Finally, and perhaps most importantly, Büyükakpınar et al. (2025) also show that the embankment below the cyanide leach pond (labelled in the pre-failure Google Earth imagery below) is now moving at up to 85 mm/year. As the authors put it, this “raises significant concerns about the potential for further instability in the area”.

Google Earth image showing the site of the 13 February 2024 Çöpler Gold Mine Landslide, Erzincan, Türkiye (Turkey). The embankment that is showing active deformation is highlighted.

One can only hope that this hazard, in a seismically active area, is being addressed and that lessons have been learnt.

Reference

Büyükakpınar, P., et al. 2025. Seismic, Field, and Remote Sensing Analysis of the 13 February 2024 Çöpler Gold Mine Landslide, Erzincan, Türkiye. The Seismic Record, 5(2), 165–174. doi: https://doi.org/10.1785/0320250007

Return to The Landslide Blog homepage

Trump Blocks Funding for EPA Science Division

Fri, 05/09/2025 - 19:56
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

The Trump administration has blocked funding for the EPA’s Office of Research and Development (ORD), the agency’s main science division.

An email sent 7 May and first reported by E&E News said that research laboratory funding had been stopped except for requests related to health and safety. Nature then obtained additional internal emails regarding the funding freeze, which anonymous EPA sources confirmed.

“Lab research will wind down over the next few weeks as we will no longer have the capability to acquire supplies and materials,” one of the emails said.

The freeze appears to disregard a Congressional spending agreement that guaranteed EPA funding at 2024 levels through September.

On 2 May, EPA administrator Lee Zeldin announced a “reorganization” within the EPA to ensure that its research “directly advances statutory obligations and mission-essential functions.” Zeldin assured members of the House Committee on Science, Space, and Technology that ORD would not experience significant changes during the reorganization, but this latest funding freeze seems to break that promise.

 

“We are unsure if these laboratory activities will continue post-reorganization,” the 7 May email stated. “Time and funding would be needed to reconstitute activities.”

The EPA told E&E News that the email was “factually inaccurate” and that ORD is not part of the planned reorganization.

But Jennifer Orme-Zavaleta, who served as principal deputy assistant administrator at ORD during Trump’s first presidency, said, “They have basically shut ORD down by cutting off the money.”

The 2 May reorganization announcement also included a deadline for the nearly 1,500 ORD staff to either apply for a new position within the EPA, retire, or resign. That deadline is 11:59 p.m. on 9 May. Fewer than 500 new jobs have been posted at the agency, and hundreds of EPA employees have already been fired.

—Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writer

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

NSF Plans to Abolish Divisions

Fri, 05/09/2025 - 13:12
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

The U.S. National Science Foundation (NSF) plans to abolish dozens of divisions across all eight of its directorates and reduce the number of programs within those divisions, according to Science.

A spokesperson for NSF told Science that the reason behind the decision was to “reduce the number of SES [senior executive service] positions in the agency and create new non-executive positions to better align with the needs of the agency.”

Directorates at NSF and the divisions within them oversee grantmaking related to particular fields of science. Division directors play a large role in grantmaking decisions and are usually responsible for giving final approval for NSF awards. Under the plan, current directors and deputy directors will lose their titles and may be reassigned.

NSF lists the following directorates and divisions:

  • Directorate for Biological Sciences
    • Biological Infrastructure
    • Environmental Biology
    • Emerging Frontiers
    • Integrative Organismal Systems
    • Molecular and Cellular Biosciences
  • Directorate for Computer and Information Science and Engineering
    • Office of Advanced Cyberinfrastructure
    • Computing and Communication Foundations
    • Computer and Network Systems
    • Information and Intelligent Systems
  • Directorate for Engineering 
    • Chemical, Bioengineering, Environmental and Transport Systems
    • Civil, Mechanical and Manufacturing Innovation
    • Electrical, Communications and Cyber Systems
    • Engineering Education and Centers
    • Emerging Frontiers and Multidisciplinary Activities
  • Directorate for Geosciences
    • Atmospheric and Geospace Sciences
    • Earth Sciences
    • Ocean Sciences
    • Research, Innovation, Synergies and Education
    • Office of Polar Programs
  • Directorate for Mathematical and Physical Sciences
    • Astronomical Sciences
    • Chemistry
    • Materials Research
    • Mathematical Sciences
    • Physics
    • Office of Strategic Initiatives
  • Directorate for Social, Behavioral, and Economic Sciences
    • Behavioral and Cognitive Sciences
    • National Center for Science and Engineering Statistics
    • Social and Economic Sciences
    • Multidisciplinary Activities
  • Directorate for STEM Education
    • Equity for Excellence in STEM
    • Graduate Education
    • Research on Learning in Formal and Informal Settings
    • Undergraduate Education
  • Directorate for Technology, Innovation and Partnerships
    • Regional Innovation and Economic Growth
    • Accelerating Technology Translation and Development
    • Preparing the U.S. Workforce

“The end of NSF and American science expertise as we know it is here,” wrote Paul Bierman, a geomorphologist at the University of Vermont, on Bluesky.

 

The decision to abolish its divisions may be part of a larger restructuring of NSF grantmaking, according to Science.

NSF was already facing drastic changes to its operations from Trump administration directives, including an order to stop awarding new and existing grants until further notice and an order cancelling hundreds of grants related to diversity, equity, and inclusion as well as disinformation and misinformation. Last month, NSF shuttered most of its outside advisory committees that gave input to operations at seven of the eight directorates.

On 8 May, members of the House Committee on Science, Space, and Technology sent a letter to Brian Stone, the acting director of the NSF, expressing distress at the changes at NSF over the past few weeks. 

“So, who is in charge here? How far does DOGE’s influence reach?” members of the committee wrote in the letter. “We seek answers about actions NSF has taken that potentially break the law and certainly break the trust of the research community.”

Layoff notices are expected to be sent to NSF staff members today, as well.

9 May update: On Friday, NSF closed its Division of Equity for Excellence in STEM (EES) and removed the division from its website. EES was responsible for programs that advanced access to science, technology, engineering, and mathematics (STEM) education. In its explanation for the closure, NSF noted that it is “mindful of its statutory program obligations and plans to take steps to ensure those continue.” Division grantees received notice from their program officers about the closure this morning.

An internal memo circulated Thursday and obtained by E&E News stated that NSF will begin a reduction in force (RIF) aimed at its Senior Executive Service. The RIF will also terminate roughly 300 temporary positions.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.
