Eos

Science News by AGU

Can Desalination Quench Agriculture’s Thirst?

Thu, 05/15/2025 - 12:42

This story was originally published by Knowable Magazine.

Ralph Loya was pretty sure he was going to lose the corn. His farm had been scorched by El Paso’s hottest-ever June and second-hottest August; the West Texas county saw 53 days soar over 100 degrees Fahrenheit in the summer of 2024. The region was also experiencing an ongoing drought, which meant that crops on Loya’s eight-plus acres of melons, okra, cucumbers and other produce had to be watered more often than normal.

Loya had been irrigating his corn with somewhat salty, or brackish, water pumped from his well, as much as the salt-sensitive crop could tolerate. It wasn’t enough, and the municipal water was expensive; he was using it in moderation and the corn ears were desiccating where they stood.

Ensuring the survival of agriculture under an increasingly erratic climate is approaching a crisis in the sere and sweltering Western and Southwestern United States, an area that supplies much of our beef and dairy, alfalfa, tree nuts and produce. Contending with too little water to support their plants and animals, farmers have tilled under crops, pulled out trees, fallowed fields and sold off herds. They’ve also used drip irrigation to inject smaller doses of water closer to a plant’s roots, and installed sensors in soil that tell more precisely when and how much to water.

“We see it as a nice solution that’s appropriate in some contexts, but for agriculture it’s hard to justify, frankly.”

In the last five years, researchers have begun to puzzle out how brackish water, pulled from underground aquifers, might be de-salted cheaply enough to offer farmers another water resilience tool. Loya’s property, which draws its slightly salty water from the Hueco Bolson aquifer, is about to become a pilot site to test how efficiently desalinated groundwater can be used to grow crops in otherwise water-scarce places.

Desalination renders salty water less so. It’s usually applied to water sucked from the ocean, generally in arid lands with few options; some Gulf, African and island countries rely heavily or entirely on desalinated seawater. Inland desalination happens away from coasts, with aquifer waters that are brackish—containing between 1,000 and 10,000 milligrams of salt per liter, versus around 35,000 milligrams per liter for seawater. Texas has more than three dozen centralized brackish groundwater desalination plants, California more than 20.

Such technology has long been considered too costly for farming. Some experts still think it’s a pipe dream. “We see it as a nice solution that’s appropriate in some contexts, but for agriculture it’s hard to justify, frankly,” says Brad Franklin, an agricultural and environmental economist at the Public Policy Institute of California. Desalting an acre-foot (almost 326,000 gallons) of brackish groundwater for crops now costs about $800, while farmers can pay a lot less—as little as $3 an acre-foot for some senior rights holders in some places—for fresh municipal water. As a result, desalination has largely been reserved for producing water fit for people to drink. In some instances, too, inland desalination can be environmentally risky, endangering nearby plants and animals and reducing stream flows.

Brackish (slightly salty) groundwater is found mostly in the Western United States. Credit: J.S. Stanton et al. / Brackish Groundwater in the United States: USGS professional paper 1833, 2017

But the US Bureau of Reclamation, along with a research operation called the National Alliance for Water Innovation (NAWI) that’s been granted $185 million from the Department of Energy, has recently invested in projects that could turn that paradigm on its head. Recognizing the urgent need for fresh water for farms—which in the US are mostly inland—combined with the ample if salty water beneath our feet, these entities have funded projects that could help advance small, decentralized desalination systems that can be placed right on farms where they’re needed. Loya’s is one of them.

“We think we have a clear line of sight for agricultural-quality water.”

US farms consume over 83 million acre-feet (more than 27 trillion gallons) of irrigation water every year, making agriculture the second most water-intensive industry in the country, after thermoelectric power. Not all aquifers are brackish, but most of those that are lie in the country’s West, and they’re usually more saline the deeper you dig. With fresh water everywhere in the world becoming saltier due to human activity, “we have to solve inland desal for ag…in order to grow as much food as we need,” says Susan Amrose, a research scientist at MIT who studies inland desalination in the Middle East and North Africa.

That means lowering energy and other operational costs; making systems simple for farmers to run; and figuring out how to slash residual brine, which requires disposal and is considered the process’s “Achilles’ heel,” according to one researcher.

The last half-decade of scientific tinkering is now yielding tangible results, says Peter Fiske, NAWI’s executive director. “We think we have a clear line of sight for agricultural-quality water.”

Swallowing the High Cost

Fiske believes farm-based mini-plants can be cost-effective for producing high-value crops like broccoli, berries and nuts, some of which need a lot of irrigation. That $800 per acre-foot has been achieved by cutting energy use, reducing brine and revolutionizing certain parts and materials. It’s still expensive but arguably worth it for a farmer growing almonds or pistachios in California—as opposed to farmers growing lesser-value commodity crops like wheat and soybeans, for whom desalination will likely never prove affordable. As a nut farmer, “I would sign up to 800 bucks per acre-foot of water till the cows come home,” Fiske says.

Loya’s pilot is being built with Bureau of Reclamation funding and will use a common process called reverse osmosis. Pressure pushes salty water through a semi-permeable membrane; fresh water comes out the other side, leaving salts behind as concentrated brine. Loya figures he can make good money using desalinated water to grow not just fussy corn, but even fussier grapes he might be able to sell at a premium to local wineries.
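As a rough illustration of the physics, the van ’t Hoff relation approximates the osmotic pressure that reverse osmosis must overcome. The sketch below treats all dissolved salt as NaCl, a simplifying assumption, so the numbers are indicative only.

```python
# Minimal sketch: approximate the minimum pressure reverse osmosis
# must exceed, via the van 't Hoff relation (pi = i * c * R * T).
# Treats all dissolved salt as NaCl (a simplifying assumption).

R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # water temperature, K (about 25 C)
I_NACL = 2         # van 't Hoff factor for NaCl (two ions)
M_NACL = 58.44     # molar mass of NaCl, g/mol

def osmotic_pressure_bar(salinity_mg_per_l: float) -> float:
    """Approximate osmotic pressure (bar) of an NaCl solution."""
    molarity = salinity_mg_per_l / 1000 / M_NACL   # mol/L
    pi_pa = I_NACL * (molarity * 1000) * R * T     # mol/m^3 -> Pa
    return pi_pa / 1e5                             # Pa -> bar

# Brackish well water vs. seawater: the pump must beat these numbers.
print(f"brackish (3,000 mg/L): {osmotic_pressure_bar(3000):.1f} bar")
print(f"seawater (35,000 mg/L): {osmotic_pressure_bar(35000):.1f} bar")
```

Brackish water at a tenth of seawater’s salinity needs roughly a tenth of the pressure, one reason inland desalination starts from a cheaper baseline.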

Such a tiny system shares some of the problems of its large-scale cousins—chiefly, brine disposal. El Paso, for example, boasts the biggest inland desalination plant in the world, which makes 27.5 million gallons of fresh drinking water a day. There, every gallon of brackish water gets split into two streams: fresh water and residual brine, at a ratio of 83 percent to 17 percent. Since there’s no ocean to dump brine into, as with seawater desalination, this plant injects it into deep, porous rock formations—a process too pricey and complicated for farmers.

But what if desalination could create 90 or 95 percent fresh water and 5 to 10 percent brine? What if you could get 100 percent fresh water, with just a bag of dry salts leftover? Handling those solids is a lot safer and easier, “because super-salty water brine is really corrosive…so you have to truck it around in stainless steel trucks,” Fiske says.

Finally, what if those salts could be broken into components—lithium, essential for batteries; magnesium, used to create alloys; gypsum, turned into drywall; as well as gold, platinum, and rare-earth elements that can be sold to manufacturers? Already, the El Paso plant participates in “mining” gypsum and hydrochloric acid for industrial customers.
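The arithmetic behind those recovery targets is simple mass balance; here is a minimal sketch using the El Paso figures quoted above (only the fresh-water output and recovery fractions are taken from the article):

```python
# Minimal sketch: how the recovery fraction drives brine volume.
# From the article: El Paso's plant makes 27.5 million gallons of
# fresh water a day at a roughly 83/17 fresh-to-brine split.

def brine_per_day(fresh_gal: float, recovery: float) -> float:
    """Brine produced (gal/day) for a given fresh-water output."""
    feed = fresh_gal / recovery    # total brackish water treated
    return feed - fresh_gal        # the remainder leaves as brine

FRESH = 27.5e6  # gal/day of fresh water
for recovery in (0.83, 0.90, 0.95):
    brine = brine_per_day(FRESH, recovery) / 1e6
    print(f"{recovery:.0%} recovery -> {brine:.1f} Mgal/day of brine")
```

Pushing recovery from 83 to 95 percent cuts the brine stream by roughly three quarters for the same fresh-water output.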

Loya’s brine will be piped into an evaporation pond. Eventually, he’ll have to pay to landfill the dried-out solids, says Quantum Wei, founder and CEO of Harmony Desalting, which is building Loya’s plant. There are other expenses: drilling a well (Loya, fortuitously, already has one to serve the project); building the physical plant; and supplying the electricity to pump water up day after day. These are bitter financial pills for a farmer. “We’re not getting rich; by no means,” Loya says.

Rows of reverse osmosis membranes at the Kay Bailey Hutchison Desalination Plant in El Paso. Credit: Ada Cowan

More cost comes from the desalination itself. Reverse osmosis demands a lot of energy, and the saltier the water, the more energy it takes. Additionally, the membranes that catch salt are gossamer-thin, and all that pressure destroys them; they also get gunked up and need to be treated with chemicals.

Reverse osmosis presents another problem for farmers. It doesn’t just remove salt ions from water but the ions of beneficial minerals, too, such as calcium, magnesium and sulfate. According to Amrose, this means farmers have to add fertilizer or mix in pretreated water to replace essential ions that the process took out.

To circumvent such challenges, one NAWI-funded team is experimenting with ultra-high-pressure membranes, fashioned out of stiffer plastic, that can withstand a much harder push. The results so far look “quite encouraging,” Fiske says. Another is looking into a system in which a chemical solvent dropped into water isolates the salt without a membrane, like the polymer inside a diaper absorbs urine. The solvent, in this case the common food-processing compound dimethyl ether, would be used over and over to avoid potentially toxic waste. It has proved cheap enough to be considered for agricultural use.

Amrose is testing a system that uses electrodialysis instead of reverse osmosis. This sends a steady surge of voltage across water to pull salt ions through an alternating stack of positively charged and negatively charged membranes. Explains Amrose, “You get the negative ions going toward their respective electrode until they can’t pass through the membranes and get stuck,” and the same happens with the positive ions. The process gets much higher fresh water recovery in small systems than reverse osmosis, and is twice as energy efficient at lower salinities. The membranes last longer, too—10 years versus three to five years, Amrose says—and can allow essential minerals to pass through.
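As a back-of-envelope illustration, electrodialysis energy scales with the amount of salt moved: each mole of salt crossing a membrane pair costs about one faraday of charge. The stack voltage and current efficiency below are assumed, illustrative values, not Amrose’s hardware.

```python
# Minimal sketch of idealized electrodialysis energy accounting:
# energy = charge moved (Faraday constant x moles of salt) times
# the voltage across one cell pair, divided by current efficiency.

F = 96485.0        # Faraday constant, C/mol
M_NACL = 58.44     # molar mass of NaCl, g/mol

def ed_energy_kwh_per_m3(salt_removed_mg_l: float,
                         volts_per_cell_pair: float = 0.5,
                         current_efficiency: float = 0.9) -> float:
    """Ideal electrodialysis energy per cubic meter of product."""
    mol_per_m3 = salt_removed_mg_l / M_NACL      # mg/L equals g/m^3
    joules = F * mol_per_m3 * volts_per_cell_pair / current_efficiency
    return joules / 3.6e6                        # J -> kWh

# Energy rises roughly linearly with the salt removed, one reason
# electrodialysis shines at lower salinities but not for seawater.
for salinity in (1000, 3000, 10000):
    print(f"{salinity:>6} mg/L removed: {ed_energy_kwh_per_m3(salinity):.2f} kWh/m^3")
```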

Data-Based Design

At Loya’s farm, Wei paces the property on a sweltering summer morning with a local engineering company he’s tapped to design the brine storage pond. Loya is anxious that the pond be as small as possible to keep arable land in production; Wei is more concerned that it be big and deep enough. To get the size right, he’ll look at average weather conditions since 1954 as well as worst-case data from the last 25 years pertaining to monthly evaporation and rainfall rates. He’ll also divide the space into two sections so one can be cleaned while the other is in use. Loya’s pond will likely be one-tenth of an acre, dug three to six feet deep.
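The heart of that sizing exercise is a simple water balance: over a year, the pond’s net evaporation (evaporation minus rainfall) must keep pace with brine inflow, even in a wet worst-case year. All numbers in this sketch are hypothetical, not the design values Wei’s engineers will use.

```python
# Minimal sketch of evaporation-pond sizing as an annual water
# balance. Inputs are hypothetical; a real design would use the
# monthly worst-case records described in the article.

GAL_PER_ACRE_INCH = 27_154   # gallons in one acre-inch of water

def pond_acres(brine_gal_per_day: float,
               evap_in_per_yr: float,
               rain_in_per_yr: float) -> float:
    """Pond area (acres) whose net evaporation matches brine inflow."""
    net_evap_in = evap_in_per_yr - rain_in_per_yr        # inches/year
    inflow_acre_in = brine_gal_per_day * 365 / GAL_PER_ACRE_INCH
    return inflow_acre_in / net_evap_in

# A wet year shrinks net evaporation, so the pond must grow.
print(f"typical year: {pond_acres(300, 90, 9):.2f} acres")
print(f"wet year:     {pond_acres(300, 70, 18):.2f} acres")
```

Sizing to the wet-year case, plus a second cell for cleaning, is what pushes the footprint toward the one-tenth-acre figure mentioned above.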

(Left to right) West Texas farmer Ralph Loya, Quantum Wei of Harmony Desalting, and engineer Johanes Makahaube discuss where a desalination plant and brine pond might be placed on Loya’s farm. Credit: Ada Cowan

“Our goal is to make it as painless as possible.”

The desalination plant will pair reverse osmosis membranes with a “batch” process, pushing water through multiple times instead of once and gradually amping up the pressure. Regular reverse osmosis is energy-intensive because it constantly applies the highest pressures, Wei says, but Harmony’s process saves energy by using lower pressures to start with. A backwash between cycles prevents scaling by dissolving mineral crystals and washing them away. “You really get the benefit of the farmer not having to deal with dosing chemicals or replacing membranes,” Wei says. “Our goal is to make it as painless as possible.”
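An idealized sketch shows why ramping the pressure saves energy: a single-pass system must hold the pressure needed at the end of concentration for the entire run, while a batch system tracks the brine’s rising osmotic pressure. The calculation assumes perfect salt rejection and lossless pumps, so it illustrates the principle rather than Harmony’s actual design.

```python
# Minimal sketch comparing ideal specific energy (kWh per m^3 of
# permeate) for constant-pressure vs. batch reverse osmosis.
# Assumes perfect salt rejection and no pump or friction losses.

import math

KWH_PER_M3_PER_BAR = 1e5 / 3.6e6   # 1 bar of pressure ~ 0.028 kWh/m^3

def continuous_sec(pi_feed_bar: float, recovery: float) -> float:
    """Constant pressure: hold the end-state pressure the whole run."""
    p_final = pi_feed_bar / (1 - recovery)   # final brine osmotic pressure
    return p_final / recovery * KWH_PER_M3_PER_BAR

def batch_sec(pi_feed_bar: float, recovery: float) -> float:
    """Batch: pressure tracks osmotic pressure as brine concentrates."""
    avg_p = pi_feed_bar * math.log(1 / (1 - recovery)) / recovery
    return avg_p * KWH_PER_M3_PER_BAR

pi_f, r = 2.0, 0.90   # ~2 bar brackish feed, 90% recovery (assumed)
print(f"constant pressure: {continuous_sec(pi_f, r):.2f} kWh/m^3")
print(f"batch:             {batch_sec(pi_f, r):.2f} kWh/m^3")
```

At high recovery the gap is large, roughly fourfold in this idealized case, because the constant-pressure system pays the final, highest pressure on every gallon.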

Another Harmony innovation concentrates leftover brine by running it through a nanofiltration membrane in their batch system; such membranes are usually used to pretreat water to cut back on scaling or to recover minerals, but Wei believes his system is the first to combine them with batch reverse osmosis. “That’s what’s really going to slash brine volumes,” he says. The whole system will be hooked up to solar panels, keeping Loya’s energy off-grid and essentially free. If all goes to plan, the system will be operational by early 2025 and produce seven gallons of fresh water a minute during the strongest sun of the day, with a goal of 90 to 95 percent fresh water recovery. Any water not immediately used for irrigation will be stored in a tank.

Spreading Out the Research

Ninety-eight miles north of Loya’s farm, along a dead flat and endlessly beige expanse of road that skirts the White Sands Missile Range, more desalination projects burble away at the Brackish Groundwater National Desalination Research Facility in Alamogordo, New Mexico. The facility, run by the Bureau of Reclamation, offers scientists a lab and four wells of differing salinities to fiddle with.

On some parched acreage at the foot of the Sacramento Mountains, a longstanding farming pilot project bakes in relentless sunlight. After some preemptive words about the three brine ponds on the property—“They have an interesting smell, in between zoo and ocean”—facility manager Malynda Cappelle drives a golf cart full of visitors past solar arrays and water tanks to a fenced-in parcel of dust and plants. Here, since 2019, a team from the University of North Texas, New Mexico State University and Colorado State University has tested sunflowers, fava beans and, currently, 16 plots of pinto beans. Some plots are bare dirt; others are topped with compost that boosts nutrients, keeps soil moist and provides a salt barrier. Some plots are drip-irrigated with brackish water straight from a well; some get a desalinated/brackish water mix.

Even eyeballed from a distance, the plants in the freshest-water plots look large and healthy. But those with compost are almost as vigorous, even when irrigated with brackish water. This could have significant implications for cash-conscious farmers. “Maybe we do a lesser level of desalination, more blending, and this will reduce the cost,” says Cappelle.

Pei Xu has been a co-investigator on this project since its start. She’s also the progenitor of a NAWI-funded pilot at the El Paso desalination plant. Later in the day, in a high-ceilinged space next to the plant’s treatment room, she shows off its consequential bits. Like Amrose’s system, hers uses electrodialysis. In this instance, though, Xu is aiming to squeeze a bit of additional fresh—at least freshish—water from the plant’s leftover brine. With suitably low levels of salinity, the plant could pipe it to farmers through the county’s existing canal system, turning a waste product into a valuable resource.

“I think our role now and in the future is as water stewards—to work with each farm to understand their situation and then to recommend their best path forward.”

Xu’s pinto bean and El Paso work, and Amrose’s in the Middle East, are all relevant to Harmony’s pilot and future projects. “Ideally we can improve desalination to the point where it’s an option which is seriously considered,” Wei says. “But more importantly, I think our role now and in the future is as water stewards—to work with each farm to understand their situation and then to recommend their best path forward…whether or not desalting is involved.”

Indeed, as water scarcity becomes ever more acute, desalination advances will help agriculture only so much; even researchers who’ve devoted years to solving its challenges say it’s no panacea. “What we’re trying to do is deliver as much water as cheaply as possible, but that doesn’t really encourage smart water use,” says NAWI’s Fiske. “In some cases, it encourages even the reverse. Why are we growing alfalfa in the middle of the desert?”

Franklin, of the California policy institute, highlights another extreme: Twenty-one of the state’s groundwater basins are already critically depleted, some due to agricultural overdrafting. Pumping brackish aquifers for desalination could aggravate environmental risks.

There is an array of measures, researchers say, that farmers themselves must take in order to survive, with rainwater capture and the fixing of leaky infrastructure at the top of the list. “Desalination is not the best, only or first solution,” Wei says. But he believes that when used wisely in tandem with other smart partial fixes, it could prevent some of the worst water-related catastrophes for our food system.

—Lela Nargi, Knowable Magazine

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all.

Old Forests in a New Climate

Thu, 05/15/2025 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: AGU Advances

The shading and evapotranspiration provided by forest vegetation buffer the understory climate, making it cooler than the surrounding non-forest. But does that buffering also prevent the forest from warming as much as its surroundings as the climate changes?

Using a 45-year record in the H.J. Andrews Forest, Oregon, USA, Jones et al. [2025] compare changes in climate along a 1,000-meter elevation gradient with changes at nearby non-forested weather stations. The understory air temperature at every elevation within the forest increased at rates similar to, and in some cases greater than, those measured at meteorological stations throughout Oregon and Washington, indicating that the forest is not decoupled from, or protected against, the effects of climate change.

Furthermore, the increase in summer maximum air temperature has been as large as 5 degrees Celsius throughout the forest. For some summer months, the temperature at the top of the elevation gradient is now about the same as it was at the lowest elevation 45 years ago. These findings are important because they indicate that, while forests confer cooler environments compared to non-forest, they are not protected from climate change.
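The study’s core comparison boils down to fitting linear trends to long temperature records and comparing slopes; a minimal sketch with invented stand-in data:

```python
# Minimal sketch of the trend comparison: fit a least-squares line
# to each record and compare warming rates. The series below are
# invented stand-ins, not the Andrews Forest or statewide data.

import numpy as np

years = np.arange(1979, 2024)

def trend_per_decade(temps: np.ndarray) -> float:
    """Least-squares warming rate, degrees C per decade."""
    slope, _ = np.polyfit(years, temps, 1)
    return slope * 10

rng = np.random.default_rng(0)
forest = 18 + 0.05 * (years - 1979) + rng.normal(0, 0.5, years.size)
statewide = 20 + 0.045 * (years - 1979) + rng.normal(0, 0.5, years.size)

# A cooler series with a similar slope means the canopy buffers
# temperature but does not block the warming trend.
print(f"forest:    {trend_per_decade(forest):.2f} C/decade")
print(f"statewide: {trend_per_decade(statewide):.2f} C/decade")
```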

Comparison of maximum air temperature in July from 1979 to 2023 in the Andrews Forest at 1,310 meters elevation (site RS04) and at 683 meters (site RS20), along with the statewide average air temperature for Oregon. The high-elevation site is consistently cooler than the low-elevation site, and both are cooler than the statewide average of Oregon’s meteorological stations, which include non-forest sites. Hence, the forest vegetation does buffer (cool) the air temperature, but the slopes of the temperature increase over time are similar, with the forest perhaps warming a bit faster than the statewide mean, indicating that the forests are not decoupled from the effects of climate change. Credit: Jones et al. [2025], Figure 4a

Citation: Jones, J. A., Daly, C., Schulze, M., & Still, C. J. (2025). Microclimate refugia are transient in stable old forests, Pacific Northwest, USA. AGU Advances, 6, e2024AV001492. https://doi.org/10.1029/2024AV001492

—Eric Davidson, Editor, AGU Advances

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Geological complexity as a way to understand the distribution of landslides

Thu, 05/15/2025 - 06:37

The Landslide Blog is written by Dave Petley, who is widely recognized as a world leader in the study and management of landslides.

Over the course of my career, I have read many papers (and indeed, written a few) that have tried to explain the distribution of landslides based upon combinations of factors that we consider might be important in their causation (for example, slope angle and lithology). There is utility in this type of approach, and it has informed planning guidelines in some countries, for example. However, it also has severe limitations and, even with the advent of artificial intelligence, there have been few major advances in this area for a while.

However, there is a very interesting and thought-provoking paper (Zhang et al. 2025) in the Bulletin of Engineering Geology and the Environment that might stimulate considerable interest. One reason for highlighting it here is that it might drop below the radar – this is not a well-read journal in my experience, and the paper is behind a paywall. That would be a shame, but the link in this post should allow you to read the paper.

The authors argue that we tend to treat geological factors in a rather over-simplified way in susceptibility analyses:-

“The types, triggers, and spatial distribution of landslides are closely related to the spatial complexity of geological conditions, which are indispensable factors in landslide susceptibility assessment. However, geological conditions often consider only a single index, leading to under-utilisation of geological information in assessing landslide hazards.”

Instead, they propose the use of an index of “geological complexity”. This index combines four major geological components:

  • Structural complexity – capturing dip direction, dip angle, slope and aspect;
  • Lithologic complexity – essentially using a geological map to capture the number of lithologic types per unit area;
  • Tectonic complexity – representing the density of mapped faults;
  • Seismicity – capturing the distribution of the probability of peak ground accelerations.
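In code, combining the components into a single score might look like the minimal sketch below; the weights and cell values are hypothetical, whereas Zhang et al. (2025) derive their weights analytically, as described next.

```python
# Minimal sketch: a geological complexity index as a weighted sum
# of normalized component scores. Weights here are hypothetical;
# Zhang et al. (2025) derive theirs with an analytical approach.

WEIGHTS = {
    "structural": 0.35,   # dip direction, dip angle, slope, aspect
    "lithologic": 0.25,   # lithologic types per unit area
    "tectonic":   0.25,   # density of mapped faults
    "seismicity": 0.15,   # probability of peak ground acceleration
}

def geological_complexity(scores: dict[str, float]) -> float:
    """Weighted sum of component scores, each normalized to 0-1."""
    return sum(WEIGHTS[name] * value for name, value in scores.items())

# One hypothetical map cell: complex structure, moderate faulting.
cell = {"structural": 0.8, "lithologic": 0.4,
        "tectonic": 0.6, "seismicity": 0.5}
print(f"complexity index: {geological_complexity(cell):.2f}")
```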

Zhang et al. (2025) use an analytical approach to weight each of these factors to produce an index of geological complexity across the landscape. In this piece of work, they then compare the results with the distribution of mapped landslides in a study area in the Eastern Himalayan Syntaxis in Tibet (centred on about 29.5°N, 95.25°E). This is the broad area studied:-

Google Earth map of the area studied by Zhang et al. (2025) to examine the role of geological complexity in landslide distribution.

Now this is a fascinating study area – the Google Earth image below shows a small part of it – note the many landslides:-

Google Earth image of a part of the area studied by Zhang et al. (2025) to examine the role of geological complexity in landslide distribution.

Zhang et al. (2025) are able to show that, for this area at least, the spatial distribution of their index of geological complexity correlates well with the mapped distribution of landslides (there are 366 mapped landslides in the 16,606 km² of the study area).

The authors are clear that this is not the final word on this approach. There is little doubt that this part of Tibet is a highly dynamic area in terms of both climate and tectonics, which probably favours structurally controlled landslides. To what degree would this approach work in a different setting? In addition, acquiring reliable data that represents the components could be a real challenge (e.g. structural data and reliable estimates of probability of peak ground accelerations), and of course the relative weighting of the different components of the index is an open question.

But, it introduces a fresh and really interesting approach that is worth exploring more widely. Zhang et al. (2025) note that there is the potential to combine this index with other indices that measure factors in landslide causation (e.g. topography, climate and human activity) to produce an enhanced susceptibility assessment.

And finally, of course, this approach is providing insights into the ways in which different geological factors aggregate at a landscape scale to generate landslides. That feels like a fundamental insight that is also worth developing.

Thus, this work potentially forms the basis of a range of new studies, which is tremendously exciting.

Reference

Zhang, Y., et al. 2025. Geological Complexity: a novel index for measuring the relationship between landslide occurrences and geological conditions. Bulletin of Engineering Geology and the Environment, 84, 301. https://doi.org/10.1007/s10064-025-04333-9.

Text © 2023. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

EPA to Rescind Rules on Four Forever Chemicals

Wed, 05/14/2025 - 13:51
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

The EPA plans to reconsider drinking water limits for four different PFAS chemicals and extend deadlines for public water systems to comply, according to The Washington Post.

PFAS, or per- and polyfluoroalkyl substances, are a group of chemicals that are widely used for their water- and stain-resistant properties. Exposure to PFAS is linked to higher risks of certain cancers, reproductive health issues, developmental delays and immune system problems. The so-called “forever chemicals” are ubiquitous in the environment and widely contaminate drinking water.

A rule finalized last year under President Joe Biden set drinking water limits for five common PFAS chemicals: PFOA, PFOS, PFHxS, PFNA, and GenX. Limits for PFOA and PFOS were set at 4 parts per trillion, and limits for PFHxS, PFNA, and GenX were set at 10 parts per trillion. The rule also set limits for mixtures of these chemicals and a sixth, PFBS.
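As we read the rule, the mixture limit works as a hazard index: each chemical’s measured level is divided by a health-based water concentration, and the mixture exceeds the limit when the sum tops 1. The health-based values in this sketch reflect our reading of the rule and should be confirmed against EPA’s published tables.

```python
# Minimal sketch of a hazard-index mixture limit. The health-based
# concentrations (ppt) below are our reading of the 2024 rule;
# treat them as assumptions to verify, not authoritative values.

HEALTH_BASED_PPT = {"PFHxS": 10.0, "PFNA": 10.0,
                    "GenX": 10.0, "PFBS": 2000.0}

def hazard_index(measured_ppt: dict[str, float]) -> float:
    """Sum of measured level / health-based level for each chemical."""
    return sum(level / HEALTH_BASED_PPT[chem]
               for chem, level in measured_ppt.items())

# Each chemical sits under its own threshold, yet the mix fails.
sample = {"PFHxS": 6.0, "PFNA": 5.0, "GenX": 0.0, "PFBS": 100.0}
hi = hazard_index(sample)
print(f"hazard index = {hi:.2f} ({'exceeds' if hi > 1 else 'within'} limit)")
```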

Documents reviewed by The Washington Post show that the EPA plans to rescind and reconsider the limits for PFHxS, PFNA, GenX, and PFBS. Though the documents did not indicate a plan to reconsider limits for PFOA and PFOS, the agency does plan to extend the compliance deadlines for PFOA and PFOS limits from 2029 to 2031.

In the documents, Lee Zeldin, the agency’s administrator, said the plan will “protect Americans from PFOA and PFOS in their drinking water” and provide “common-sense flexibility in the form of additional time for compliance.”

 

PFOA is a known carcinogen and PFOS is classified as a possible carcinogen by the National Cancer Institute.

The EPA plan comes after multiple lawsuits against the EPA in which trade associations representing water utilities challenged the science behind Biden’s drinking water standard. 

Experts expressed concern that rescinding and reconsidering limits for the four chemicals may not be legal because the Safe Drinking Water Act requires each revision to EPA drinking water standards to be at least as strict as the former regulation. 

“The law is very clear that the EPA can’t repeal or weaken the drinking water standard. Any effort to do so will clearly violate what Congress has required for decades,” Erik Olson, the senior strategic director for health at the Natural Resources Defense Council, an advocacy group, told The Washington Post.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Resilient Solutions Involve Input and Data from the Community

Wed, 05/14/2025 - 13:36
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Community Science Exchange

Climate Safe Neighborhoods (CSN), a national effort by Groundwork USA, is a program that supports local communities in understanding their climate risk and providing input about vulnerabilities and solutions. Working with students, local universities, and organizations, the CSN program, first started in Cincinnati, was extended to northern Kentucky.

A GIS-based dashboard was created to provide communities with access to data related to climate change and other social issues, from health to demographics, together in one place. A climate vulnerability model (part of the dashboard) helped identify the communities most in need in Kentucky; these neighborhoods were the focus of community workshops where residents learned about climate impacts and collaborated on potential solutions. Community partners helped with planning and running the workshops, which included opportunities for residents to provide feedback through mapping activities. That feedback was added to the dashboard and later used to support climate solutions, such as climate advisory groups and tree plantings.

In their project report, Robles et al. [2025] outline the process and outcomes of the program which can serve as inspiration to others looking to support and collaborate with communities in becoming more resilient to climate impacts.

Citation: Robles, Z., et al. (2025), Climate Safe Neighborhoods: A community collaboration for a more climate-resilient future, Community Science Exchange, https://doi.org/10.1029/2024CSE000101. Published 7 February 2025.  

—Kathryn Semmens, Deputy Editor, Community Science Exchange

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Have We Finally Found the Source of the “Sargassum Surge”?

Wed, 05/14/2025 - 13:11

Since 2011, massive mats of golden-brown seaweed—pelagic Sargassum—have repeatedly swamped the shores of the Caribbean, West Africa, and parts of Central and South America. These sprawling blooms have suffocated coral reefs, crippled tourism, and disrupted coastal life.

What caused this sudden explosion of seaweed in regions that had rarely experienced it before?

A modeling study published earlier this year in Communications Earth & Environment offers one possible explanation. It links the start of this phenomenon to the 2009–2010 North Atlantic Oscillation (NAO)—a rare climatic event involving stronger-than-usual Westerlies and altered ocean currents. According to the study, NAO conditions transported Sargassum from its historic home in the Sargasso Sea in the western North Atlantic into tropical waters farther south, where nutrient-rich upwellings and warm temperatures triggered the algae’s explosive growth.

Migrating Macroalgae

Julien Jouanno, senior scientist at the Institut de Recherche pour le Développement and head of the Dynamics of Tropical Oceans team at Laboratoire d’Etudes en Géophysique et Océanographie Spatiales in Toulouse, France, led the modeling work behind the study.

“Our simulations, which combine satellite observations with a coupled ocean-biogeochemical model, suggest that ocean mixing—not river discharge—is the main nutrient source fueling this proliferation,” Jouanno explained. The model incorporates both ocean circulation and biological processes like growth and decay, enabling the team to test various scenarios involving inputs such as ocean fertilization by rivers (such as the Amazon) or influxes of nutrients from the atmosphere (such as dust from the Sahara).

“Turning off river nutrients in the model only reduced biomass by around 15%,” said Jouanno. “But eliminating deep-ocean mixing caused the blooms to collapse completely. That’s a clear indicator of what’s actually driving the system.”

“When we exclude the ocean current anomaly linked to the NAO, Sargassum stays mostly confined to the Sargasso Sea,” Jouanno said. “But once it’s included, we start to see the early formation of what is now known as the Great Atlantic Sargassum Belt.”

The Great Atlantic Sargassum Belt, first identified in 2011, is the largest macroalgae bloom in the world. The massive blooms sit below the Sargasso Sea and currents of the North Atlantic Ocean. Credit: López Miranda et al., 2021, https://doi.org/10.3389/fmars.2021.768470, CC BY 4.0

But not all scientists are convinced by the study. Some argue the truth is more complex, and more grounded in historic ecological patterns.

Was the Seaweed Already There?

Amy N. S. Siuda, an associate professor of marine science at Eckerd College in Florida and an expert in Sargassum ecology, critiqued the study’s core assumptions. “The idea that the 2011 bloom was seeded from the Sargasso Sea doesn’t hold up under scrutiny,” she said.

The dominant form of Sargassum present in the early blooms in the Caribbean and elsewhere (Sargassum natans var. wingei), she explained, “hasn’t been documented in the north Sargasso Sea at all, and only scarcely in the south.”

Historical records suggest, however, that the variety had long existed in the Caribbean and tropical Atlantic—just at such low concentrations that it was easily missed, Siuda said. She also cited population genetics research showing little physical mixing between S. natans var. wingei and other morphotypes through at least 2018.

“We were simply not looking closely enough,” she noted. “Early blooms on Caribbean beaches were misidentified. What we thought was S. fluitans var. fluitans, another common morphotype, turned out to be something else entirely.”

A Sargassum bloom can be difficult to model, Siuda explained. Models “can’t distinguish whether Sargassum is blooming or simply aggregating due to currents. Field data, shipboard observations, and genetic studies tell a much more complex story,” she said.

Donald Johnson, a senior research scientist at the University of Southern Mississippi, offered a different perspective. While he agreed that Sargassum has long existed in the tropical Atlantic, he believes the NAO may have also played a catalytic role in the blooms—just not in the way the original study claims.

“Holopelagic Sargassum has always been in the region—from the Gulf of Guinea to Dakar—as evidenced by earlier observations stretching back to Gabon,” Johnson explained. “What changed in 2010 was the strength of the Westerlies. Drifting buoys without drogues showed unusual eastward movement, possibly carrying Sargassum from the North Atlantic toward West Africa.”

He offered a crucial caveat, however: “There was never any clear satellite or coastal evidence of a massive influx [of Sargassum]. If the NAO did contribute, it may have done so gradually—adding to existing Sargassum in the region and pushing it over the threshold into a full-scale bloom.”

In this view, the 2011 event was less about transport and more about amplification, described as an environmental tipping point triggered by a convergence of factors already present in the system.

More Than Just Climate

Both Siuda and Johnson agreed that multiple nutrient sources in the tropical Atlantic are likely playing a major role in the ongoing blooms:

  • Riverine discharge from the Amazon, Congo, and Niger basins
  • Saharan dust, rich in iron and phosphates, blown westward each year
  • Seasonal upwelling and wind-driven mixing, particularly off West Africa and along the equator.

“Modeling surface currents in the tropical Atlantic is extremely difficult.”

And, Johnson pointed out, persistent gaps in satellite coverage—due to cloud cover and the South Atlantic Anomaly—mean we’re still missing key pieces of the puzzle. “Modeling surface currents in the tropical Atlantic is extremely difficult,” he said. “First-mode long waves and incomplete data make it impossible to fully visualize how Sargassum is moving and growing.”

Ultimately, both researchers said that understanding these golden tides requires reconciling models with fieldwork, as well as recognizing the distinct morphotypes of Sargassum. “Each variety reacts differently to environmental conditions,” Siuda explained. “If we don’t account for that, we risk oversimplifying the entire phenomenon.”

“There’s a danger in leaning too heavily on satellite models,” Johnson cautioned. “They measure aggregation, not growth. Without field validation, assumptions about bloom dynamics could mislead management efforts.”

Jouanno, too, acknowledged the study’s limitations. The model does not differentiate between Sargassum morphotypes and struggles with interannual variability, particularly in peak bloom years like 2016 and 2019. “This was likely a regime shift—possibly amplified by climate change—and while we can simulate broad patterns, there’s still much we don’t know about how each bloom evolves year to year.”

“We’re still learning,” Jouanno said. “Our understanding of vertical mixing, surface stratification, and nutrient cycling in the tropics is incomplete—and the biology of different Sargassum types is another critical gap.”

Ultimately, Jouanno said, “This is climate-driven. The NAO was a catalyst, and ongoing warming may be sustaining it. But without better field data and biological detail, we can’t fully predict what comes next.”

—Sarah Nelson (@SarahPeter3), Science Writer

Citation: Nelson, S. (2025), Have we finally found the source of the “Sargassum surge”?, Eos, 106, https://doi.org/10.1029/2025EO250189. Published on 14 May 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Heat and Pollution Events Are Deadly, Especially in the Global South

Wed, 05/14/2025 - 13:10
Source: GeoHealth

Small particulate matter (PM2.5) in air pollution raises the risks of respiratory problems, cardiovascular disease, and even cognitive decline. Heat waves, which are occurring more often with climate change, can cause heatstroke and exacerbate conditions such as asthma and diabetes. When heat and pollution coincide, they can create a deadly combination.

Existing studies on hot and polluted episodes (HPEs) have often focused on local, urban settings, so their findings are not necessarily representative of HPEs around the world. To better understand premature mortality associated with pollution exposure during HPEs at multiple scales and settings, Huang et al. looked at a global record of climate and PM2.5 levels from 1990 to 2019.

The team used data from the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), which included hourly concentration measurements of PM2.5 in the form of dust, sea salt, black carbon, organic carbon, and sulfate particles. Daily maximum temperatures were obtained from ERA5 (the fifth-generation European Centre for Medium-Range Weather Forecasts atmospheric reanalysis).

The researchers also conducted a meta-analysis of health literature, identifying relevant research using the search terms “PM2.5,” “high temperature,” “heatwaves,” and “all-cause mortality” in the PubMed, Scopus, and Web of Science databases. Then, they conducted a statistical analysis to estimate PM2.5-associated premature mortality events during HPEs.
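Such attribution studies typically rest on a standard health impact function; the sketch below is our illustration of the general method with entirely hypothetical inputs, not the authors’ exact model.

```python
# Minimal sketch of a standard PM2.5 health impact function:
# deaths = population x baseline mortality x attributable fraction,
# with the attributable fraction derived from a relative risk.
# All inputs below are hypothetical.

import math

def attributable_deaths(population: float,
                        baseline_rate: float,
                        rr_per_10ug: float,
                        delta_pm25: float) -> float:
    """Premature deaths attributable to a PM2.5 increase.

    rr_per_10ug: relative risk per 10 ug/m^3 (from a meta-analysis)
    delta_pm25:  exposure increase during episodes, ug/m^3
    """
    beta = math.log(rr_per_10ug) / 10.0
    attributable_fraction = 1.0 - math.exp(-beta * delta_pm25)
    return population * baseline_rate * attributable_fraction

# Hypothetical region: 5 million people, 0.8% baseline mortality.
print(f"{attributable_deaths(5e6, 0.008, 1.08, 25):.0f} deaths")
```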

They found that both the frequency of HPEs and maximum PM2.5 levels during HPEs have increased significantly over the past 30 years. The team estimated that exposure to PM2.5 during HPEs caused 694,440 premature deaths globally between 1990 and 2019, 80% of which occurred in the Global South. With an estimated 142,765 deaths, India had the highest mortality burden by far, surpassing the combined total of China and Nigeria, which had the second- and third-highest burdens. The United States was the most vulnerable of the Global North countries, with an estimated 32,227 deaths.

The work also revealed that PM2.5 pollution during HPEs has steadily increased in the Global North, despite several years of emission control endeavors, and that the frequency of HPEs in the Global North surpassed that of the Global South in 2010. The researchers point out that the study shows the importance of global collaboration on climate change policies and pollution mitigation to address environmental inequalities. (GeoHealth, https://doi.org/10.1029/2024GH001290, 2025)

—Sarah Derouin (@sarahderouin.com), Science Writer

Citation: Derouin, S. (2025), Heat and pollution events are deadly, especially in the Global South, Eos, 106, https://doi.org/10.1029/2025EO250151. Published on 14 May 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Denver’s Stinkiest Air Is Concentrated in Less Privileged Neighborhoods

Tue, 05/13/2025 - 13:42

The skunky smell of pot smoke. Burning stenches from a pet food factory. Smoke from construction sites. These are the smells that communities of color and lower income people in Denver are disproportionately exposed to at home and at work, according to a new study.

The study, published in the Journal of Exposure Science and Environmental Epidemiology, is one of the first to examine the environmental justice dimensions of bad odors in an urban setting.

“Odors are often ignored because they’re difficult to study and regulate.”

There’s been a wealth of research in recent years showing that people of color and those with lower incomes are exposed to more air pollution, including nitrogen oxides and particulate matter. Exposure to air pollution causes or exacerbates cardiovascular and respiratory illnesses, among other health problems, and increases the overall risk of death.

Odors are more challenging to measure than other kinds of air pollution because they are chemically complex mixtures that dissipate quickly. “Odors are often ignored because they’re difficult to study and regulate,” said Arbor Quist, an environmental epidemiologist at the Ohio State University who was not involved with the research.

Though other kinds of air pollution in the United States are limited by federal laws and regulated at the state level, smells are typically regulated under local nuisance laws. Though somewhat subjective—some folks don’t mind a neighbor toking up—odors can have a big impact on how people experience their environment, and whether they feel safe. Bad smells can limit people’s enjoyment of their homes and yards, and reduce property values.

“Odor is one of the ways municipalities can take action on air pollution.”

Odors are more than a nuisance—they pose real health risks. Exposure to foul smells is associated with headache, elevated blood pressure, irritated eyes and throat, nausea, and stress, among other ills.

University of Colorado Denver urban planning researcher Priyanka deSouza said local regulation of odors gives municipalities an opportunity to intervene in environmental health. “Odor is one of the ways municipalities can take action on air pollution,” she said.

Previous research on ambient odor air pollution has focused on point sources, including chemical spills and concentrated animal-feeding operations such as industrial hog farms. DeSouza said Denver’s unusually robust odor enforcement system made it possible to study the environmental justice dimensions of smelly air over a large geographical area.

Making a Stink

The city maintains a database of odor complaints that includes a description of the smell and the address of the complaint. DeSouza’s team used machine learning to identify themes in complaints made from 2014 to 2023. They found four major clusters: smells related to a Purina pet food factory, smells from a neighbor’s property, reports of smoke from construction and other work, and complaints about marijuana and other small industrial sources.

They used the text of the odor complaints and the locations of the complaints to deduce the likely source of the odor. For instance, complaints about the pet food factory often included the words night, dog, bad, and burn. Marijuana-related complaints frequently used the words strong and fume.

They also matched complaint locations against the addresses of 265 facilities that have been required by the city to come up with odor control plans for reasons including the nature of their business, or because five or more complaints have been filed about them within 30 days. (Growing, processing, or manufacturing marijuana occurs in 257 of these facilities.)
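One common way to surface themes like those is to convert complaint text into TF-IDF features and cluster them; the study’s exact machine learning pipeline may differ, and the complaints below are invented examples.

```python
# Minimal sketch: cluster complaint text into themes with TF-IDF
# features and k-means. Invented examples; the study's actual
# machine learning method may differ.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

complaints = [
    "burning dog food smell at night",
    "burnt pet food odor, bad at night",
    "strong marijuana fumes near the warehouse",
    "strong skunky fume from the grow facility",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(complaints)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The top-weighted terms in each cluster hint at its theme, much
# like the night/dog/burn and strong/fume groupings noted above.
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = [terms[j] for j in center.argsort()[::-1][:3]]
    print(f"cluster {i}: {', '.join(top)}")
```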

Less privileged people in Denver are more likely to live or work near businesses cited for creating bad smells, including marijuana facilities. Credit: Elsa Olofsson at cbdoracle.com/Flickr, CC BY 2.0

Less privileged census blocks—those with higher percentages of non-white workers and residents, residents with less formal education, lower median incomes, and lower property values—were more likely to contain a potentially smelly facility, according to the analysis. DeSouza said this is likely due to structural racism and historical redlining in Denver.

The facilities were concentrated in a part of the city that is isolated by two major freeways. Previous research has shown that people in these neighborhoods are exposed to more traffic-related air pollution, and that people of color, particularly Hispanic and Latino populations, are more likely to live there.

Yet people living and working in those areas weren’t more likely to register a complaint about bad smells than people in other parts of the city. In fact, most of the complaints came from parts of the city that are gentrifying. DeSouza said it’s not clear why people who live or work near a stinky facility aren’t more likely to complain than people who live farther away from one.

It may be that wind is carrying smells to more affluent neighborhoods, where more privileged people are more aware of Denver’s laws and feel empowered to complain. The research team, which includes researchers from the city’s public health department, is continuing to study odors in the city. Their next step is to integrate information about wind speed and direction with the odor complaints.

Quist said the study is unique in that it factors in potential workplace exposures, where people spend a large part of their day. Workplace exposures can also have health effects that aren’t captured in research that looks only at where people live. “A lot of research has focused on residential disparities,” she said, adding that the inclusion in the analysis of facilities that have had to submit odor-monitoring plans is also significant. “This is an important paper,” she said.

DeSouza said she suspects that people who live and work near smelly facilities may not be complaining because they feel disenfranchised. “People are resigned to odors, they have been living there a long time, and they don’t feel they have a voice.” If residents in less privileged neighborhoods were able to successfully lodge an odor complaint and get results, it may make them feel more connected in general to the city government, she added.

“I’m really interested in supporting policy action,” she said. “We’re trying to get residents to be aware that they can complain.”

—Katherine Bourzac, Science Writer

Citation: Bourzac, K. (2025), Denver’s stinkiest air is concentrated in less privileged neighborhoods, Eos, 106, https://doi.org/10.1029/2025EO250183. Published on 13 May 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The Uncertain Fate of the Beaufort Gyre

Tue, 05/13/2025 - 13:40
Source: Journal of Geophysical Research: Oceans

As freshwater from glacier melt, river runoff, and precipitation enters the Arctic Ocean, a circular current called the Beaufort Gyre traps it near the surface, slowly releasing the water into the Atlantic Ocean over decades. Warming global temperatures may weaken the wind patterns that keep the gyre turning, which could slow or even stop the current and release a flood of freshwater with a volume comparable to that of the Great Lakes. This deluge would cool and freshen the surrounding Arctic and North Atlantic oceans, affecting sea life and fisheries and possibly disrupting weather patterns in Europe.

Athanase et al. analyzed the Beaufort Gyre’s circulation patterns using 27 climate models from the Coupled Model Intercomparison Project Phase 6 (CMIP6), which informed the most recent Intergovernmental Panel on Climate Change (IPCC) report.

Before trying to predict the future behavior of the gyre, the researchers turned to the past. To assess how well CMIP6 models capture the gyre’s behavior, they compared records of how the gyre actually behaved to CMIP6 simulations of how it behaved, given known conditions in the ocean and the atmosphere.

Most CMIP6 models do not capture the gyre’s behavior very well, it turns out. Some models did not predict any circulation, when circulation clearly occurred. Others overestimated the area or strength of the gyre, shifted it too far north, or inaccurately estimated sea ice thickness within the gyre. Eleven of the models produced sea ice thickness estimates the researchers called “unacceptable.”

Despite these problems, the researchers pushed ahead, using the 18 CMIP6 models that most closely reflected the gyre’s true behavior to predict how circulation could change under two future emissions scenarios: intermediate and high. Most of the tested models showed that the gyre’s circulation will decline significantly by the end of this century, but their predictions for exactly when varied from the 2030s to the 2070s. Three models predicted that the gyre will not stop turning at all.

The gyre is most likely to disappear if emissions remain high, but it may stabilize as a smaller gyre if emissions are only moderate, the researchers found. Despite substantial uncertainty, the results are a reminder that when it comes to preventing the most disruptive effects of climate change, “every fraction of a degree matters,” they write. (Journal of Geophysical Research: Oceans, https://doi.org/10.1029/2024JC021873, 2025)

—Saima May Sidik (@saimamay.bsky.social), Science Writer

Citation: Sidik, S. M. (2025), The uncertain fate of the Beaufort Gyre, Eos, 106, https://doi.org/10.1029/2025EO250186. Published on 13 May 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Beyond Up and Down: How Arctic Ponds Stir Sideways

Tue, 05/13/2025 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Geophysical Research Letters

Arctic ponds play a key role in permafrost thaw and greenhouse gas emissions; however, their physical mixing processes remain poorly characterized. Most conceptual models assume that vertical, one-dimensional mixing—driven by surface cooling that makes water denser so it sinks, mixing the water mass from the top down—is the primary mechanism for deep water renewal.

Henderson and MacIntyre [2025] challenge that model by showing that two-dimensional thermal overturning circulation dominates in a shallow permafrost pond. Specifically, nighttime surface cooling in shallow areas generates cold, dense water that flows downslope along the pond bed, displacing and renewing deeper waters. Using high-resolution velocity, temperature, and other related measurements, the authors demonstrate that these gravity currents ventilate the bottom despite persistent stable nighttime stratification. These findings reveal that lateral thermal flows can drive vertical exchange in small water bodies. The results have important implications for biogeochemical modeling and upscaling greenhouse gas fluxes across Arctic landscapes.
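The underlying physics reduces to buoyancy: cooling makes shoreline water slightly denser, and that contrast (a “reduced gravity”) drives a thin current downslope at roughly the square root of buoyancy times layer thickness. The density values below come from standard freshwater tables; the layer thickness is an assumed, illustrative number, not the study’s measurement.

```python
# Minimal sketch: speed scale of a cold, dense gravity current,
# u ~ sqrt(g' * h), with g' the reduced gravity of the dense layer.
# Densities from standard freshwater tables; thickness assumed.

G = 9.81  # gravitational acceleration, m/s^2

def reduced_gravity(rho_dense: float, rho_ambient: float) -> float:
    """Buoyancy (reduced gravity) of the dense layer, m/s^2."""
    return G * (rho_dense - rho_ambient) / rho_ambient

rho_15c, rho_17c = 999.10, 998.77   # kg/m^3 at 15 C and 17 C
g_prime = reduced_gravity(rho_15c, rho_17c)

h = 0.05  # dense-layer thickness in meters (illustrative)
speed = (g_prime * h) ** 0.5
print(f"g' = {g_prime:.2e} m/s^2, front speed ~ {speed * 100:.1f} cm/s")
```

Even a 0.3 kg/m^3 density contrast yields centimeter-per-second flow, which over the course of a night can easily traverse a small pond.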

This is a diagram of how cold water moves at night in a pond. At night, the shallow parts of the pond (near the right edge) cool down faster than the deeper parts. This creates thin layers of cold, dense water near the shore. Because this water is denser (heavier), it sinks and flows sideways along the sloped pond bottom toward the deepest part of the pond—like a slow, underwater landslide of cold water. As this cold water flows downhill, it pushes the existing bottom water upward, creating a gentle circulation loop: surface water cools and sinks at the edges, flows along the bottom, and pushes older deep water upward toward the middle. Credit: Henderson and MacIntyre, Figure 3a

Citation: Henderson, S. M., & MacIntyre, S. (2025). Thermal overturning circulation in an Arctic pond. Geophysical Research Letters, 52, e2024GL114541. https://doi.org/10.1029/2024GL114541

—Valeriy Ivanov, Editor, Geophysical Research Letters

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Some Tropical Trees Benefit from Lightning Strikes

Mon, 05/12/2025 - 13:10

Every now and then, some trees apparently just need a jolt. When struck by lightning, the large-crowned Dipteryx oleifera sustains minimal damage, whereas the trees and parasitic vines in its immediate vicinity often wither away or die altogether. That clearing out of competing vegetation results in a roughly fourteenfold boost in lifetime seed production for D. oleifera, researchers estimated.

An Instrumented Forest

“This is the only place on Earth where we have consistent lightning tracking data with the precision needed to know [whether a strike] hit a patch of forest.”

Panama is often known for its eponymous canal. But Barro Colorado Island, in central Panama, is also home to what researchers who work in the area call “one of the best-studied patches of tropical forest on earth.” That’s because cameras and devices to measure electric fields are constantly surveying the forest from atop a series of towers, each about 40 meters high. Those instruments can reveal, among other information, the precise locations of lightning strikes. “This is the only place on Earth where we have consistent lightning tracking data with the precision needed to know [whether a strike] hit a patch of forest,” said Evan Gora, an ecologist at the Cary Institute of Ecosystem Studies and the Smithsonian Tropical Research Institute.

Such infrastructure is key to locating trees that have been struck by lightning, said Gabriel Arellano, a forest ecologist at the University of Michigan in Ann Arbor who was not involved in the research. “It’s very difficult to monitor lightning strikes and find the specific trees that were affected.”

That’s because a lightning strike to a tropical tree rarely leads to a fire, said Gora. More commonly, tropical trees hit by lightning look largely undamaged but die off slowly over a period of months.

Follow the Flashes

To better understand how large tropical trees are affected by lightning strikes, Gora and his colleagues examined 94 lightning strikes to 93 unique trees on Barro Colorado Island between 2014 and 2019. In 2021, the team traveled to the island to collect both ground- and drone-based imagery of each directly struck tree and its environs.

Gora and his colleagues recorded six metrics about the condition of each directly struck tree and its cadre of parasitic woody vines known as lianas—including crown loss, trunk damage, and percent of the crown infested with lianas. Lianas colonize the crowns of many tropical trees, using them for structure and competing with them for light. Think of someone sitting next to you and picking off half of every bite of food you take, Gora said. “That’s effectively what these lianas are doing.”

The team also surveyed the trees surrounding each directly struck tree. The electrical current of a lightning strike can travel through the air and pass through nearby trees as well, explained Gora. Where a struck tree’s branches are closest to its neighbors, “the ends of its branches and its neighbors’ will die,” Gora said. “You’ll see dozens of those locations.”

Thriving After Lightning

On average, the researchers found that about a quarter of trees directly struck by lightning died. But when the team divided up their sample by tree species, D. oleifera (more commonly known as the almendro or tonka bean tree) stood out for its uncanny ability to survive lightning strikes. The nine D. oleifera trees in the team’s sample consistently survived lightning strikes, whereas their lianas and immediate neighbors did not fare so well. “There was a pretty substantial amount of damage in the area, but not to the directly struck tree,” said Gora of the species. “This one never died.”

(Ten other species in the researchers’ cohort of trees also exhibited no mortality after being struck by lightning, but those samples were all too small—one or two individuals—to draw any robust conclusions from.)

A D. oleifera tree in Panama is shown just after being struck by lightning in 2019 (left) and 2 years later (right). The tree survived the strike, but its parasitic vines and some of its neighbors did not. Credit: Evan Gora

Gora and his collaborators estimated that large D. oleifera trees are struck by lightning an average of five times during their roughly 300-year lifespan. This species’ ability to survive those events while lianas and neighboring trees often died back should result in overall reduced competition for nutrients and sunlight, the team reasoned. Using models of tree growth and reproductive capacity, the researchers estimated that D. oleifera reaped substantial benefits from being struck by lightning—particularly in regard to fecundity, or the number of seeds produced over a tree’s lifetime. “The ability to survive lightning increases their fecundity by fourteenfold,” Gora said.

D. oleifera may be essentially evolving to be better lightning rods.

The researchers furthermore showed that D. oleifera tended to be both taller and wider at its crown than many other tropical tree species on Barro Colorado Island. Previous work by Gora and his colleagues has shown that taller trees are particularly at risk for getting struck by lightning. It’s therefore conceivable that D. oleifera are essentially evolving to be better lightning rods, said Gora. “Maybe lightning is shaping not just the dynamics of our forests but also the evolution.”

These results were published in New Phytologist.

Gora and his collaborators hypothesized that the physiology of D. oleifera must be conferring some protection against the massive amount of current imparted by a lightning strike. Previous work by Gora and other researchers has suggested that D. oleifera is more conductive than average; higher levels of conductivity mean less resistance and therefore less internal heating. “We think that how conductive a tree is is really important to whether it dies,” said Gora.

Continuing to ferret out other lightning-hardy tree species will be important for understanding how forests evolve over time. And that’s where more data will be useful, said Arellano. “I wouldn’t be surprised if we find many other species.”

—Katherine Kornei (@KatherineKornei), Science Writer

Citation: Kornei, K. (2025), Some tropical trees benefit from lightning strikes, Eos, 106, https://doi.org/10.1029/2025EO250181. Published on 12 May 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Proposed Experiment Could Clarify Origin of Martian Methane

Mon, 05/12/2025 - 13:08
Source: Journal of Geophysical Research: Planets

Over the past decade, the Curiosity rover has repeatedly detected methane on the surface of Mars. This gas is often produced by microbes, so it could herald the presence of life on the Red Planet. But skeptics have postulated that the gas detected by Curiosity could have a much more pedestrian origin. Viscardy et al. suggest the methane could be coming from inside the Curiosity rover itself rather than from the atmosphere of Mars. They propose an experiment that could distinguish between a genuinely Martian source and contamination from the rover itself.

There’s ample reason to believe something is going awry, the researchers say. Each methane measurement that Curiosity’s spectrometer reports is actually the average of three individual measurements. Though those averages tend to suggest the presence of methane, the individual measurements are far more variable, calling the results into question.
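
As a toy numerical illustration of how an average can look firmer than its inputs (the values below are hypothetical, not the actual retrievals), consider a triplet of individual readings whose spread dwarfs their mean:

import numpy as np
# Three hypothetical individual retrievals from one measurement run, in ppbv.
individual = np.array([-4.0, 2.0, 11.0])
print(f"reported average: {individual.mean():.1f} ppbv")       # 3.0 ppbv: looks like a detection
print(f"spread (std dev): {individual.std(ddof=1):.1f} ppbv")  # 7.5 ppbv: larger than the average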

Another issue concerns the instability of gas pressures inside the spectrometer. The two main compartments—the foreoptics chamber that holds the laser source and the cell that holds the Martian air sample—are designed to remain sealed from each other and from the outside environment. However, significant pressure variations observed in both compartments, even during individual measurement runs, suggest this isn’t the case. These pressure changes raise doubts about the hermetic sealing of the system and the integrity of the analyzed air samples.

It’s clear, however, that at least some of the methane traveled to Mars from Earth: Florida air is known to have leaked into the foreoptics chamber before the rover’s 2011 launch from Cape Canaveral. This contamination has persisted despite multiple gas evacuations, pointing to unidentified methane reservoirs or production mechanisms within the instrument. As a result, methane levels in this compartment are more than 1,000 times higher than those measured in the cell storing the Martian air sample for analysis. Even an “imperceptible” leak between the chambers could cause Curiosity to report erroneous methane levels, the researchers write.

To put the issue to rest, the researchers suggest analyzing the methane content of the same sample of Martian air on two consecutive nights. A concentration of methane that is higher on the second night than on the first night would suggest that methane is leaking into the sample from elsewhere in the rover rather than coming from the planet itself. (Journal of Geophysical Research: Planets, https://doi.org/10.1029/2024JE008441, 2025)
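
The logic of that test is simple enough to sketch in Python. In this minimal sketch, the leak rate, noise level, and concentrations are hypothetical placeholders rather than values from the study:

# Minimal sketch of the two-night test. All numbers are hypothetical.
def concentration(night, atmospheric_ppbv, leak_ppbv_per_night):
    # An atmospheric signal stays constant; a leak accumulates night after night.
    return atmospheric_ppbv + leak_ppbv_per_night * night

def diagnose(night1, night2, noise_ppbv=0.5):
    # Compare consecutive-night measurements of the same trapped sample.
    if night2 - night1 > noise_ppbv:
        return "rising: methane is leaking in from elsewhere in the rover"
    return "stable: consistent with methane from the Martian atmosphere"

# Hypothetical case: a 2 ppbv-per-night leak with no atmospheric methane at all.
n1 = concentration(1, atmospheric_ppbv=0.0, leak_ppbv_per_night=2.0)
n2 = concentration(2, atmospheric_ppbv=0.0, leak_ppbv_per_night=2.0)
print(diagnose(n1, n2))  # a pure leak produces a rising concentration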

—Saima May Sidik (@saimamay.bsky.social), Science Writer

Citation: Sidik, S. M. (2025), Proposed experiment could clarify origin of Martian methane, Eos, 106, https://doi.org/10.1029/2025EO250182. Published on 12 May 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Seismic analysis to understand the 13 February 2024 Çöpler Gold Mine Landslide, Erzincan, Türkiye 

Mon, 05/12/2025 - 06:44

The Landslide Blog is written by Dave Petley, who is widely recognized as a world leader in the study and management of landslides.

On 13 February 2024, the enormous Çöpler Gold Mine Landslide occurred in Erzincan, Türkiye (Turkey), killing nine miners. This was the first of two massive and immensely damaging heap leach mine failures last year (the other occurred in Canada). That such an event could occur has come as something of a surprise to many people, so there is intense interest in understanding the circumstances of the failure.

I posted about the landslide at the time and have returned to it in subsequent posts.

At the time, Capella Space captured this amazing radar image of the aftermath of the landslide (which is highlighted):

A radar image of the 13 February 2024 landslide at Çöpler Mine in Türkiye (Turkey), courtesy of Capella Space.

Analysis of this landslide is ongoing, and information is emerging on a regular basis. The latest is an open access paper (Büyükakpınar et al., 2025; the PDF is here) in The Seismic Record that combines analysis of the seismic data from the landslide with remote sensing data to try to understand the failure.

The use of seismic data for landslide analysis often causes confusion, with people interpreting it to mean that the landslide was triggered by an earthquake. This is not the case – the scale of this landslide meant that it generated a seismic signal that was detected up to 400 km from the source. This data can be analysed to provide information about the landslide itself.

Büyükakpınar et al. (2025) provides three really interesting insights into the Çöpler Gold Mine Landslide, confirming initial observations. The first is that there are two distinct seismic signals, 48 seconds apart. Thus, there were two landslide events. The first detached to the west, representing a collapse of a steep slope into the deep excavation. The second moved to the north-northeast, on a more gentle slope. It is the second that was caught on video, and that is highlighted in the Capella Space image. In fact, the first landslide can also be seen in the image – in particular, the landslide deposit at the bottom of the deep excavation. The analysis also suggests that the combined landslide volume was about 1.2 million m³, of which the second landslide accounted for about 1 million m³.
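
For readers wondering how two closely spaced events are picked out of a seismic record, a standard tool is an STA/LTA (short-term average over long-term average) energy detector. The Python sketch below runs a simplified version on synthetic data containing two bursts 48 seconds apart; it illustrates the general technique only and is not the authors' processing chain:

import numpy as np
def sta_lta(trace, fs, sta_win=2.0, lta_win=30.0):
    # Ratio of short- to long-term average signal energy, computed here with
    # centered moving averages (a simplification of the usual trailing windows).
    energy = trace ** 2
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="same")
    return sta / np.maximum(lta, 1e-12)  # guard against division by zero
fs = 20.0                                  # samples per second
t = np.arange(0, 200, 1 / fs)
rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(t.size)  # background noise
for onset in (60.0, 108.0):                # two synthetic events, 48 s apart
    burst = (t >= onset) & (t < onset + 10)
    trace[burst] += np.sin(2 * np.pi * 2.0 * t[burst]) * np.exp(-(t[burst] - onset) / 3)
ratio = sta_lta(trace, fs)
crossings = np.flatnonzero((ratio[1:] >= 3.0) & (ratio[:-1] < 3.0))
print("detections near (s):", np.round(t[crossings]))  # expect ~60 and ~108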

I would note that soon after the landslide, Tolga Gorum correctly identified from the image that the landslide had moved in two directions.

Second, Büyükakpınar et al. (2025) have used an InSAR analysis to examine precursory deformation of the heap leach pad before the failure. This suggests that the mass was moving at up to 60 mm per year over the four years prior to the failure. The trend is quite linear, so it is not obvious that it would have provided an indication that failure was imminent, but this level of movement would be quite surprising in a well-managed site.
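
That the trend is linear matters because failure forecasting generally looks for accelerating motion rather than steady creep. Here is a minimal sketch of how such a rate is extracted from a displacement time series, using made-up numbers rather than the paper's InSAR data:

import numpy as np
# Hypothetical InSAR time series: 4 years of cumulative displacement in mm.
years = np.arange(0, 4.25, 0.25)
rng = np.random.default_rng(1)
displacement = 60.0 * years + rng.normal(0, 3, years.size)  # steady ~60 mm/yr creep + noise
rate, _ = np.polyfit(years, displacement, 1)  # slope of a line fit = average creep rate
print(f"fitted creep rate: {rate:.1f} mm/yr")
# Failure-forecast methods look for a superlinear (accelerating) term; here the
# quadratic coefficient is near zero, which is why a linear trend alone gives
# little warning of imminent collapse.
accel, _, _ = np.polyfit(years, displacement, 2)
print(f"quadratic coefficient: {accel:.2f} mm/yr^2")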

Finally, and perhaps most importantly, Büyükakpınar et al. (2025) also show that the embankment below the cyanide leach pond (labelled in the pre-failure Google Earth imagery below) is now moving at up to 85 mm/year. As the authors put it, this “raises significant concerns about the potential for further instability in the area”.

Google Earth image showing the site of the 13 February 2024 Çöpler Gold Mine Landslide, Erzincan, Türkiye (Turkey). The embankment that is showing active deformation is highlighted.

One can only hope that this hazard, in a seismically active area, is being addressed and that lessons have been learnt.

Reference

Büyükakpınar, P., et al. 2025. Seismic, Field, and Remote Sensing Analysis of the 13 February 2024 Çöpler Gold Mine Landslide, Erzincan, Türkiye. The Seismic Record, 5(2), 165–174. https://doi.org/10.1785/0320250007

Trump Blocks Funding for EPA Science Division

Fri, 05/09/2025 - 19:56
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

The Trump administration has blocked funding for the EPA’s Office of Research and Development (ORD), the agency’s main science division.

An email sent 7 May and first reported by E&E News said that research laboratory funding had been stopped except for requests related to health and safety. Nature then obtained additional internal emails regarding the funding freeze, which were confirmed by anonymous EPA sources.

“Lab research will wind down over the next few weeks as we will no longer have the capability to acquire supplies and materials,” one of the emails said.

The freeze appears to disregard a Congressional spending agreement that guaranteed EPA funding at 2024 levels through September.

On 2 May, EPA administrator Lee Zeldin announced a “reorganization” within the EPA to ensure that its research “directly advances statutory obligations and mission-essential functions.” Zeldin assured members of the House Committee on Science, Space, and Technology that ORD would not experience significant changes during the reorganization, and this latest funding freeze seems to break that promise.

“We are unsure if these laboratory activities will continue post-reorganization,” the 7 May email stated. “Time and funding would be needed to reconstitute activities.”

The EPA told E&E News that the email was “factually inaccurate” and that ORD is not part of the planned reorganization.

But Jennifer Orme-Zavaleta, who served as principal deputy assistant administrator at ORD during Trump’s first presidency, said that “They have basically shut ORD down by cutting off the money.”

The 2 May reorganization announcement also included a deadline for the nearly 1,500 ORD staff to either apply for a new position within the EPA, retire, or resign. That deadline is 11:59 p.m. on 9 May. Fewer than 500 new jobs have been posted at the agency, and hundreds of EPA employees have already been fired.

—Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writer

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

NSF Plans to Abolish Divisions

Fri, 05/09/2025 - 13:12
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

The U.S. National Science Foundation (NSF) plans to abolish dozens of divisions across all eight of its directorates and reduce the number of programs within those divisions, according to Science.

A spokesperson for NSF told Science that the reason behind the decision was to “reduce the number of SES [senior executive service] positions in the agency and create new non-executive positions to better align with the needs of the agency.”

Directorates at NSF and the divisions within them oversee grantmaking related to a particular field of science. Current directors and deputy directors will lose their titles and may be reassigned. Division directors play a large role in grantmaking decisions and are usually responsible for giving final approval for NSF awards. 

NSF lists the following directorates and divisions:

  • Directorate for Biological Sciences
    • Biological Infrastructure
    • Environmental Biology
    • Emerging Frontiers
    • Integrative Organismal Systems
    • Molecular and Cellular Biosciences
  • Directorate for Computer and Information Science and Engineering
    • Office of Advanced Cyberinfrastructure
    • Computing and Communication Foundations
    • Computer and Network Systems
    • Information and Intelligent Systems
  • Directorate for Engineering 
    • Chemical, Bioengineering, Environmental and Transport Systems
    • Civil, Mechanical and Manufacturing Innovation
    • Electrical, Communications and Cyber Systems
    • Engineering Education and Centers
    • Emerging Frontiers and Multidisciplinary Activities
  • Directorate for Geosciences
    • Atmospheric and Geospace Sciences
    • Earth Sciences
    • Ocean Sciences
    • Research, Innovation, Synergies and Education
    • Office of Polar Programs
  • Directorate for Mathematical and Physical Sciences
    • Astronomical Sciences
    • Chemistry
    • Materials Research
    • Mathematical Sciences
    • Physics
    • Office of Strategic Initiatives
  • Directorate for Social, Behavioral, and Economic Sciences
    • Behavioral and Cognitive Sciences
    • National Center for Science and Engineering Statistics
    • Social and Economic Sciences
    • Multidisciplinary Activities
  • Directorate for STEM Education
    • Equity for Excellence in STEM
    • Graduate Education
    • Research on Learning in Formal and Informal Settings
    • Undergraduate Education
  • Directorate for Technology, Innovation and Partnerships
    • Regional Innovation and Economic Growth
    • Accelerating Technology Translation and Development
    • Preparing the U.S. Workforce

“The end of NSF and American science expertise as we know it is here,” wrote Paul Bierman, a geomorphologist at the University of Vermont, on Bluesky.

The decision to abolish its divisions may be part of a larger restructuring of NSF grantmaking, according to Science.

NSF was already facing drastic changes to its operations from Trump administration directives, including an order to stop awarding new and existing grants until further notice and an order cancelling hundreds of grants related to diversity, equity, and inclusion as well as disinformation and misinformation. Last month, NSF shuttered most of its outside advisory committees that gave input to operations at seven of the eight directorates.

On 8 May, members of the House Committee on Science, Space, and Technology sent a letter to Brian Stone, the acting director of the NSF, expressing distress at the changes at NSF over the past few weeks. 

“So, who is in charge here? How far does DOGE’s influence reach?” members of the committee wrote in the letter. “We seek answers about actions NSF has taken that potentially break the law and certainly break the trust of the research community.”

Layoff notices are expected to be sent to NSF staff members today, as well.

9 May update: On Friday, NSF closed its Division of Equity for Excellence in STEM (EES) and removed the division from its website. EES was responsible for programs that advanced access to science, technology, engineering, and mathematics (STEM) education. In its explanation for the closure, NSF noted that it is “mindful of its statutory program obligations and plans to take steps to ensure those continue.” Division grantees received notice from their program officers about the closure this morning.

An internal memo circulated Thursday and obtained by E&E News stated that NSF will begin a reduction in force (RIF) aimed at its Senior Executive Service. The RIF will also terminate roughly 300 temporary positions.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Senior Scientists Must Stand Up Against Attacks on Research and Education

Fri, 05/09/2025 - 13:09

Massive cuts in federal funding to schools and science agencies, dogmatic calls to eliminate entire research areas, revocations of visas for international students and scholars, and attacks on academic freedom, speech, and the value of education and expertise—all emanating from recent Trump administration actions—are damaging and reshaping U.S. higher education and scientific institutions. Furthermore, the country’s withdrawals from international treaties (e.g., the Paris Agreement) and organizations (e.g., the Intergovernmental Panel on Climate Change and World Health Organization), and its weakening of programs promoting health, environmental protection, cultural exchange, and peace, diminish U.S. leadership and credibility globally and add to instabilities threatening lives, economies, and security around the world.

The surprising speed and breadth of the attacks and changes have left scientists, educators, and others confused, afraid, and grappling with how to respond.

The surprising speed and breadth of the attacks and changes have left scientists, educators, and others confused, afraid, and grappling with how to respond. The environment of intimidation, uncertainty, isolation, and fear created by the administration has been compounded by the silence or outright capitulation of many leaders and institutions, despite their having firm legal and constitutional protections, in the face of these threats. If sitting Republican senators like Lisa Murkowski (R-Alaska), major universities, law firms, and private companies and foundations are afraid to speak out and defend their values, what can individuals do?

Individuals can organize, and in so doing wield strength in numbers and identify leaders who are well-positioned to raise their voices to push reluctant institutions to act. Within science higher education, senior scientists can and should fill these roles.

Standing Up and Standing Out

The risk calculations for many institutions and individuals in the face of the administration’s swift, illiberal, and authoritarian actions have been clear: It is better to comply than to fight, because fighting risks funding losses, investigations, and lawsuits.

However, as the experiences of some universities, notably Columbia, have demonstrated, submitting to administration demands does not spare institutions from further scrutiny. In Harvard’s case, shortly after the school’s president indicated willingness to engage with the administration about shared concerns, the scope of outrageous demands increased to infringe on its ability to make its own decisions on hiring, enrollment, curriculum, and values, leading the university to sue the administration.

Standing up and standing out are easier said than done, especially considering the very real risks to individuals’ careers, livelihoods, and safety.

Clearly, the balance of risk between compliance and standing up for core principles (not to mention the rule of law) has shifted. As the leaderships of higher education and science institutions weigh how to respond to this shift, their employees, members, and constituent communities can speak up to shape these responses.

What is needed is courage, solidarity, and an intentional and strategic plan of action. Of course, standing up and standing out are easier said than done, especially considering the very real risks to individuals’ careers, livelihoods, and safety. In science and academia, as elsewhere, these risks are greatest for those most vulnerable: students, early-career researchers, and immigrants and international scholars. Therefore, it is incumbent upon senior colleagues—who have outsize privilege, responsibility, and collective power in universities and professional societies—to lead the way.

Reframing the Message

With social media increasingly fueling the spread of misinformation and disinformation and the corporate consolidation and polarization (both real and perceived) of mass media, strategies used in the past to inform reasoned policy discussions no longer work on their own. Scientists’ rational, detailed, and evidence-based arguments used to be effective in influencing policy, but the current administration and its allies have largely disregarded experts and facts in making major decisions.

With this new reality, the messaging from scientists—especially senior scientists from privileged identities—must change. It must be direct and aimed at resisting ongoing actions that are dismantling U.S. scientific and education enterprises; harming students, universities and colleges, and federal research agencies; and degrading public health, foreign policy, the economy, and the rule of law. Simply put, these actions are leading to death and environmental destruction, and they are endangering the national economy.

The dismantling of federal support for HIV and AIDS research and prevention, for example, “will hurt people, will cause people to die, and will cause significant increased costs to all of us throughout the country,” said a former Centers for Disease Control and Prevention official. The numerous rollbacks of major EPA rules and environmental protections will dramatically degrade air and water quality and irreparably harm public health and ecosystems. And the cuts to scientific research will directly affect our ability to advance medical, energy, transportation, space, communication, and infrastructure innovations, undermining the country’s economic strength.

Influencing Institutional Leaders

Senior scientists should be at the vanguard of these fronts, using their influence to protect students and more vulnerable colleagues.

In addition to speaking simply and clearly about the realities of such threats, scientists must come together within their own and across institutions to form united fronts. Senior scientists should be at the vanguard of these fronts, using their influence to protect students and more vulnerable colleagues, U.S. citizens, and international scholars alike.

They should demand that their institutional leaders uphold core values of higher education and science, including inclusion, international cooperation, and ethical and evidence-based research. They should demand that these leaders strengthen mutually beneficial ties among universities and professional societies, urging them, for example, to join mutual defense alliances such as the recently proposed coalition among Big Ten universities and to sign on to the American Association of Colleges and Universities’ “Call for Constructive Engagement” that rebuked the administration’s attacks. And they should demand that instead of capitulating, their institutions bring and support litigation against attempts to suppress academic freedom, free speech, and freedom of association; to unlawfully cancel grants and revoke visas; and to infringe on universities’ independence to develop their own curricula and academic policies. After all, executive orders are unilateral directives, not laws or legislation.

Furthermore, institutions should provide free legal counsel to imperiled international students and researchers and speak loudly and publicly about the meaning and value of academic freedom, the power of diverse and inclusive communities in driving societally valuable innovations, and the incredible returns of investing in modern research universities.

Though these demands are made of our institutional leaders, senior scientists can also act on their own initiative to help defend the higher education and scientific communities and their work from attacks meant to discredit and marginalize them.

Acknowledge and Activate

What can these scientists do? For starters, they can keep up-to-date about the shifting landscape of relevant federal, state, and institutional policies and responses. Many timely resources can help with this. I joined the chapter of the American Association of University Professors (AAUP) at the University of Michigan in Ann Arbor, where I work, for this purpose.

Senior scientists can support early-career colleagues and students by helping them, in turn, stay informed of policy developments, by actively listening to and understanding their concerns, and by providing opportunities for career and community networking and professional development during these uncertain times. Universities frequently offer mentoring resources and tool kits that can help, and programs such as AGU’s Mentoring365 enable connections within and across peer groups. They can also support each other across campuses, and seek allies in other disciplines, recognizing that attacks on the arts, humanities, and STEM (science, technology, engineering, and mathematics) fields are all connected.

Scientists should be contacting and meeting with local, state, and federal elected officials to convey the impacts of funding cuts and attacks on students, scholars, research, and innovations.

Further, scientists should be contacting and meeting with local, state, and federal elected officials. Scientists should use those meetings to convey the impacts of funding cuts and attacks on students, scholars, research, and innovations, citing real examples from their home institutions. At the University of Michigan, for example, scores of grants and contracts (including two of my own) have been canceled or not renewed, either because they were not compliant with administration ideology on DEI (diversity, equity, and inclusion), health equity, or environmental justice, or because of agency program eliminations and budget cuts. These cuts have directly halted student research experiences and led to layoffs and withdrawals of graduate admissions offers.

Although local and state officials cannot directly change federal policy, scientists can help focus their attention on the local impacts of federal actions. Further, these leaders’ concerns often carry a different weight within political decisionmaking. A federal congressperson may respond differently to a state senator from their own political party than they would to the concerns of 10 scientists.

Senior scientists can also work with their professional societies and organizations to file litigation against unjust actions, and provide programming (e.g., career counseling) and financial support (e.g., waived conference registration fees) for students and colleagues directly affected. And if needed, they can push their professional societies to take stronger stances. The powerful statement by the American Academy of Arts and Sciences offers a model that every nonprofit professional society should emulate. Even if institutions or societies have adopted neutrality statements, or are nonprofits prohibited from lobbying activities and whose memberships have diverse views, there is clear rationale to speak out and act against policy changes that directly affect their missions.

In short, senior scientists must acknowledge the severity of the threats to the scientific and higher education communities from the administration’s actions and activate to support local and national efforts to counter the threats. Together with the leaderships of their institutions and professional societies, they must defend these communities—particularly their more vulnerable members—and the value and integrity of the work they do. The stakes are high: Lives and careers are being jeopardized, and brilliant scientists are being driven away. We must act to preserve the American partnership that created diverse, federally supported research universities before the damage is permanent.

Author Information

Mark Moldwin (mmoldwin@umich.edu), University of Michigan, Ann Arbor

Citation: Moldwin, M. (2025), Senior scientists must stand up against attacks on research and education, Eos, 106, https://doi.org/10.1029/2025EO250181. Published on 9 May 2025. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Newly Discovered Algae May Speed Melting of Antarctic Ice

Fri, 05/09/2025 - 13:07

Alex Thomson, an algal ecologist with the Scottish Association for Marine Science, had planned to study coastal blooms of microalgae during his 2023 trip to Robert Island in Antarctica. But after arriving, he and his colleagues made a discovery that would change their mission.

Scientists have known for years that ice in the Arctic is teeming with microscopic algae. But aside from a few scattered observations, nobody knew whether such blooms were widespread in Antarctica’s ice caps (an ice cap is a type of gently domed glacier flowing outward in all directions). Thomson and his colleagues decided to collect a few samples from the Robert Island ice cap while they were there.

Researchers found a diversity of species of Ancylonema, purplish, sausage-shaped algae that can form chains. Credit: Alex Thomson

When they got the samples under a microscope, it was clear that the ice was a bustling hub of algal activity. “As we started to uncover this during the field season, we shifted our focus and took what was happening on the ice cap more seriously,” Thomson said.

In a study published in Nature Communications, the researchers revealed the extent and diversity of algae they found inhabiting the ice. Their findings warn that algae, whose pigments absorb heat from the Sun, may be accelerating the melting of Antarctic ice at a rate greater than previously thought.

“It’s the first paper quantifying that process in Antarctica,” said Alexandre Anesio, an Arctic algae expert at Aarhus University in Denmark who wasn’t involved in the new study.

Widespread Blooms and Unexpected Diversity

Scientists sampled from 198 locations and examined WorldView-2 satellite images from February 2023, which revealed darkened patches of ice indicative of algal blooms. On the basis of their sampling and the satellite images, the scientists estimated that algal blooms covered around 20% of the ice cap’s surface.
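
Coverage estimates of this kind are typically produced by classifying each satellite pixel as bloom or clean ice, for instance by thresholding surface reflectance. The following Python sketch applies that logic to a synthetic reflectance grid; the threshold and reflectance values are illustrative assumptions, not numbers from the study:

import numpy as np
# Synthetic stand-in for a WorldView-2 reflectance scene (0 = dark, 1 = bright).
rng = np.random.default_rng(7)
reflectance = rng.normal(0.75, 0.05, size=(500, 500))  # mostly bright, clean ice
true_bloom = rng.random((500, 500)) < 0.2              # seed ~20% of pixels with algae
reflectance[true_bloom] -= 0.25                        # pigmented ice reflects less light
bloom = reflectance < 0.62                             # illustrative darkness threshold
print(f"estimated bloom coverage: {100 * bloom.mean():.1f}%")  # ~20%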

The newly discovered algal communities may represent one of the largest photosynthetic habitats in Antarctica. Researchers had previously estimated that all detectable photosynthetic life in Antarctica covered approximately 44 square kilometers. The ice cap algal blooms on Robert Island alone were equivalent to about 6% of that area.

“We were seeing this huge morphological diversity, loads of forms of Ancylonema that I’d never seen described in any of the literature.”

The scientists also found a diverse range of species in their samples. The most prevalent genus of ice algae, Ancylonema, has an elongated “sausage shape and can form in chains,” Thomson said. “We were seeing this huge morphological diversity, loads of forms of Ancylonema that I’d never seen described in any of the literature.”

Genetic analysis revealed that the Antarctic ice cap contains Ancylonema species that are similar to those found in the Arctic, but also others that were distinct. Some genetic lineages appear unique to Antarctica, suggesting that these communities may have evolved in isolation over millions of years.

Dark Pigments Accelerate Antarctic Ice Melt

Thomson was excited by the diversity of algae, but said the finding could have troubling implications.

When a researcher on the team used a backpack device that Thomson said “looks a bit like a piece of Ghostbusters apparatus” to measure how much light reflected off the ice’s surface, they discovered that areas of ice containing algae reflect significantly less light than areas without algae. The purple pigment within Ancylonema, which it uses as sunscreen to protect itself from ultraviolet radiation, absorbs more energy and heats the surrounding ice.

“This study gives a big preview of what can happen in Antarctica if you start to have warm summers.”

Through modeling, they found that algae can contribute up to around 2% of the total daily melting on the ice cap. Though the figure isn’t as high as it is in Greenland, where dense blooms can increase melt rates of the ice surface by 13%, scientists are concerned that warmer temperatures may allow more algae to grow, which would cause more heat to be absorbed into the ice caps. “That 2% is probably going to look more similar to Greenland” in the future, Anesio said.
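
The energy balance behind such modeling is straightforward: darker ice absorbs more sunlight, and dividing the extra absorbed energy by the latent heat of fusion of ice gives the extra melt. A back-of-the-envelope sketch in Python, with assumed flux and albedo values rather than the study's model inputs:

# Back-of-the-envelope melt from algal darkening. Inputs are assumptions;
# the study used a full surface energy balance model.
SOLAR_FLUX = 250.0              # W/m^2, assumed mean daily shortwave flux
ALBEDO_DROP = 0.05              # assumed albedo reduction from algal pigment
LATENT_HEAT_FUSION = 334_000.0  # J/kg for ice
ICE_DENSITY = 917.0             # kg/m^3
SECONDS_PER_DAY = 86_400.0
extra_energy = SOLAR_FLUX * ALBEDO_DROP * SECONDS_PER_DAY        # J/m^2 per day
extra_melt_mm = 1000 * extra_energy / LATENT_HEAT_FUSION / ICE_DENSITY
print(f"extra melt from darkening: {extra_melt_mm:.1f} mm of ice per day")  # ~3.5 mm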

Currently, climate models do not account for microorganisms’ contributions to melting. To Anesio and Thomson, studies like this highlight why that needs to change. “This study gives a big preview of what can happen in Antarctica if you start to have warm summers,” Anesio said.

—Andrew Chapman (@andrewchapman.bsky.social), Science Writer

Citation: Chapman, A. (2025), Newly discovered algae may speed melting of Antarctic ice, Eos, 106, https://doi.org/10.1029/2025EO250174. Published on 9 May 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

NOAA Halts Maintenance of Key Arctic Data at National Snow and Ice Data Center

Thu, 05/08/2025 - 15:50
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

The National Snow and Ice Data Center (NSIDC) may no longer actively maintain or update some of its snow and ice data products after losing support from NOAA’s National Centers for Environmental Information, according to a 6 May announcement.

Data products affected by the decision are used to monitor the impacts of climate change in the Arctic, and include the center’s Sea Ice Index, Gridded Monthly Sea Ice Extent and Concentration, 1850 Onward, and World Glacier Inventory. “All of these data products as well as others in the NOAA@NSIDC collection face uncertain futures without ongoing support,” NSIDC wrote in an email to users posted on Bluesky.

While the data products won’t disappear, they will no longer be maintained at their current levels. 

“This change in support limits our ability to respond quickly to user inquiries, resolve issues, or maintain these products as thoroughly as before,” the NSIDC said in a statement to Inside Climate News.

NSIDC, based at the University of Colorado, Boulder, is a prominent polar research institute. Its Sea Ice Index, in particular, has been a crucial source of data for scientists tracking the decline of sea ice cover in the Arctic. The threatened data sets are also used by Alaskan communities for weather prediction, inform fisheries and ecosystem management, and support “countless other Arctic geopolitical and security decision-making needs,” Zack Labe, a climate scientist and former NOAA staff member, told Inside Climate News.

In a 6 May Bluesky post, Labe wrote, “This is horrible. I don’t even know what to say. Some of our most key polar data,” quoting the NSIDC notice (nsidc.org/data/user-re…): “As a result, the level of services for affected products below will be reduced to Basic—meaning they will remain accessible but may not be actively maintained, updated, or fully supported.”

The decision to end support of the NSIDC products is the latest in ongoing efforts from the Trump administration to take important environmental data offline, though some nonprofits, scientists, and advocacy groups are working to recreate some of the lost data tools. 

A NOAA webpage lists data products that have been decommissioned since President Trump took office, including data from marine monitoring buoys, coastal ecosystem maps, seafloor data, and satellite data tracking hurricanes. In a 21 April announcement, the University-National Oceanographic Laboratory System, a group that coordinates U.S. ocean research, suggested that those interested in salvaging data products planned for decommissioning in 2025 should nominate those datasets for backup by the Data Rescue Project, a volunteer archiving effort.

NSIDC is asking scientists and educators who rely on these data products and would like to demonstrate the importance of these data sets to share their stories at nsidc@nsidc.org.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The Rivers That Science Says Shouldn’t Exist

Thu, 05/08/2025 - 12:18
Source: Water Resources Research

This is an authorized translation of an Eos article.

Rivers merge as they flow downstream, running downhill until they empty into the ocean or a terminal lake: Those are the basic rules by which waterways and watersheds operate. But rules are made to be broken. Sowby and Siegel catalog nine rivers and lakes in the Americas that defy hydrologic expectations.

All of the rivers exhibit bifurcation, in which a river splits into several channels that continue flowing downstream. But unlike in typical bifurcations, these channels do not rejoin the main waterway after splitting.

For example, South America’s Casiquiare River is a navigable waterway connecting the two largest river basins in the Americas, the Orinoco and the Amazon, branching from the former and feeding the latter. It is, the authors write, “the hydrologic equivalent of a wormhole between two galaxies.” The Casiquiare splits off from the Orinoco and meanders through dense, nearly flat rainforest to join the Rio Negro and, ultimately, the Amazon. The study’s authors note that a slight gradient (less than 0.009%) is enough to carry a large volume of water downstream and that this unusual arrangement arose from an incomplete river capture. Understanding of the Casiquiare, they note, is still evolving.

Dutch colonists first mapped Suriname’s remote Wayambo River in 1717. The river can flow east or west, depending on rainfall and on human adjustments of its flow through sluice gates. It also lies near gold and bauxite mining and oil production sites, and its two-way flow makes predicting the spread of pollutants difficult.

Of all the rivers they surveyed, the researchers call the Echimamish River, in the remote wilderness of Canada, “the most baffling.” Its name means “water that flows both ways” in Cree. The river connects the Hayes and Nelson Rivers, and according to some accounts, the Echimamish flows from its middle outward toward both of those larger rivers. Its channel is flat, however, and interrupted here and there by beaver dams, so even today no one is certain which way it flows or where, exactly, the direction changes.

The authors also explore six other curious waterways, including lakes with two outlets and streams that flow into both the Atlantic and the Pacific. In doing so, they highlight how much remains unknown about how Earth’s water bodies work. (Water Resources Research, https://doi.org/10.1029/2024WR039824, 2025)

—Rebecca Dzombak, Science Writer

This translation was made by Wiley.

Read this article on WeChat.

Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

33.8 Million People in the United States Live on Sinking Land

Thu, 05/08/2025 - 09:01

Land subsidence is typically considered a coastal problem: The dual threats of sinking land and rising seas intensify flooding, particularly in places like New York City and Louisiana. But even inland, major cities face infrastructure problems and flooding damage from the sinking land beneath them.

“Land subsidence does not stop at coastal boundaries.”

A study published in Nature Cities has found that all 28 of the most populous cities in the United States are sinking. Though some of this subsidence is due to long-term geologic processes, much of it is spurred by human activity, including groundwater pumping and the building of new infrastructure. Better groundwater management and stricter building codes could mitigate risks.

“Land subsidence does not stop at coastal boundaries,” said Leonard Ohenhen, a postdoctoral researcher at Columbia University and the first author of the new study. 

From Coast to Coast, and in Between

Rates of sinking or uplifting land, also known as vertical land motion, can be measured from satellites via synthetic aperture radar (SAR), a technology that sends radar pulses to Earth and records how those pulses are reflected back. Ohenhen and the research team used SAR measurements from 2015 to 2021 from the Sentinel-1 mission to create maps of ground deformation in the 28 most populous U.S. cities.

The team found that in every city, at least 20% of the land area was sinking, and in 25 of the 28 cities, at least 65% of the land area was sinking. Estimates from the study show that about 33.8 million people live on sinking land in these 28 cities. 
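
Once a city is represented as a grid of vertical land motion values co-registered with population, both of those statistics fall out in a few lines. A schematic Python version on synthetic rasters (not the study's Sentinel-1 products):

import numpy as np
# Synthetic rasters for one city: vertical land motion (mm/yr, negative = sinking)
# and population per grid cell. Values are illustrative only.
rng = np.random.default_rng(3)
vlm = rng.normal(-2.0, 2.5, size=(400, 400))
population = rng.poisson(5.0, size=(400, 400))
sinking = vlm < 0.0
print(f"share of land area subsiding: {100 * sinking.mean():.0f}%")
print(f"people on subsiding land: {int(population[sinking].sum()):,}")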

The study shows a “really good assessment of what the whole local and regional picture of vertical land motion looks like,” said Patrick Barnard, a geologist at the University of California, Santa Cruz Center for Coastal Climate Resilience, who was not involved in the new study. “It gives us more and more confidence and a greater understanding of how [subsidence] is influencing urban areas and increasing the risk to the population.”

Maps created by Ohenhen and his colleagues show which cities are experiencing uplift (positive vertical land motion values) and subsidence (negative vertical land motion values). Credit: Ohenhen et al., 2025, doi.org/10.1038/s44284-025-00240-y

Some of the highest rates of subsidence (>4 millimeters per year) were observed in several cities in Texas: Houston, Fort Worth, and Dallas. The fastest-sinking city in the country was Houston, with more than 40% of its land subsiding at a rate greater than 5 millimeters per year.

Chicago, Detroit, New York, and Denver were among the cities with the most land area affected by subsidence.

Some of the rates described in the study were “alarming,” Barnard said, because typical background subsidence is below a couple of millimeters per year. Rates above 2 millimeters per year can damage infrastructure and buildings, he said.

Vertical land motion is especially problematic where land is sinking unevenly, or where a subsiding region is next to an area that’s rising.

Analyzing building densities and land deformation, the researchers found that San Antonio faces the greatest risk, with one in every 45 buildings at a high risk of damage.

What may seem like slow sinking can build up over time to cause problems, Ohenhen said. “Four millimeters per year becomes 40 millimeters over 10 years, and so on…that cumulative effect can add up.”

Getting Ahead of Ground Deformation

A now-absent ice sheet may be responsible for some of the land deformation. Tens of thousands of years ago, the Laurentide Ice Sheet covered much of North America, compressing the land beneath. Now that the ice sheet has melted, North America is readjusting. Land once underneath the ice sheet is generally rising slowly, while land not covered by the ice sheet is sinking. Ohenhen compared this process to relieving pressure on a mattress: Once pressure is released, some parts of the mattress rise while others sink back to their original height. 

Most of the subsidence described in the study, though, likely comes from groundwater pumping, which decreases pressure in the pore space of rock and sediment. The pore space slowly collapses and the ground sinks.

“We can’t just be pumping the ground without any regard to the potential long-term impacts.”

That can exacerbate flooding and infrastructure damage. Groundwater pumping and oil and gas extraction near Houston caused land subsidence that correlated with flood severity after Hurricane Harvey in 2017, for example.

As climate change continues to intensify drought conditions in some parts of the United States, land subsidence from groundwater pumping could become even more of a risk to infrastructure. An “increasing number of cities may face significant challenges in subsidence management,” the study authors wrote. 

“It’s really a major issue we have to consider, especially in these urban areas,” Barnard said. “We can’t just be pumping the ground without any regard to the potential long-term impacts.”

The risks posed by land subsidence are high enough to warrant policy changes to better manage groundwater pumping across the country, Barnard said. Better enforcement of building codes could also prevent damage, the paper’s authors wrote.

“People are often not attuned to some of these subtle hazards they may be exposed to,” Ohenhen said. “[We should] make people aware of the situation so that we do not wait until the very last moment to respond.”

—Grace van Deelen (@gvd.bsky.social), Staff Writer

Citation: van Deelen, G. (2025), 33.8 million people in the United States live on sinking land, Eos, 106, https://doi.org/10.1029/2025EO250178. Published on 8 May 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.
