Eos
Science News by AGU

Is Convection Wobbling Venus?

Tue, 12/09/2025 - 18:32
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: AGU Advances

If you spin a bowling ball, the finger-holes will end up near the rotation axis because putting mass as far from the axis as possible minimizes energy. So, on planets, if there is a large mountain, it will end up at the equator; in physics terms, the axes of rotation and maximum inertia align.
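The bowling-ball argument can be written down in one line of standard rigid-body mechanics (this sketch is illustrative and not taken from the paper): a freely spinning body conserves its angular momentum $L$ while dissipation drains kinetic energy, so it relaxes toward the orientation that minimizes

```latex
% Rotational kinetic energy at fixed angular momentum L:
E_{\mathrm{rot}} = \frac{L^{2}}{2I},
\qquad
\frac{\partial E_{\mathrm{rot}}}{\partial I} = -\frac{L^{2}}{2I^{2}} < 0 .
```

Energy decreases as the moment of inertia $I$ about the spin axis increases, so mass far from the axis (a mountain at the equator) is the stable configuration: rotation settles about the axis of maximum inertia.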

Conversely, a planet that is very spherical will be rather unstable, so that the solid surface can move relative to the rotation axis, so-called true polar wander (TPW). Because of its slow rotation, Venus is extremely spherical; TPW can thus easily occur, driven for example by mantle convection, which is time-dependent. Furthermore, Venus’s axes of maximum inertia and rotation are offset by about 0.5°.

In a new paper, Patočka et al. [2025] analyze the effect of convection on Venus’s axial offset and potential for TPW. They find TPW rates that are consistent with geologically-derived values, but that the resulting axial offset is much smaller than observed. Their conclusion is that atmospheric torques are likely responsible, as they probably are for the apparent variations in Venus’s rotation rate measured from Earth.

The angular offset between the rotation and maximum inertia axes as a function of time, driven by time-dependent convection. The mean value (0.0055°) is two orders of magnitude smaller than the observed value (0.5°). Convection cannot be causing this offset. Credit: Patočka et al. [2025], Figure 2e

Three spacecraft missions will soon be heading to Venus. Direct measurements of the effects predicted by the researchers are challenging, but the coupling between atmospheric dynamics and planetary rotation will surely form an important part of their investigations.

Citation: Patočka, V., Maia, J., & Plesa, A.-C. (2025). Polar motion dynamics on slow-rotating Venus: Signatures of mantle flow. AGU Advances, 6, e2025AV001976. https://doi.org/10.1029/2025AV001976

—Francis Nimmo, Editor, AGU Advances

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Celebrating the MacGyver Spirit: Hacking, Tinkering, Scavenging, and Crowdsourcing

Tue, 12/09/2025 - 12:32

In 2009, Rolf Hut—then a doctoral student at Delft University of Technology in the Netherlands—hacked a $40 Nintendo Wii remote, turning it into a sensor capable of measuring evaporation in a lake.

The innovation, tested in his lab’s wave generator basin, became part of Hut’s doctoral thesis and changed the course of his career. Though he’s now an associate professor at Delft, Hut considers himself a professional tinkerer and a teacher of tinkerers.

Back in 2009, Hut and a group of fellow Ph.D. students organized a session at AGU’s annual meeting in which hydrologists could demonstrate the quirky measurement devices they’d made, hacked, scavenged, or used in a manner entirely different from what manufacturers intended.

Rolf Hut from Delft University of Technology organized the AGU 2010 MacGyver session. The session included homemade devices such as a “disdrometer” for counting raindrops and a demonstration of the “rising bubble” method of determining canal discharge. Credit: Rolf Hut

The session, “Self-Made Sensors and Unintended Use of Measurement Equipment,” was so popular that Hut organized it again the next year and the next. In addition to Hut’s remodeled Wiimote, early sessions included an acoustic rain sensor made from singing birthday card speakers, a demonstration of how to use a handheld GPS unit to measure tidal slack in estuaries, and a giant temperature-sensing pole that showed how the room heated up after the coffee break.

Since then, the endeavor has grown from a single session to many, expanded to the annual meeting of the European Geosciences Union in addition to AGU’s, and gained a new name: “People just kept calling it ‘the MacGyver session,’” Hut said.

This year, there are five MacGyver sessions, encompassing space weather, ocean environments, the geosphere, and crowdsourced science—the biggest program yet, said Chet Udell of Oregon State University, an electrical engineer and musical composer who is convening the geosphere session.

“The MacGyver sessions are a powder keg of possibilities,” Udell said. “You never know who’s gonna talk with who and what really cool collaboration or initiative could get started that way.”

The MacGyver Spirit

The term “MacGyver” originated with the 1980s television character, a resourceful secret agent known for elegantly solving complex problems with a Swiss Army knife, a few paper clips, chewing gum, or the roll of duct tape he always kept in his back pocket.

That can-do attitude is a natural fit for science, said Udell. “The MacGyver spirit is all about empowering the curiosity that drives science to also drive instrumentation.”

“Oftentimes, [scientists] come up to the barrier of, ‘I can’t ask that question because measuring this thing would be too infeasible, too complicated, too expensive, [the sensor] doesn’t exist,’” he said.

In addition to innovation—“There are a lot of people generating new science because they’ve hacked their instrumentation”—collaboration is key to the MacGyver spirit, Udell said. The ethos is less do-it-yourself (DIY) and more do-it-together. With strong links to the open-source and makerspace traditions, community and transparency are prioritized over competition and secrecy.

“No one lab has all of the expertise, the tools, and the capacity to bring these really interesting, handmade types of DIY innovation to the sciences,” Udell said.

Until recently, the MacGyver sessions were among the only places scientists and engineers could share these kinds of innovations with others. Journal articles’ methods sections typically aren’t long enough to explain exactly how to make one of these hacked or duct-taped devices.

But in 2017, the multidisciplinary, peer-reviewed journal HardwareX was launched with the aim of accelerating the distribution of low-cost, high-quality, open-source scientific hardware. Udell is an associate editor of the journal and recently published an article there with instructions on how to build a “Pied Piper” device that senses pest insects and then lures them into a trap. Citations from HardwareX can help MacGyver scientists justify time spent tinkering, he added.

The Alchemy of Serendipity

The in-person MacGyver sessions remain the heart of the movement, said Udell. There’s a certain alchemy that happens when you bring similarly geeky people together. “You know you’ve really found your community,” said Udell. “There’s a sensation that we’re all cut from the same cloth.”

There’s a reason they’re usually poster sessions, too, added Hut. “We want people to bring the physical device they’ve made and have a nerd-on-nerd discussion about that, which is a very different sort of communication than one-to-many broadcasting your awesome work.”

The format facilitates serendipitous discovery, too. “People walk by and they’re like, ‘Hey, what’s this weird device? I didn’t know you could measure that,’” said Udell. The conversation might spark an epiphany that could help someone solve a problem they’ve been wrestling with in their own research.

Kristina Collins, an electrical engineer who has convened several MacGyver sessions, said scientists and engineers from all disciplines are welcome at any of them—not just those in their own “Hogwarts House” or discipline.

“Having open-source hardware gives people a way to exchange information across different scientific cultures,” she said. “The point of Fall Meeting is to connect with the gestalt of what’s happening at the level of your field and also across fields. I really like that. I think everything interesting happens at the interface.”

Crowdsourced Science

Collins, now a research scientist at the Space Science Institute in Boulder, stumbled upon the MacGyver sessions at her first AGU annual meeting, in 2019—when she was a graduate student and the sessions were hydrology only.

At the time, she was working on making low-cost space weather station receivers for taking Doppler measurements and working with the worldwide ham radio community to deploy them—harnessing low-cost tech and crowdsourced science to gather data from the ionosphere and provide insights into the effects of solar activity on Earth.

“We named [our first receiver] the Grape because people like to name electronics after tiny fruit, and everything else was taken,” she explained (think: kiwis, limes, raspberries, blackberries, apples). “And also, it does its best work in bunches—many, many instruments [working] as a single meta instrument.”

The following year, Collins and some colleagues organized their own MacGyver session on sensors for detecting space weather. At AGU’s Annual Meeting 2025, there will be both oral and poster space weather MacGyver sessions. Collins will present an update on the Personal Space Weather Station Network and the various instruments, including Grape monitors, that make up this distributed, crowdsourced system.

For many geoscientists, the MacGyver spirit is not just a fun side quest, but a fundamental part of the scientific process, said Udell. “The questions we ask and the things that we observe are shaped by what we can measure, and this is shaped by our instrumentation,” he said.

“And so, in a way, what we make ends up making us.”

—Kate Evans (@kategevans.bsky.social), Science Writer

Citation: Evans, K. (2025), Celebrating the MacGyver spirit: Hacking, tinkering, scavenging, and crowdsourcing, Eos, 106, https://doi.org/10.1029/2025EO250460. Published on 9 December 2025.

The Long and the Weak of It—The Ediacaran Magnetic Field

Tue, 12/09/2025 - 12:30

Time travelers to the Ediacaran can forget about packing a compass. Our planet’s magnetic field was remarkably weak then, and new research suggests that that situation persisted for roughly 3 times longer than previously believed.

That negligible magnetic field likely resulted in increased atmospheric oxygen levels, which in turn could have facilitated the observed growth of microscopic organisms, researchers have now concluded. These results, which will be presented at AGU’s Annual Meeting on Wednesday, 17 December, pave the way for a better understanding of a multitude of life-forms.

The Ediacaran period, which spans from roughly 640 million to 540 million years ago, is recognized as a time in which microscopic life began evolving into macroscopic forms. That transition in turn paved the way for the diversification of life known as the Cambrian explosion. The Ediacaran furthermore holds the honor of being one of the most recent inductees into the International Chronostratigraphic Chart, the official geologic timescale. (Last year, the Anthropocene was rejected as an addition to the International Chronostratigraphic Chart.)

A Collapsing Field, with Implications for Life

The Ediacaran was a time of magnetic tumult. An earlier study showed that our planet’s magnetic field fell precipitously from roughly modern-day values, decreasing by as much as a factor of 30.

“We have this unprecedented interval in Earth’s history where the Earth’s magnetic field is collapsing,” said John Tarduno, a geophysicist at the University of Rochester involved in the earlier study as well as this new work.

The strength of our planet’s magnetic field has implications for life on Earth. That’s because Earth’s magnetic field functions much like a shield, protecting our planet’s atmosphere from being pummeled by a steady stream of charged particles emanating from the Sun (the solar wind). A weaker magnetic field means that more energetic particles from the solar wind can ultimately interact with the atmosphere. That influx of charged particles can alter the chemical composition of the atmosphere and allow more DNA-damaging ultraviolet radiation from the Sun to reach Earth’s surface.

There’s accordingly a strong link between Earth’s magnetic field and our planet’s ability to support life, said Tarduno. “One of the big questions we’re interested in is the relationship between Earth’s magnetic field and its habitability.”

We’re Getting Older (Rocks)

Tarduno and his colleagues previously showed that a weak magnetic field likely persisted during the Ediacaran from 591 to 565 million years ago, a span of 26 million years.

But maybe that period lasted even longer, the team surmised. To test that idea, the researchers analyzed an assemblage of 641-million-year-old anorthosite rocks from Brazil. Those rocks date to the late Cryogenian, the period immediately preceding the Ediacaran.

Back in the laboratory, the researchers extracted pieces of feldspar from the rocks. Within that feldspar, the team homed in on tiny inclusions of magnetite, a mineral that records the strength and direction of magnetic fields.

Team member Jack Schneider, a geologist at the University of Rochester, used a scanning electron microscope to observe individual needle-shaped bits of magnetite measuring just millionths of a meter long and billionths of a meter wide. “We can see the actual magnetic recorders,” said Schneider.

Working in a room shielded from Earth’s own magnetic field, Schneider measured the magnetization of feldspar crystals containing those magnetite needles. To ensure that the magnetite needles were truly reflecting Earth’s magnetic field 641 million years ago rather than a more recent magnetic field, the team focused on single-domain magnetite. A single domain refers to a region of uniform magnetization, which is much more difficult to overprint with a new magnetic field than a region magnetized in multiple directions. “We make sure that they’re good samples for us to use,” said Schneider.

Don’t Blame Reversals

The average field strength that the team recorded was consistent with zero, with an upper limit of just a couple hundred nanoteslas. “Those are the type of numbers you measure on solar system bodies today where there’s no magnetic field,” said Tarduno. For comparison, Earth’s magnetic field today is several tens of thousands of nanoteslas.

Given the weak magnetic field strengths dating to 565 million years ago and 591 million years ago and these new measurements of rocks from 641 million years ago, there might have been a roughly 70-million-year span in which Earth’s magnetic field was unusually feeble and possibly nonexistent, the team concluded.

And magnetic reversal—the periodic switching of Earth’s north and south magnetic poles—isn’t the likely culprit, the researchers suggest. It’s true that the planet’s magnetic field drops to very low levels during some parts of a magnetic reversal, but that situation persists for at most a few thousand years, said Tarduno. That’s far too short a time to show up in this dataset—the rocks that the team measured all cooled over tens of thousands of years, so the magnetic fields they recorded are an average over that time span.

Take a Deep Breath

If it’s true that Earth’s magnetic field was anomalously weak for about 70 million years, cascading effects might have helped prompt the transition from microscopic to macroscopic life, the team suggests. That shift, known as the Avalon explosion, preceded the better-known Cambrian explosion.

In particular, a weak magnetic field would have allowed the solar wind to impinge more on our planet’s atmosphere, a process that would have preferentially kicked out lighter inhabitants of the atmosphere such as hydrogen. Such a depletion of hydrogen would have, in turn, boosted the relative concentration of an important atmospheric species: oxygen.

“If you’re removing hydrogen, you’re actually increasing the oxygenation of the planet, particularly in the atmosphere and the oceans,” explained Tarduno. And because oxygen plays such a key role for so many species across the animal kingdom, it’s not too much of a stretch to imagine that the important life shift that occurred soon thereafter—minuscule creatures evolving into ones that measured centimeters or even meters in size—owes something to the invisible actor that is our planet’s magnetic field, the team concluded. “We passed a threshold that allowed things to get big,” said Tarduno.

It’s difficult to test this hypothesis by measuring ancient atmospheric oxygen levels, the team admits. (The ice cores that famously record atmospheric gases stretch back in time just about a million years, give or take.)

But this idea that the planet’s magnetic field may have triggered atmospheric changes that in turn played a role in animals growing larger makes sense, said Shuhai Xiao, a geobiologist at Virginia Tech not involved in the research. “If the oxygen concentration is low, you simply cannot grow very big.”

In the future, it will be important to fill in our knowledge of the magnetic field during the Ediacaran with more measurements, added Xiao. “One data point could change the story a lot.”

Cathy Constable, a geophysicist at the Scripps Institution of Oceanography not involved in the research, echoed that thought. “The data are sparse,” she said. But this investigation is clearly a step in the right direction, she said. “I think this is exciting work.”

—Katherine Kornei (@KatherineKornei), Science Writer

Citation: Kornei, K. (2025), The long and the weak of it—The Ediacaran magnetic field, Eos, 106, https://doi.org/10.1029/2025EO250454. Published on 9 December 2025.

When Should a Tsunami Not Be Called a Tsunami?

Mon, 12/08/2025 - 13:56

The public has long been educated to respond to the threat of a tsunami by moving away from the coast and to higher ground. This messaging has created the impression that tsunami impacts are always potentially significant and has conditioned many in the public toward strong emotional responses at the mere mention of the word “tsunami.”

Indeed, in more general usage, “tsunami” is often used to indicate the arrival or occurrence of something in overwhelming quantities, such as in the seeming “tsunami of data” available in the digital age.

The prevailing messaging of tsunami risk communications is underscored by roadside signs in vulnerable areas, such as along the U.S. West Coast, that point out tsunami hazard zones or direct people to evacuation routes. The ubiquity and straightforward message of these signs, which typically depict large breaking waves (sometimes looming over human figures), reinforce the notion that tsunamis pose life-threatening hazards and that people should evacuate the area.

Of course, sometimes they do present major risks—but not always.

The current scientific definition of a tsunami sets no size limit. According to the Intergovernmental Oceanographic Commission (IOC) [2019], a tsunami is “a series of travelling waves of extremely long length and period, usually generated by disturbances associated with earthquakes occurring below or near the ocean floor.” After pointing out that volcanic eruptions, submarine landslides, coastal rockfalls, and even meteorite impacts can also produce tsunamis, the definition continues: “These waves may reach enormous dimensions and travel across entire ocean basins with little loss of energy.”

The use of “may” indicates that a tsunami, or long wave, by this definition need not be large or especially impactful. If the initiating disturbance is small, the amplitude of the generated long wave will also be small.

The disparity between the scientific definition of tsunamis and their common portrayal in risk communications and general usage creates room for additional confusion in public understanding and potentially wasted effort and resources in community responses. We thus propose revising the definition of tsunami to include an amplitude threshold to help clarify when and where incoming waves pose enough of a hazard for the public to take action.

A Parting of the Waves

Tsunami wave amplitudes can vary substantially not only from one event to another but also within a single event. Following the magnitude 8.8 earthquake off the Kamchatka Peninsula in July, for example, tsunami waves upward of 4 meters hit nearby parts of the Russian coast, whereas amplitudes were much lower at distant sites across the Pacific.

Meanwhile, other disturbances create waves that, although technically tsunamis, simply tend to be smaller. Prevailing public messaging about tsunami threats can complicate communications about such smaller waves, including those from meteotsunamis, for example.

Meteotsunamis are long waves generated in a body of water by a sudden atmospheric disturbance, usually a rapid change in barometric pressure [e.g., Rabinovich, 2020]. They are often reported after the fact as coastal inundation events for which no other obvious explanation can be found.

Once a meteotsunami is formed, the factors that govern its propagation, amplitude, and impact are the same as for other tsunamis. However, meteotsunami wave amplitudes are typically smaller than those of long waves generated by large seismic events.

As coastal inundations are amplified by sea level rise and thus are becoming more frequent, a greater need to communicate about all coastal inundation events, including from meteotsunamis, is emerging. And with recent progress in understanding meteotsunamis, it is becoming feasible to develop operational warning systems for them (although to date, only a few countries—Korea being one [Kim et al., 2022]—have such systems).

Still, many meteotsunamis do not require coastal evacuations. Given the public’s understanding of the word “tsunami,” however, an announcement that a meteotsunami is on the way could cause an unnecessary response.

Updating the scientific definition of a tsunami to include a low-end amplitude threshold could help avoid such scenarios where oceanic long waves may be coming but evacuation is not required. We suggest that a long wave below that threshold amplitude should be referred to simply as an oceanic long wave or another suitable alternative, such as a displacement wave. Many meteotsunamis, as well as some long waves generated by low-magnitude seismicity and other drivers, would thus not be classified as tsunamis.

Conceptually, our proposal aligns somewhat with various tsunami magnitude scales developed to link wave heights or energies with potential impacts on land [e.g., Abe, 1979]. These scales have yet to be widely accepted by either the scientific community or operational warning centers, however, perhaps because it is difficult to assign a single value to represent the impact of a tsunami. In addition, tsunami magnitude calculations often require postevent analyses, which are too slow for use in early warnings.

We are not proposing yet another tsunami magnitude scale; rather, our idea focuses predominantly on terminology and solely on relatively low-amplitude long waves.

Lessons from Meteorology

This kind of threshold classification for naming natural hazards has precedent in other scientific disciplines.

In meteorology, for example, a tropical low-pressure system is designated as a named tropical storm only if its maximum sustained wind speed is more than 63 kilometers per hour. Below that threshold, a system is called a tropical depression. A higher wind speed threshold is similarly specified before more emotive terms such as “hurricane,” “typhoon,” and “tropical cyclone” (depending on the region) are used.

Current wind-based tropical storm naming systems have limitations, such as their focus on wind hazards over those from rainfall or storm surge [e.g., Paxton et al., 2024]. However, on the whole, using intensity thresholds for various terms has enhanced the communication of the risks of these weather systems—whether limited or life-threatening—to the public. The straightforward framework helps inform decisionmaking, allowing people in potentially affected areas to determine whether they should evacuate or take other protective measures against an approaching weather system [e.g., Lazo et al., 2010; Cass et al., 2023]. Lazo et al. [2010], for example, underscored that categorizing hurricanes is a powerful tool for easily conveying storm severity to the public, enabling faster and more confident protective action decisions.

Research into tsunami risk communication, including about best practices and regional differences, is limited compared with that related to other hazards [Rafliana et al., 2022]. However, considering the effectiveness of using thresholds for tropical storm terminology, we anticipate that adopting a formal tsunami threshold could have similar benefits for the communication of risk to the public. For example, it could inform decisions about where and when to issue evacuation orders and, equally important, when those orders could be lifted.

Open Questions to Consider

Our proposal raises important questions about the nature of a potential tsunami threshold and how it should be applied.

First, what should the threshold wave amplitude be? There is no obvious answer, and the decision would require careful consideration within the scientific and operational tsunami warning communities, although amplitude threshold–related techniques already used by tsunami warning services may offer useful insights.

For example, the Joint Australian Tsunami Warning Centre (JATWC) issues three categories of tsunami warnings: no threat, marine threat (indicating potentially dangerous waves and strong ocean currents in the marine environment), and land threat (indicating major land inundation of low-lying coastal areas, dangerous waves, and strong ocean currents). JATWC uses an amplitude of 0.4 meter measured at a tide gauge as a minimum for the confirmation of lower-level marine threat warnings [Allen and Greenslade, 2010]. That could be a possible value for our proposed threshold—or at least a starting point for discussion.
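To make the three-tier idea concrete, here is a minimal sketch in Python of how an amplitude threshold could map a tide-gauge observation to a warning category. The 0.4-meter marine-threat confirmation value is the JATWC figure from Allen and Greenslade [2010] cited above; the 2.0-meter land-threat cutoff and the function itself are hypothetical, for illustration only, not operational values.

```python
MARINE_THRESHOLD_M = 0.4  # JATWC tide-gauge amplitude confirming a marine threat
LAND_THRESHOLD_M = 2.0    # hypothetical amplitude suggesting land inundation

def classify_threat(amplitude_m: float) -> str:
    """Map an observed tide-gauge amplitude (meters) to a warning category."""
    if amplitude_m < MARINE_THRESHOLD_M:
        # Below threshold: an oceanic long wave, not a tsunami under the proposal
        return "no threat"
    if amplitude_m < LAND_THRESHOLD_M:
        # Dangerous waves and strong currents in the marine environment
        return "marine threat"
    # Major inundation of low-lying coastal areas
    return "land threat"

print(classify_threat(0.1))  # -> no threat
print(classify_threat(0.6))  # -> marine threat
print(classify_threat(3.2))  # -> land threat
```

Under such a scheme, a wave falling in the “no threat” band would be reported simply as an oceanic long wave rather than as a tsunami.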

An internationally consistent threshold would be ideal, especially considering the expansive reach of tsunamis, but is not necessarily imperative. The terminology for tropical storms is not entirely consistent around the world, yet the benefits for hazard communication are still evident.

A second question is whether a wave should be considered a tsunami along its entire length once its amplitude anywhere reaches the threshold. We think not and instead propose that long waves be called tsunamis only where their amplitude is above the defined threshold. Were the threshold to be set at 0.4 meter, this provision would mean, for example, that in the hypothetical case following a large earthquake shown in Figure 1, only waves in the orange- and red-shaded regions would be considered tsunami waves.

Fig. 1. Modeled maximum amplitudes of waves propagating across the Pacific Ocean following a hypothetical magnitude 9.0 earthquake on the Japan Trench are seen here. Credit: Stewart Allen, Bureau of Meteorology

In this way, the proposed terminology for tsunamis would differ from that used for tropical low-pressure systems, which are classified as storms (or hurricanes, typhoons, etc.) in their entirety once their maximum sustained winds exceed a certain threshold—regardless of where that occurs. While tropical storms typically have localized impacts, long ocean waves can travel vast distances, even globally. However, since the destructive effects of these waves are limited to specific regions—similar to tropical storms—it is reasonable to refer to them as tsunamis only in areas where significant impact is expected.

This location-dependent classification may raise practical challenges for warning centers, in part because details of the forcing disturbance (e.g., the earthquake depth and focal mechanism) may not be immediately available and because of uncertainties about how a long wave will interact with complex coastlines, which can amplify or attenuate waves.

On the other hand, early assessments of where and when an ocean long wave should be defined as a tsunami would benefit from the fact that once a tsunami is generated, its evolution is fairly predictable because of its linear propagation in deep water.

Another issue for consideration is that using amplitude threshold–based definitions will require efforts to educate the public about basic principles (e.g., what wave amplitudes are and why they vary) and the terminology of ocean waves. Ubiquitous mentions of atmospheric pressure “highs” and “lows” in weather forecasts have familiarized the public with terms like “tropical low” and with what conditions to expect when the pressure is low. However, “oceanic long wave” and other such terms are more obscure. Choosing the best term for waves that do not meet the tsunami threshold, as well as the best approaches for informing people, would require social science research and testing with the public.

Finally, how do we ensure that this tsunami threshold terminology prompts appropriate public reactions, whether that is evacuating coastal areas entirely, pursuing a limited response such as securing boats properly and staying out of the water, or taking no action at all? Scientists, social scientists, and the emergency management and civil protection communities must collaborate to address this question and to test the messaging with the public. Using official tsunami warning services to issue warnings about above-threshold events and more routine marine and coastal services, such as forecasts of sea and swell in coastal waters, to share news about below-threshold events might be an effective way to help the public understand the potential severity of different events and react accordingly.

Normalizing and Formalizing

Should the use of a tsunami amplitude threshold be adopted for risk communications, we advocate that ocean scientists should also adhere to the terminology in presentations and research publications in the same way that atmospheric scientists have adhered to the threshold-dependent terminology around tropical storms. This consistency will gradually normalize the usage and reduce confusion. Readers of a scientific publication that notes the occurrence of a tsunami, for example, would instantly know that it was an above-threshold event.

Formalizing a scientific redefinition of what constitutes a tsunami will require discussion, agreement, and coordination across multiple bodies, most notably the IOC, which supports the agencies that provide tsunami warnings, and the World Meteorological Organization (WMO), which supports the agencies that provide marine forecasts. Should this threshold proposal receive enough initial support, the next step would be to elevate the proposal to the IOC and WMO for further consideration in these forums.

Considering the potential benefits for risk communications and the well-being of coastal communities worldwide, we think these are discussions worth having.

Author Information

Diana J. M. Greenslade (diana.greenslade@bom.gov.au) and Matthew C. Wheeler, Bureau of Meteorology, Melbourne, Vic., Australia

Citation: Greenslade, D. J. M., and M. C. Wheeler (2025), When should a tsunami not be called a tsunami?, Eos, 106, https://doi.org/10.1029/2025EO250453. Published on 8 December 2025. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2025. Commonwealth of Australia. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Tiny Turbulent Whirls Keep the Arctic Ocean Flowing

Mon, 12/08/2025 - 13:54
Source: AGU Advances

In the coming decades, climate change is likely to lead to a loss of sea ice in the Arctic Ocean and an influx of warmer water, affecting the ocean’s vertical circulation. Brown et al. recently investigated the forces that drive the Arctic Ocean’s vertical circulation to gain insight into how the circulation might change in the future.

The researchers drew on data from a range of sources, including measurements from shipborne and mooring-based instruments, ERA-Interim, the Arctic Ocean Model Intercomparison Project, and the Polar Science Center Hydrographic Climatology.

Two contrasting factors emerged as the main drivers of vertical circulation as warmer waters flow from the Atlantic Ocean into the Arctic. In the Barents Sea, currently the only ice-free part of the Arctic, the ocean loses heat to the atmosphere, causing some of the water to become denser and sink. Elsewhere, centimeter-sized whirls of turbulence mix in freshwater from rivers and precipitation, resulting in lighter water that remains close to the surface.

As climate change continues to melt sea ice, the balance between these surface fluxes and turbulent mixing is likely to change. More of the ocean surface will be exposed to heat loss to the atmosphere. At the same time, turbulence is likely both to increase and to become more variable. The Arctic Ocean is a source of cold, dense water that feeds the Atlantic Meridional Overturning Circulation, or AMOC, a circulation pattern that holds key influence over the weather in western Europe and North America. Determining how changing circulation patterns in the Arctic Ocean will affect the AMOC should be a focus for future research, the authors suggest. (AGU Advances, https://doi.org/10.1029/2024AV001529, 2025)

—Saima May Sidik (@saimamay.bsky.social), Science Writer

Citation: Sidik, S. M. (2025), Tiny turbulent whirls keep the Arctic Ocean flowing, Eos, 106, https://doi.org/10.1029/2025EO250455. Published on 8 December 2025. Text © 2025. AGU. CC BY-NC-ND 3.0

A Cryobank Network Grows in the Coral Triangle

Fri, 12/05/2025 - 14:23

The Coral Triangle is a biodiversity hot spot. At least for now.

More than 600 species of coral grow in this massive area straddling the Pacific and Indian Oceans, stretching from the Philippines to Bali to the Solomon Islands. But as the oceans get hotter, coral reefs—and the ecosystems they support—are at risk. Experts predict up to 90% of coral could disappear from the world’s warming oceans by 2050.

Research institutions are racing to preserve corals, and one strategy involves placing them in a deep freeze. By archiving corals in cryobanks, biologists can buy time for research and restoration—and hopefully stave off extinction.

A new capacity-building project is training cryocollaborators in the Coral Triangle region, starting at the University of the Philippines (UP).

The initiative is “very, very urgent,” said Chiahsin Lin, a cryobiologist who is leading the project from Taiwan’s National Museum of Marine Biology and Aquarium.

Room to Grow

A cryobank is like a frozen library. But instead of books, the shelves are lined with canisters of coral sperm, larvae, and even whole coral fragments chilled in liquid nitrogen.

Coral cryobanking can aid in coral preservation and future cultivation. But the process is tricky and time-consuming and requires trial and error. The temperature and timing that work for one species won’t carry over to others. Plus, it can take 30 minutes to freeze a single coral larva, said Lin.

University of the Philippines research assistant Ryan Carl De Juan works with Sun Yat-Sen University Ph.D. student Federica Buttari on vitrification and cryobanking procedures at the National Museum of Marine Biology and Aquarium laboratory in Taiwan. Credit: UP MSI Interactions of Marine Bionts and Benthic Ecosystems Laboratory

While materials from hundreds of species have been frozen, very few larvae have been successfully revived and brought to adulthood.

“We hope more and more people can be involved in this research,” Lin said. “We don’t have that much time to develop the techniques.”

The new project aims to increase the number of trained professionals who can freeze the world’s corals. UP’s Marine Science Institute is currently working to open the first cryobank in Southeast Asia. Lin has visited UP multiple times to train researchers on cryopreservation and vitrification. The UP team also traveled to Taiwan to work with samples in Lin’s lab.

The project will establish future cryobanks in Thailand, Malaysia, and Indonesia as well. Those teams will also participate in similar trainings to reach a shared goal: a network of coral cryobanks in the Coral Triangle.

Pausing the Clock

A major benefit of cryopreservation is that it pauses the clock. Some coral species spawn for only a few hours or days each year, and that window changes by species and by region. If a lab group misses the release, they may wait months before collecting materials again.

By freezing coral samples, researchers have more opportunities to experiment throughout the year.

In the past, Emmeline Jamodiong, a coral reproduction biologist in the Philippines, led coral reproduction trainings for stakeholders across the country. Logistics were complicated: People needed to travel from different regions and islands, and “we had to wait for the corals to spawn before we could conduct the training.” Now, even outside spawning season, researchers can still work on coral reproduction.

A local cryobank “offers a lot of future research opportunities,” she said. “I’m very happy that we have this facility established in the Philippines.”

A new cryobank in the Philippines will focus on Pocilloporidae, a family of corals that grows fast, reproduces quickly, and is among the first to root on disturbed reefs. Like nearly all corals, though, Pocilloporidae are sensitive to coral bleaching and climate stress. Credit: UP MSI Interactions of Marine Bionts and Benthic Ecosystems Laboratory

Freezing for the Future

The new project in the Coral Triangle is a helpful addition to cryobiology, said Mary Hagedorn, a senior scientist at the Smithsonian’s National Zoo and Conservation Biology Institute who developed the field of coral cryobanking.

“A real bottleneck for this field is there’s so few people that are trained as cryobiologists,” said Hagedorn. Every effort to expand the research ranks is valuable.

But cryobanks are just one part of coral conservation, she said. Aquariums that cultivate live coral are also important. Secure storage is essential to keeping samples safe from storms and power outages. Sustained government funding is needed to keep coral frozen and make sure staff are continuously trained.

Some coral populations are already functionally extinct, making global collaborations like the one between Taiwan and the Philippines key.

“No one person can cryopreserve all the species of corals in the ocean,” Hagedorn said. Lin’s team has “a wonderful opportunity to get some amazing species and genetic diversity.”

Hagedorn’s international collaborators think it may take 15–25 years to collect enough coral larvae to ensure genetic diversity for a species. Many corals still need a tailor-made recipe for freezing and thawing if they’re going to be cryobanked at all. It’s a daunting task on a tight timeline, and one within reach of only a handful of institutions around the world. The Coral Triangle network will add to that number.

“Without starting this project, there’s no hope for coral reefs,” Lin said. Coral cryobanking “gives tomorrow’s ocean a better chance.”

—J. Besl (@j_besl, @jbesl.bsky.social), Science Writer

Citation: Besl J. (2025), A cryobank network grows in the Coral Triangle, Eos, 106, https://doi.org/10.1029/2025EO250451. Published on 5 December 2025. Text © 2025. The authors. CC BY-NC-ND 3.0

When Does Rainfall Become Recharge?

Thu, 12/04/2025 - 14:18

Groundwater is one of Earth’s most important natural resources—it’s the world’s biggest source of accessible freshwater, and in dry parts of the world, it supplies most of the water humans consume. In Australia, for example, where more than 70% of the landmass is semiarid or arid, groundwater is the only reliable source of freshwater, supplemented by limited seasonal or episodic rainfall.

Estimates of regional and global groundwater recharge exist, but they typically come with large uncertainties because recharge is generally measured indirectly: Scientists extrapolate from measurements of water table fluctuations or streamflow loss, or they simply estimate recharge as a percentage of rainfall. Moreover, the relationships between individual rainfall events and groundwater recharge are rarely examined because existing methodologies have focused on gathering data to estimate recharge volumes on monthly or annual timescales.

Groundwater is replenished, or recharged, when water—usually from precipitation—percolates into bedrock from the surface, raising the water table. However, scientists have relatively little knowledge of when this replenishment occurs and how much precipitation is needed to refill underground reservoirs known as aquifers.

The lack of understanding of when and how water goes from the surface to a reservoir makes it difficult to predict how groundwater recharge will change as the climate changes. This difficulty will be increasingly problematic as our knowledge of past rainfall patterns becomes obsolete and no longer helps us to allocate water sustainably.

To reveal the relationship between rainfall and groundwater recharge, we can use a metric referred to as the rainfall recharge threshold. This threshold is the amount of rainfall that is needed for groundwater recharge to occur at a given location and time. We can determine this value if we can observe groundwater being recharged and we have weather data from prior to the recharge event so we know how much precipitation fell.

Measuring this threshold at different times and places allows us to assess how much rainfall is needed to recharge groundwater and how the amount varies seasonally and over time, how often that amount falls, and what weather patterns and climate processes make recharge events more likely.

Addressing these unknowns, in turn, helps us understand recharge processes in more detail and improves our ability to manage valuable groundwater resources sustainably.

Watching Groundwater Flow

Fig. 1. This map of Australia, with state and territory borders demarcated, shows annual average rainfall across the continent, as well as the locations of current National Groundwater Recharge Observing System (NGROS) sites. Site 1, Capricorn Caves; site 2, Daylight Cave; site 3, Wellington Phosphate Mine; site 4, Wombeyan Caves; site 5, Yarrangobilly; site 6, Walhalla Mine; site 7, Derby mine tunnel; site 8, Marakoopa Cave; site 9, Ajax South Adit; site 10, Durham Lead Adit; site 11, Berringa Adit; site 12, Stawell Mine; site 13, Byaduk Lava Cave; site 14, Naracoorte Caves; site 15, Tantanoola Caves; site 16, Old Sleep’s Hill Tunnel; site 17, Montacute Mine; site 18, Burra Mine Adit; site 19, Elliston; site 20, Yanchep.

Our team has implemented an innovative approach to quantify rainfall recharge thresholds throughout Australia [Baker et al., 2024]. The approach involves directly detecting potential groundwater recharge using a network of automated hydrological loggers deployed as close to the water table as possible (so measurements are as representative of true recharge as possible) in underground tunnels, mines, and caves (Figure 1).

Since 2022, we have placed loggers at 20 sites, such as an abandoned train tunnel in Precambrian to Cambrian sandstone (at Old Sleep’s Hill Tunnel in South Australia; Figure 1, site 16), a heritage gold mine in Devonian metasedimentary rock (at Walhalla Mine in eastern Victoria; site 6), and a lava cave in Quaternary basalt (at Byaduk Lava Cave in western Victoria; site 13). These sensors make up Australia’s National Groundwater Recharge Observing System (NGROS), the first dedicated network for observing event-scale groundwater recharge across different geologic and surface environments, as well as across a wide range of Australian hydroclimates.

The hydrological loggers we use detect the impacts of falling water droplets that hit them, meaning they must be placed in open underground spaces, rather than buried in soil. They were originally designed to count drips falling from cave stalactites, but they count drips falling from the roof of any underground space just as well.

In the time series datasets from replicate loggers at each site, sharp increases in drip rates allow us to identify the precise timing (and location, season, and climatic conditions) of recharge events—similar to how spikes in streamflow help identify flood episodes in river hydrographs. With this information, we can tie recharge to specific precipitation events, and by combining it with daily rainfall data, we can quantify rainfall recharge thresholds and their variability with geography and through time.
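The event matching described above can be sketched as a small routine: flag sharp increases in an hourly drip-rate series, then sum the rainfall over the preceding 48 hours to estimate the threshold for each event. This is a minimal illustration under stated assumptions; the spike criterion (five times the trailing mean) and window lengths are invented for the example and are not the actual NGROS processing chain.

```python
# Hedged sketch: detect drip-rate spikes in an hourly logger series and
# pair each with its antecedent 48-hour rainfall total. The spike
# criterion (5x the trailing 24-hour mean) is an illustrative
# assumption, not the published NGROS method.

def detect_recharge_events(drips, rainfall, window=24, factor=5.0, antecedent=48):
    """drips, rainfall: equal-length hourly series (lists of floats).
    Returns (hour_index, rainfall_in_prior_48h) for each detected spike."""
    events = []
    for t in range(window, len(drips)):
        baseline = sum(drips[t - window:t]) / window
        if baseline > 0 and drips[t] >= factor * baseline:
            rain_48h = sum(rainfall[max(0, t - antecedent):t])
            events.append((t, rain_48h))
    return events

# Synthetic example: quiet dripping, then a spike after a 24-hour wet spell.
drips = [1.0] * 72 + [12.0] + [1.0] * 24
rain = [0.0] * 48 + [1.0] * 24 + [0.0] * 25
events = detect_recharge_events(drips, rain)
print(events)  # → [(72, 24.0)]
```

Here the single detected event at hour 72 is paired with 24 millimeters of antecedent rainfall, which would be that event's estimated rainfall recharge threshold.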

Confidently linking rainfall and recharge events also requires that water percolate from the surface toward the water table fast enough that the rainfall response is preserved. For this reason, loggers are placed where water flows predominantly—and rapidly—through features such as rock fractures rather than more slowly through tighter pore spaces. Data from NGROS therefore also shed light on recharge processes in fractured rock terrains, which have been relatively less researched than those in porous rocks and sediments.

We deliberately sited some NGROS sensors to be close to Australia’s new network of critical zone observatories to complement infrastructure that is recording stocks and flows of carbon, water, and mass from the plant canopy to the groundwater. We also chose cave sites where we plan to reconstruct records of past recharge by analyzing cave stalagmites. Our improved understanding of the climate conditions necessary for recharge to occur today will help us interpret stalagmite oxygen isotope compositions at these sites as a proxy for past periods of groundwater recharge and drought.

Heavy Rains Required

At NGROS sites where at least 1 year’s worth of data have been collected, we’ve observed that 10–20 millimeters of rainfall over 48 hours are typically needed to initiate recharge in fractured rock aquifers [Priestley et al., 2025].

Such rainfall events are infrequent in Australia. Between five and 18 occurred at each site in the first year the loggers were observing, with the fewest observed at Daylight Cave in eastern New South Wales (site 2) and the most at Capricorn Caves in central Queensland (site 1). These rainfall recharge events generally occurred when specific weather conditions manifested, such as the co-occurrence of cyclones or fronts with thunderstorms.

Regardless of the frequency of recharge events, we have seen a clear relationship across the sites between lower numbers of recharge events and higher rainfall thresholds. This relationship also holds despite differences in soil, vegetation, geology, and depth to the loggers, suggesting that climate is a major control on rainfall thresholds.

Most of the rainfall recharge events (86%) occurred in wetter seasons, and during these times, rainfall recharge thresholds were lower than in drier seasons: The median threshold was 19.5 millimeters in 48 hours in the wet season, compared with 30.4 millimeters in 48 hours in the dry season [Priestley et al., 2025].
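The seasonal comparison amounts to grouping per-event thresholds by season and taking medians. A minimal sketch of that bookkeeping follows; the event values are invented for illustration and are not the NGROS data.

```python
# Minimal sketch of the seasonal comparison: group per-event rainfall
# recharge thresholds (mm per 48 h) by season and take the median.
# The event values below are invented for illustration, not NGROS data.
from statistics import median

events = [
    ("wet", 15.0), ("wet", 19.5), ("wet", 24.0),
    ("dry", 28.0), ("dry", 30.4), ("dry", 41.0),
]

by_season = {}
for season, threshold_mm in events:
    by_season.setdefault(season, []).append(threshold_mm)

medians = {s: median(v) for s, v in by_season.items()}
print(medians)  # → {'wet': 19.5, 'dry': 30.4}
```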

Drip loggers in the NGROS network were placed, for example, on old mining infrastructure in a former gold mine in Walhalla (left) and under an inflow zone in a gold adit in Durham Lead, near Ballarat, Victoria (right). The loggers can measure a maximum rate of four drips per second and can last roughly 3 years using internal batteries. Credit: Wendy Timms

The seasonal control on recharge is likely related to the greater amount of rainfall required to saturate soils during dry season events, although this control may be modulated by site‐specific factors such as soil conditions, unsaturated zone thickness, and vegetation. For example, sites vegetated with native woodland might be expected to have a greater unsaturated zone water demand in hot summer conditions and therefore to require more rainfall to generate recharge than more sparsely vegetated sites [Baker et al., 2024]. We will investigate the influence of these factors further as we collect longer time series of data.

Our results have also produced some surprising findings. Although we expected that antecedent weather conditions would influence the amount of rainfall needed for recharge to occur, we have not yet seen a clear relationship between preceding rainfall amounts or the time since the last recharge event and observed rainfall recharge thresholds. Longer time series of data should help to clarify the role of antecedent conditions as well.

Guiding Groundwater Management

Findings from NGROS have important implications for water management in Australia. Government groundwater management strategies, such as in the states of New South Wales and South Australia, where groundwater is the largest source of freshwater, rely on scientific data to support sustainable extraction and to protect groundwater-dependent ecosystems.

Our observations showing that only a handful of recharge events occur annually and that recharge requires a relatively high rainfall amount can, for example, provide useful guidance for regulators setting extraction limits in the near term. They’re also useful to groundwater managers planning for changing future demands resulting from changes in population, growth in agriculture and in mining industries supporting the green energy transition, and climate change.

Climate change is predicted to cause greater climatic variability across Australia, parts of which are already experiencing both wet and dry extremes. However, it is not yet clear to what extent changes in the climatic conditions that affect rainfall recharge thresholds will influence soil infiltration and overall groundwater recharge. The NGROS network is designed to investigate this question.

As a growing number of towns and even large cities (e.g., Perth, Western Australia [Broderick and McFarlane, 2022]) are forced to consider alternatives to shrinking groundwater resources, detailed information about regional and local recharge could help improve decision-making and give water managers more time to consider options.

NGROS findings also highlight that models used to inform groundwater management should factor in how recharge relates not only to rainfall amounts but also to the season and manner in which the rain is delivered (e.g., in long-duration, high-magnitude events versus intense bursts or prolonged sprinkles). This treatment conflicts with current practice, which typically quantifies recharge as a percentage, or some function, of annual rainfall. By instead considering the links between rainfall patterns and recharge, we can better identify how climate-induced changes in rainfall may impact future recharge.

Expanding the Network

NGROS data are already providing a more detailed understanding of rainfall recharge thresholds in Australia, helping refine recharge rate estimates by showing which rainfall event characteristics lead to recharge. Longer NGROS network time series, combined with other data such as groundwater level observations, will further help quantify recharge processes and reveal variability and trends over time and across the continent.

Beyond the 20 sites currently instrumented, the concept of an underground groundwater recharge observing network could be applied more widely, and we are continuously looking to expand the network within Australia and beyond.

Most recently, we installed loggers in sinkholes (vertical dissolution caves) on the Eyre Peninsula in South Australia, an arid region where groundwater levels have been falling. We have also commenced collaborations with colleagues in Africa, Europe, and North America, sharing our experience and know-how in setting up logger networks.

If groundwater observing networks are expanded to other parts of the world, researchers could compare recharge thresholds and processes across a wider range of climates, weather patterns, geologies, and environments to gain a more comprehensive view of when and how precipitation leads to recharge. Such knowledge would support sustainable management of vital groundwater resources not only in Australia but around the world.

Acknowledgments

Maria de Lourdes Melo Zurita (University of New South Wales, Sydney) also contributed to the underlying research project and provided feedback for this article.

References

Baker, A., et al. (2024), An underground drip water monitoring network to characterize rainfall recharge of groundwater at different geologies, environments, and climates across Australia, Geosci. Instrum. Methods Data Syst., 13(1), 117–129, https://doi.org/10.5194/gi-13-117-2024.

Broderick, K., and D. McFarlane (2022), Water resources planning in a drying climate in the south-west of Western Australia, Australas. J. Water Resour., 26(1), 72–83, https://doi.org/10.1080/13241583.2022.2078470.

Priestley, S. C., et al. (2025), Groundwater recharge of fractured rock aquifers in SE Australia is episodic and controlled by season and rainfall amount, Geophys. Res. Lett., 52(5), e2024GL113503, https://doi.org/10.1029/2024GL113503.

Author Information

Stacey Priestley (stacey.priestley@csiro.au), Commonwealth Scientific and Industrial Research Organisation, Adelaide, SA, Australia; Andy Baker (a.baker@unsw.edu.au), University of New South Wales, Sydney, Australia; Margaret Shanafield, Flinders University, Adelaide, SA, Australia; Wendy Timms, Deakin University, Geelong Waurn Ponds, Vic., Australia; and Martin Andersen, University of New South Wales, Sydney, Australia

Citation: Priestley, S., A. Baker, M. Shanafield, W. Timms, and M. Andersen (2025), When does rainfall become recharge?, Eos, 106, https://doi.org/10.1029/2025EO250452. Published on 4 December 2025. Text © 2025. The authors. CC BY-NC-ND 3.0

Watershed Sustainability Project Centers Place-Based Research

Thu, 12/04/2025 - 14:14
Source: Community Science

The Xwulqw'selu Sta'lo' (Koksilah River) is a culturally important river to the Cowichan Tribes, located on traditional Quw'utsun land on Vancouver Island, British Columbia. The land, which was never ceded to Canada, is part of a watershed that faces challenges including decreasing salmon populations, low river flow, flooding, and land use changes.

Gleeson et al. are collaborating with the Cowichan Tribes and the provincial government on the first water sustainability plan in British Columbia. About halfway through their 5-year project, the researchers are sharing how their work is guided by five “woven statements,” representing their intentions and values. These statements include a commitment to uphold Quw'utsun rights and laws, an intention that community-based monitoring and modeling will inform water and land decisions about the river, and a commitment by researchers to share their practices and outcomes. Just like the horizontal wefts and vertical warps in traditional Coast Salish weaving practices, these statements overlap and connect with their research goals, projects, and partnerships.

The research project has three goals: to improve understanding of current and future low flows in the Xwulqw'selu Sta'lo' through community science; to promote community engagement with water science, water governance, and Indigenous Knowledge; and to examine how this community science work can be useful to shared watershed management.

To accomplish these goals, the researchers use traditional scientific practices deeply grounded in the river itself. The community science project includes hydrological monitoring, modeling of low river flows, and quantification of groundwater flows into the river. In 2024, 44 volunteers participated in the community science project.

The 5-year project is part of the larger Xwulqw'selu Connections program, which supports a shift toward water cogovernance between the Cowichan Tribes and the provincial government through the Xwulqw'selu Water Sustainability Plan. The program could inform other community science collaborations between governments and Indigenous peoples, the authors say. (Community Science, https://doi.org/10.1029/2024CSJ000120, 2025)

—Madeline Reinsel, Science Writer

Citation: Reinsel, M. (2025), Watershed sustainability project centers place-based research, Eos, 106, https://doi.org/10.1029/2025EO250439. Published on 4 December 2025. Text © 2025. AGU. CC BY-NC-ND 3.0

Changes in Slab Dip Cause Rapid Changes in Plate Motion

Thu, 12/04/2025 - 14:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Journal of Geophysical Research: Solid Earth

Reconstructing the direction and rate of motion of tectonic plates is essential for understanding deformation within and between plates and for evaluating the geodynamical drivers of plate tectonics. One debate concerns the relative importance of flow in the asthenosphere versus processes at plate boundaries in controlling the motion of tectonic plates.

Wilson and DeMets [2025] present the most detailed reconstruction of changes in motion of the Nazca Plate to date. Remarkably, their results show periods of constant motion separated by geologically short periods of rapid acceleration or deceleration. These changes coincide with changes in the dip of the Nazca plate where it subducts beneath South America, with decelerations occurring when multiple regions of the slab shallowed to anomalously low dips (“flat slab subduction”), and accelerations occurring when the slab deepened to normal dips. These results imply that changes in the forces acting between plates are an important control on plate motion.

Citation: Wilson, D. S., & DeMets, C. (2025). Changes in motion of the Nazca/Farallon plate over the last 34 million years: Implications for flat-slab subduction and the propagation of plate kinematic changes. Journal of Geophysical Research: Solid Earth, 130, e2025JB031933. https://doi.org/10.1029/2025JB031933

—Donna Shillington, Associate Editor, JGR: Solid Earth

Text © 2025. The authors. CC BY-NC-ND 3.0

Trump Proposes Weakening Fuel Economy Rules for Vehicles

Wed, 12/03/2025 - 21:09
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

At the White House today, President Donald Trump announced his administration would “reset” vehicle fuel economy standards. Trump said the administration plans to revoke tightened standards, also known as Corporate Average Fuel Economy (CAFE) standards, set by President Joe Biden in 2024.

“We are officially terminating Joe Biden’s ridiculously burdensome—horrible, actually—CAFE standards that imposed expensive restrictions and all sorts of problems, all sorts of problems, to automakers,” Trump said. “We’re bringing back the car industry that was stolen from us.”

Automobile executives from Ford, General Motors (GM), and Stellantis joined federal officials, including Department of Transportation Secretary Sean Duffy, at the announcement. The administration said, without providing evidence during today’s announcement, that the current CAFE standards have increased vehicle prices and estimated that changing those standards would save American families $109 billion in total.

Vehicle fuel efficiency standards, which set the average gas mileage that vehicles must achieve, have been in place since 1975. The standards were most recently tightened in June 2024 by the Biden administration, and required automakers to ensure vehicles achieved an average fuel efficiency of about 50.4 miles per gallon by model year 2031. The Biden administration estimated that the rule would lower fuel costs by $23 billion and prevent the emission of more than 710 million metric tons of carbon dioxide by 2050. 

Fuel economy standards have significantly reduced greenhouse gas emissions from vehicles, which are one of the largest sources of carbon emissions in the United States. According to one estimate, fuel economy improvements spurred by the standards have avoided 14 billion tons of greenhouse gas emissions since 1975. 

However, Duffy said, the current standards are “completely unattainable” for automakers.

The announcement did not specify the degree to which the administration would lower the standards.

 
Related

Weakening fuel economy rules for vehicles is the latest step in President Trump’s continued efforts to slow the adoption of electric vehicles and boost the fossil fuel industry. 

Trump’s One Big Beautiful Bill, the omnibus spending bill that became law in July, also eliminated fines for automakers that did not comply with fuel economy standards. The Environmental Protection Agency is also expected to weaken limits on greenhouse gas emissions from vehicles by finalizing the repeal of the 2009 Endangerment Finding, which underpins important federal climate regulations, early next year. 

Policy advocates said weakening the standards would slow the transition to electric vehicles and make the U.S. vehicle market less competitive. “While Trump tells G.M., Ford and others that they needn’t make gas-saving cars, China is telling its carmakers to take advantage of the lack of U.S. competition and accelerate their efforts to grab the world’s burgeoning clean car market,” Dan Becker, director of the Safe Climate Transport Campaign at the Center for Biological Diversity, told The New York Times.

However, automakers supported the proposal. “As America’s largest auto producer, we appreciate President Trump’s leadership in aligning fuel economy standards with market realities,” Jim Farley, Ford’s CEO, told Fox News.

“Today is a victory of common sense and affordability,” Farley, who attended the announcement, said. 

The Transportation Department will solicit public comments about the rule and is expected to finalize it next year.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

When a Prayer Is Also a Climate Signal

Wed, 12/03/2025 - 14:25

As a child in Algeria in the late 1990s, Walid Ouaret remembers going to the mosque when droughts turned severe. There, he and his family would join their neighbors in a communal prayer for rain called the Salat al-Istisqāʼ. It was no informal event: The ceremony had been announced by the government.

“I was not a farmer, but I was feeling for other people from my own community,” remembered Ouaret, who’s now a Ph.D. candidate at the University of Maryland studying the intersections of climate and agriculture.

As he explored ways to improve the climate models he was using to understand the ramifications of climate change, Ouaret remembered the rain prayers. Rainfall patterns are changing globally due to climate change, but data from places like Algeria can be sparse. The Salat al-Istisqāʼ, on the other hand, is practiced across the Muslim world, which spans northern Africa, the Middle East, and Central Asia.

“I was trying to find a proxy, something that would tell me when food production was impacted or soil moisture was impacted at this regional [scale].”

“I was trying to find a proxy, something that would tell me when food production was impacted or soil moisture was impacted at this regional [scale],” he said. The call for rain prayers, he realized, could be a key data point revealing when droughts had become sufficiently severe to warrant state-led interventions.

In most instances, the ceremony is widely advertised, giving Ouaret a simple way of tracking its prevalence over time.

A New Kind of Climate Data

For research that will be presented on 18 December at AGU’s Annual Meeting 2025, Ouaret and his coauthors combed through mass media, including newspapers and websites, from Algeria, Morocco, and Tunisia from 2000 to 2024, looking for announcements of Salat al-Istisqāʼ. Then, they calculated how likely the calls for rain prayers were to correspond to drought conditions, as measured by the Standardized Precipitation Evapotranspiration Index.

Ouaret found a strong correlation between Salat al-Istisqāʼ notices and 6-month drought severity, which validated the announcement of rain prayers as a proxy for extreme weather. The environment wasn’t the only relevant influence on the calls to prayer, however. Ouaret said social unrest, as measured by conflict event data, was also associated with the announcement of rain prayers. That confluence is a sign, he said, that calls to prayer may also function as a governance tool for increasing social cohesion.
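The study’s proxy-validation idea can be illustrated with a toy calculation: treat monthly rain-prayer announcements as a binary series, treat the SPEI as the continuous drought measure (negative values mean drier than normal), and compute their correlation. This is a hedged sketch with invented numbers, not the study’s actual data or method; with a binary variable, the Pearson coefficient below is the point-biserial correlation.

```python
# Toy illustration of validating rain-prayer announcements as a drought
# proxy. All data values are invented for demonstration only.

from statistics import mean, stdev

# Monthly 6-month SPEI values (negative = drought; invented)
spei = [-1.8, -1.2, 0.3, 0.8, -0.5, -2.1, -1.6, 0.9, 1.1, -1.4]
# 1 = a Salat al-Istisqa' announcement appeared that month (invented)
announced = [1, 1, 0, 0, 0, 1, 1, 0, 0, 1]

def pearson(x, y):
    """Pearson correlation; with one binary variable this equals the
    point-biserial correlation."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

r = pearson(announced, spei)
print(round(r, 2))  # -0.92: announcements cluster in drought months
```

A strongly negative coefficient like this is what a valid proxy would produce: prayers are announced when the SPEI is low. The real analysis must also control for confounders such as the social-unrest signal the researchers found.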

These kinds of data are valuable, as they illuminate areas of the planet with fewer reliable climate monitoring networks, said Jen Shaffer, an ecological anthropologist also at the University of Maryland, who wasn’t involved in the research.

“This sort of grassroots, bottom-up view is really valuable to get at areas where we don’t have weather stations.”

“People are getting signals of change going on in the environment that’s not easy to record with satellite data, or with all of our instruments,” Shaffer said. “This sort of grassroots, bottom-up view is really valuable to get at areas where we don’t have weather stations.”

The Maghreb and other regions of Africa are particularly affected by such data scarcity, but agricultural communities around the world are beset by climate-induced challenges.

Rituals that ask for rain are common in cultures both past and present, from the kachina of the Pueblo cultures of the American Southwest to Catholic pro pluvia rogation ceremonies practiced in Spain to Days of Prayer for Rain in the State of Texas designated by the state’s governor in 2011. These practices offer both a historical record of drought and a potential input for climate models.

Adding cultural events to climate models, which are normally fed rigorously quantitative data, can be difficult, Shaffer noted. But Ouaret’s dataset benefits from the fact that a public, official announcement of rain prayers can be tied to specific dates and locations.

In the future, Ouaret believes his work could provide a potential early-warning system for drought vulnerability in specific communities, allowing more time to marshal aid to where it’s needed most. Data on the frequency of calls for rain prayers could also be a helpful tool for talking about climate change in affected communities, he said.

Communities “have been doing this in the past, but it was happening like once every 5 years. Now it’s happening every year,” Ouaret said. Incorporating calls for rain prayers into scientific models would be “validating [people’s] experience and telling them that it’s scientifically valid.”

The work also aligns with another goal for Ouaret, which is expanding the reach of open science in North Africa and other places underprioritized by Western researchers.

“Empowering people to do their science will help them so much to bring innovation to the whole community and bring a new way of addressing our traditional problems,” he said.

—Nathaniel Scharping (@nathanielscharp), Science Writer

Citation: Scharping, N. (2025), When a prayer is also a climate signal, Eos, 106, https://doi.org/10.1029/2025EO250450. Published on 3 December 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

98% of Gaza’s Tree Cropland Destroyed by Israel

Wed, 12/03/2025 - 14:24

Two years of war in Gaza have taken a devastating toll on the people living there. Nearly 70,000 people, including more than 20,000 children, have been killed by Israeli attacks. Disease and famine have taken hold as Israel blocks the flow of food and medical aid into the territory. Several international human rights organizations have determined that Israel is committing genocide against Palestinians in Gaza.

Alongside the human casualties and the destruction of homes and infrastructure, the war has brought the widespread destruction of arable land. Agriculture comprised 32% of land use in Gaza before 7 October 2023, when Hamas, a recognized terrorist organization supporting Palestinian self-determination, attacked Israeli communities in Gaza and Israel launched a massive military response.

A recent analysis has tracked the destruction of tree cropland and greenhouses in Gaza since the start of the war. The analysis revealed that 70% of tree cropland and 58% of greenhouses were damaged or destroyed in the first year of the conflict. By the end of October 2025, 98% of Gaza’s tree cropland had been destroyed. Ninety percent of greenhouses were damaged, and 75% were destroyed.

“After 2 years, we see that most of the greenhouses are gone and the remaining tree cover is largely gone.”

“Now, after 2 years, we see that most of the greenhouses are gone and the remaining tree cover is largely gone,” said Mazin Qumsiyeh, a biologist and social justice advocate at Bethlehem University in the West Bank and a researcher on the project. He said that over the past 2 years, Gaza has endured an ecocide of agricultural lands.

“This is unprecedented damage,” said He Yin, a geographer and remote sensing researcher at Kent State University in Ohio and lead researcher on the project. “I have never seen anything like this,” said Yin, who previously studied other areas of armed conflict, including Syria and the northern Caucasus. Gaza, he said, has “become like a barren land.”

Gazan Agriculture Before the War

Farmers have been cultivating crops in Gaza and the surrounding land for thousands of years. Olive trees, in particular, have played an important cultural role throughout Palestinian and Israeli history, featuring prominently in celebrations, art, literature, and religion.

Prior to October 2023, Gaza’s agricultural sector made up 11% of the territory’s gross domestic product (roughly $575 million) and 45% of its exports. Palestinian farmers cultivated olive and citrus trees, as well as grapes, guava, date palms, and figs. More delicate fruits, vegetables, and flowers were grown in tunnels or other protective structures like greenhouses.

“Like everything else in Gaza, people managed and survived and resisted and did agriculture.”

“Gaza was not self-sufficient in foods, but [it] did produce significant number of products,” Qumsiyeh said. Despite Israel blocking rainwater harvesting and severely restricting Palestinian access to a shared aquifer, “like everything else in Gaza, people managed and survived and resisted and did agriculture,” he said.

The agricultural sector contributed to Palestinians’ food and economic security. Of Gaza’s 365 square kilometers of land, roughly 32% of it was used to grow food, mostly on small-scale family farms. Tree crops covered 23% of the Gaza Strip. Exports like olive oil, strawberries, and flowers found purchase in high-income markets across Europe, Qumsiyeh said, as well as the West Bank. And in years with enhanced drought or poor harvests, selling high-quality, shelf-stable products like olive oil could provide for a family in need.

Now, after 2 years of war, most agricultural land has been destroyed. Pinning down where, when, and how that happened is necessary for recovery and remediation, explained Najat Aoun Saliba, an atmospheric chemist at the American University of Beirut in Lebanon.

Saliba, also a member of Lebanon’s parliament, has studied the impacts of war-related pollutants on public and environmental health in Lebanon but was not involved with the new research about Gaza. Israel has used many of the same types of munitions to attack Gaza as it has to attack southern Lebanon, and Saliba suspects that the long-term environmental damage in Gaza might mirror what she has seen in her own country.

“The long-term environmental impacts of munitions include persistent heavy-metal and explosive residue contamination; persistent phosphorous materials that were used heavily at least in southern Lebanon; [unconfirmed] presence of radioactive materials…especially in the bunker buster ammunitions; reduced soil fertility and microbial imbalance; groundwater pollution and loss of irrigation capacity; and heightened erosion and desertification risks,” Saliba said.

Tracking the Destruction

This animation tracks the damage to tree cropland in Gaza from October 2023 to October 2025. Undamaged croplands are colored green and turn purple in the month in which they sustained damage. Credit: Maps: He Yin, with data from Yin et al., 2025, https://doi.org/10.1016/j.srs.2025.100199, CC BY-NC-ND 4.0; Animation: Mary Heinrichs/AGU

The United Nations Satellite Center (UNOSAT) has been remotely monitoring the destruction of buildings, land, and infrastructure in Gaza since the start of the war. Their monthly agricultural damage assessments have shown widespread damage, but their methodology has some limitations when applied to a region as small as Gaza, Yin explained.

UNOSAT relies on data from the Sentinel-2 satellite, which has a nominal spatial resolution of 10 meters; that might not be the best choice for monitoring Gaza’s small-scale and sometimes fragmented agricultural land. What’s more, in such a rapidly evolving conflict, a monthly observing cadence is not able to track the progression of damage or trace the destruction of individual plots to specific military actions.

To overcome those challenges, Yin and his team turned to two commercial satellite data sources with higher spatial resolutions and daily monitoring: PlanetScope, with a nominal 3-meter resolution, and SkySat, with a nominal 50-centimeter resolution. The higher-resolution datasets allowed the team to create detailed land use maps of Gaza before October 2023, including tree cropland and greenhouses, and then track partial damage or total destruction of those plots every day since Israel’s war commenced.
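The core change-detection logic behind such damage mapping can be sketched simply: compare a pre-war vegetation-index baseline against a later observation, pixel by pixel, within the mapped tree-crop area. This is a hypothetical toy version with invented NDVI grids and an arbitrary threshold; the published study relies on trained classification of high-resolution PlanetScope and SkySat imagery, not this simple rule.

```python
# Toy per-pixel damage flagging by vegetation-index decline.
# A tree-crop pixel whose NDVI falls below a fraction of its pre-war
# baseline is flagged as damaged. All values are invented.

PRE_WAR_NDVI = [   # baseline composite (invented)
    [0.72, 0.68, 0.15],
    [0.70, 0.65, 0.12],
]
CURRENT_NDVI = [   # later observation (invented)
    [0.30, 0.66, 0.14],
    [0.25, 0.20, 0.11],
]
TREE_CROP_MASK = [ # 1 = pixel mapped as tree cropland before the war
    [1, 1, 0],
    [1, 1, 0],
]

def flag_damage(pre, cur, mask, drop_frac=0.5):
    """Flag masked pixels whose NDVI fell below drop_frac * baseline."""
    return [
        [1 if m and c < drop_frac * p else 0
         for p, c, m in zip(prow, crow, mrow)]
        for prow, crow, mrow in zip(pre, cur, mask)
    ]

damage = flag_damage(PRE_WAR_NDVI, CURRENT_NDVI, TREE_CROP_MASK)
print(damage)  # [[1, 0, 0], [1, 1, 0]]
```

Running the rule daily, as the higher-revisit commercial imagery allows, is what makes it possible to date the damage to individual plots rather than only observing monthly totals.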

Most of Gaza’s greenhouses have been damaged or destroyed during 2 years of war. The maps show how damage to greenhouses began in the north and progressed south. Undamaged greenhouses are marked with white circles, and damaged or destroyed greenhouses are marked with red circles. Greenhouses damaged or destroyed between October 2023 and October 2024 are on the left. Greenhouses damaged or destroyed between October 2024 and October 2025 are on the right. Credit: He Yin, with data from Yin et al., 2025, https://doi.org/10.1016/j.srs.2025.100199, CC BY-NC-ND 4.0

The researchers compared their damage maps to UNOSAT’s to validate their technique. They further validated their remote sensing results by consulting with Yaser Al A’wdat from the Palestine Ministry of Agriculture in Gaza and with other individuals on the ground, who checked whether certain areas flagged in the analysis as “destroyed” truly were. Those consultants in Gaza declined to be interviewed for this story out of concern for their safety.

The initial analysis covered the destruction of agricultural land through the first year of the war and was published in Science of Remote Sensing in February. The researchers found that 64%–70% of tree crop fields and 58% of greenhouses had been damaged by the end of September 2024, after almost 1 year of war. By the end of 2023, all greenhouses in the North Gaza and Gaza City governorates had been damaged, as well as nearly all greenhouses in the Gazan governorate of Deir al-Balah. The analysis showed how damage to both cropland and greenhouses progressed southward toward Khan Yunis and Rafah as Israel’s military campaign shifted focus.

The team continued its analysis through the second full year of the war, and those results, which will be presented on 18 December at AGU’s Annual Meeting 2025 in New Orleans, reveal the near-total destruction of tree cropland (98%) and increasing damage to greenhouses (90% damaged and 75% destroyed). Greenhouses in Rafah, in particular, suffered extensive and widespread damage as Israel’s military operation advanced south.

Remediate, Replant, Restore

Although a shaky (and repeatedly violated) ceasefire went into effect on 10 October, restoration and remediation will take time and very careful planning.

“Research like this can play a critical role in restoration and recovery efforts in Gaza by providing an evidence-based foundation for agricultural rehabilitation.”

“Research like this can play a critical role in restoration and recovery efforts in Gaza by providing an evidence-based foundation for agricultural rehabilitation,” Saliba said.

“This type of spatial assessment allows policymakers and humanitarian agencies to plan sequenced restoration—starting with fast-growing crops before replanting long-term trees like olives and citrus—and to design targeted compensation and replanting programs based on verified damage maps,” she added.

Future analyses seeking to map the scope of agricultural damage, as well as efforts to remediate that damage, should incorporate the food-energy-water nexus, Saliba said. “Because no agriculture restoration could happen without providing water.”

As focus turns toward restoration, the first thing that is needed are data, Qumsiyeh said. “For example, we don’t know the extent of soil contamination in Gaza and what residues of war are there, whether depleted uranium or white phosphorus or heavy metals and other things,” he said. “We don’t even have access to get the soil samples out of Gaza.”

There is also increased concern about aquifer contamination. When Israel flooded tunnels in Gaza with seawater, some of that water undoubtedly seeped through the ground into the aquifer that supplies most of the territory. In addition, Gaza has now seen three rainy seasons since the start of the war.

“All of that water from the rain will wash these pollutants from the soil down into the water aquifer,” Qumsiyeh said. “Again, we don’t have the data because we don’t have samples of water from the water aquifer to be able to test.”

If there were stronger international laws related to ecocide, Qumsiyeh said, data like that could hold those responsible to account.

At present, Israeli troops have partially retreated from the territory, but the area they still occupy beyond the so-called Yellow Line comprises much of Gaza’s agricultural land and is inaccessible to Palestinian farmers. According to the United Nations Office for the Coordination of Humanitarian Affairs, Israel continues to block the entry of agricultural inputs like seed kits, organic fertilizers, and materials needed to rebuild greenhouses.

“Agriculture is part of life. We are part of the land,” Yin said. Ultimately, “who has the power to rebuild Gaza really matters.”

—Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writer

Citation: Cartier, K. M. S. (2025), 98% of Gaza’s tree cropland destroyed by Israel, Eos, 106, https://doi.org/10.1029/2025EO250447. Published on 3 December 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Hydrothermal Circulation and Its Impact on the Earth System

Wed, 12/03/2025 - 14:00
Editors’ Vox is a blog from AGU’s Publications Department.

In May 2023, a group of scientists gathered in Agros, Cyprus, for an AGU Chapman Conference, “Hydrothermal Circulation and Seawater Chemistry: What’s the Chicken and What’s the Egg?” They discussed the role of hydrothermal fluxes in regulating ocean biogeochemistry and the Earth system. To share key findings from that conference—and other groundbreaking research on hydrothermal systems—the AGU book Hydrothermal Circulation and Seawater Chemistry: Links and Feedbacks came to life.

The latest volume in AGU’s Geophysical Monograph Series, this book explores on- and off-axis hydrothermal systems, boundary conditions such as climate and sedimentation history, and approaches for tracking oceanic processes. We asked the book’s Volume Editors about the latest methods and techniques, practical applications, and the future of the field of study.

In simple terms, what is hydrothermal circulation as it relates to seawater chemistry?

Hydrothermal circulation in this context is the flow of seawater through ocean crust.

Hydrothermal circulation in this context is the flow of seawater through ocean crust. It occurs at high temperatures at mid-ocean ridges and at lower temperatures across much of the seafloor.

Along mid-ocean ridges, high-temperature (~400°C) reactions between seawater and crust turn the circulating seawater into hydrothermal fluids that become enriched in many elements such as potassium, calcium, and iron. Because these hot fluids are much less dense than cold seawater, the fluid flows rapidly upward and vents at the seafloor. Cooling leads to precipitation of dissolved constituents, forming chimney structures and the particles carried by the fluid (the well-known “black smoke”). Other dissolved ions stay in solution and are added to the ocean, changing the composition of the seawater.

Hydrothermal circulation farther from mid-ocean ridges typically heats seawater by only 5–10°C and changes the chemical composition of the fluid much less than that of the hotter fluids. However, orders of magnitude more seawater circulates through low-temperature systems than high-temperature systems, making them equally important to seawater chemistry.

How do various boundary conditions impact hydrothermal processes in the ocean, and why is it important to study them?

The amount of fluid that flows through mid-ocean ridge hydrothermal systems depends on the geodynamic boundary conditions that control characteristics such as the average global rate of accretion of new oceanic crust and its thickness. These boundary conditions also control the types of rocks that make up the ocean crust, which impacts fluid–rock reactions and hence the composition of the fluid vented into the ocean. The composition and temperature of the deep seawater that circulates into the crust are also important boundary conditions controlling fluid–rock reactions and have both changed substantially over Earth’s history. For example, changes in the redox state of seawater over Earth’s history changed fluid–rock reactions and, in turn, hydrothermal fluxes into the ocean.

What are the latest methods and techniques for studying hydrothermal circulation discussed in the book?

The book covers the latest methods and techniques for advancing various interdisciplinary fields focused on hydrothermal vents.

The book covers the latest methods and techniques for advancing various interdisciplinary fields focused on hydrothermal vents. For example, deploying novel instrumentation in the harsh, high-pressure, high-temperature conditions of seafloor hydrothermal systems is improving studies of these environments. Attaching instruments, such as Raman spectrometers and mass spectrometers, to seafloor cabled observatories will provide new insights into the dynamics of hydrothermal systems. Advancements in instrumentation will also significantly benefit the study of diffuse flow along mid-ocean ridges, where fluids vent at tens of degrees Celsius rather than the much higher temperatures characteristic of “black smokers.” Diffuse venting circulates much more water, and probably more heat, than black smoker venting and yet has received only a fraction of the study. Scientists studying low-temperature seafloor hydrothermal systems distributed across the seafloor are starting to borrow methods used for studying continental weathering systems, which should lead to rapid progress in understanding these systems.

What practical applications do the studies presented in the book have for Earth system science?

Understanding the Earth system requires a quantitative understanding of the controls on global biogeochemical cycles. Currently, most models of global biogeochemical cycles either ignore hydrothermal systems or assume they do not change over time. However, there is now copious evidence that the fluxes of elements into and out of seawater due to hydrothermal systems are dependent on environmental conditions (e.g., climate, seawater chemistry) and hence do change over time. Furthermore, there can be feedbacks between the environment and the hydrothermal fluxes. This book will help people modeling the Earth system to better incorporate hydrothermal systems into biogeochemical models. In turn, the results of such models will become more robust.

Octopuses brooding their eggs in warm water venting from the ocean crust near Davidson Seamount underwater volcano at a depth of 3,200 meters. Credit: Chad King / OET, NOAA

Where do you see the study of hydrothermal systems heading in the next 10 years?

The book features many exciting directions that the study of hydrothermal systems will hopefully take in the next decade. For example, there is a pressing need for more intense study of seafloor weathering until we understand it as well as we understand continental weathering. Expanding the availability of novel instruments that can be deployed at hydrothermal vents at mid-ocean ridges will advance the understanding of hydrothermal systems (e.g., during volcanic eruptions, which are currently poorly understood). Vast potential also exists to better incorporate hydrothermal processes into Earth system models. Finally, a growth area for future research will be the role of hydrothermal systems on exoplanets and in the search for other habitable bodies. For example, NASA’s ongoing Europa Clipper mission to Jupiter’s moon Europa will hopefully enrich our understanding of the role of hydrothermal activity in controlling the habitability of this body.

How is the book organized?

After an introductory chapter, the next six chapters address hydrothermal processes at mid-ocean ridges. They consider both black smoker and diffuse flow systems, as well as the impact of these systems on the water column above mid-ocean ridges. The methods used to study hydrothermal systems, both in the lab and field, are also covered. An example of the fingerprint of changes in axial hydrothermal processes through changing seawater chemistry is discussed next. This is followed by three chapters about low-temperature hydrothermal systems that discuss how much more we have to discover about these systems. The last four chapters address the role of oceanic hydrothermal systems on planetary scale processes, both on Earth and other rocky bodies. They discuss how global-scale models work, how hydrothermal processes can be incorporated in such models, and how hydrothermal systems might work on other rocky bodies.

Who is the intended audience of the book? 

The audience for the book is intended to be broad—anyone interested in oceanic hydrothermal systems and/or ocean chemistry. People who are new to the field can use the book to get up to speed on ongoing interdisciplinary research in this area. This includes new graduate students or experienced researchers who have not previously considered the role of oceanic hydrothermal systems in ocean chemistry. The book can also act as a starting point for researchers who develop global biogeochemical cycle models and who want to incorporate hydrothermal fluxes into these models. Finally, the book will appeal to people interested in planetary habitability and the role that hydrothermal systems may play in making other rocky bodies habitable, or the role hydrothermal systems may have played in nurturing early life on Earth.

Hydrothermal Circulation and Seawater Chemistry: Links and Feedbacks, 2025. ISBN: 978-1-394-22915-4. List price: $225 (hardcover), $180 (e-book)

Chapter 1 is freely available. Visit the book’s page on Wiley.com and click on “Read an Excerpt” below the cover image.

—Laurence A. Coogan (lacoogan@uvic.ca; 0000-0001-7289-5120), University of Victoria, Canada; Alexandra V. Turchyn (0000-0002-9298-2173), University of Cambridge, United Kingdom; Ann G. Dunlea (0000-0003-1251-1441), Woods Hole Oceanographic Institution, United States; and Wolfgang Bach (0000-0002-3099-7142), University of Bremen, Germany

Editor’s Note: It is the policy of AGU Publications to invite the authors or editors of newly published books to write a summary for Eos Editors’ Vox.

Citation: Coogan, L. A., A. V. Turchyn, A. G. Dunlea, and W. Bach (2025), Hydrothermal circulation and its impact on the Earth system, Eos, 106, https://doi.org/10.1029/2025EO255036. Published on 3 December 2025. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Using Lightning-Induced Precipitation to Estimate Electron Belt Decay Times

Wed, 12/03/2025 - 14:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Journal of Geophysical Research: Space Physics

Earth is surrounded by rings of energetic particles called radiation belts. The inner belt can sometimes be populated by megaelectron volt (MeV) energetic electrons during particularly strong solar storms. When scattered by electromagnetic waves, these energetic particles can rain into the atmosphere.

Feinland and Blum [2025] show that periodic signatures of relativistic electron rain observed by satellites can be used to better predict when and where such precipitation might happen in the future. The authors find that these high-energy electrons usually came into the inner belt quickly after solar storms and gradually rained out over the course of a few weeks. During particularly quiet solar conditions, there were no detectable high-energy electrons in this region at all. These results are important to incorporate into models of the radiation belts, to better characterize and predict the high radiation environment in near-Earth space.

Citation: Feinland, M. A., & Blum, L. W. (2025). Lightning-induced precipitation as a proxy for inner belt MeV electron decay times. Journal of Geophysical Research: Space Physics, 130, e2025JA034258. https://doi.org/10.1029/2025JA034258

—Viviane Pierrard, Editor, JGR: Space Physics

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Tracing Fire, Rain, and Herbivores in the Serengeti

Tue, 12/02/2025 - 14:23

The Serengeti is one of the most diverse ecosystems on Earth. The massive savanna stretches more than 30,000 square kilometers across Tanzania and southwestern Kenya, and conservation sites, including national parks and a United Nations Educational, Scientific and Cultural Organization World Heritage Site, mark its significance as one of the world’s last intact large-animal migration corridors.

Life in the Serengeti is shaped by interactions between herbivores, vegetation, fire, and rain. Every year, millions of wildebeest, zebras, and gazelles hoof it across the savanna for their great migration, an 800-kilometer loop through the Serengeti and Kenya’s adjacent Maasai Mara game reserve. The iconic migration is dictated by rainfall, with herbivores following the green grass brought by the rainy season.

New research documenting the far-reaching impact of increasing rainfall on the Serengeti will be presented on Monday, 15 December, at AGU’s Annual Meeting. Megan Donaldson, a postdoctoral researcher at Duke University’s Nicholas School of the Environment, and her colleagues will share how vegetation is consumed by both grazing herbivores and fire in the Serengeti and how that consumption is reflected in the landscape. Studies like Donaldson’s are emerging as an important area of research for scientists assessing how climate change will affect the closely intertwined biotic and abiotic components in tropical grassland ecosystems around the world.

“For now, we’re just looking at how those interactions are feeding back to each other, how increased rainfall is affecting the dynamics between vegetation, herbivores, and fire,” said Donaldson.

Rainfall, Fuel, and Food

Rainfall controls how much grass grows in the Serengeti: When rainfall is intense, grasses grow quickly.

That growth is consumed in two primary ways: by fire as fuel and by herbivores as food.

Fire consumes excess vegetation, which is why rainfall in a previous rainy season can be a reliable predictor of how much land will burn in the Serengeti in the near future.

More than 30 species of large herbivores consume vegetation in the Serengeti, each with its own ecological niche.

“Some are constantly on the move, others are residents, some are grazers, some browsers, others are mixed feeders, and they range in size from the minuscule dik-dik to the massive elephant. They all thrive together by seeking out seasonal sources of water and feeding differentially on the rich diversity and abundance of grasses, shrubs, and trees,” said Monica Bond, a wildlife biologist at the University of Zurich who was not part of the recent study.

Herbivores consume vegetation at a much slower rate than fire does. Under normal conditions, grazing herbivores keep grass levels low enough to reduce the spread of fire across large areas. But it can take several seasons for animal populations to adjust to differences in food availability, so when increased rainfall causes explosive growth in savanna vegetation, herbivores cannot keep pace, and their grazing no longer limits the fuel available for wildfires.

In the new research, Donaldson and her colleagues examined weather station and camera trap data from sites inside Serengeti National Park in Tanzania.

In particular, the researchers tracked how recent shifts in the Indian Ocean Dipole caused rainfall totals to increase across the Serengeti. The Indian Ocean Dipole is a weather pattern similar to the El Niño–Southern Oscillation phenomenon that spawns El Niño or La Niña conditions in the Pacific. It alters wind, rain, and temperature conditions in East Africa. Between 2019 and 2024, mean rainfall totals in the Serengeti were 268 millimeters higher than in the period from 1999 through 2003.

The researchers found that within the park, rainfall was not uniform. “There’s a rainfall gradient. You get low rainfall in the south and high rainfall in the north,” said Donaldson.

In the northern Serengeti, surplus rainfall supported such rapid growth of grass that herbivore consumption had little influence on reducing the amount of fuel available for wildfires.

In the typically drier south, however, herbivores were able to keep grasses short enough to slow the buildup of fuel.

But during periods of increased rainfall, Donaldson explained, “we see that those feedbacks are quicker. You’re getting fuel buildup much quicker, and you need all the [animal] migrants to come through that system to have any effect on fire.”

Untangling a Complex Ecosystem

Between 2019 and 2024, fire size in the Serengeti increased, but the increase was more complex than “more fuel feeding more fires.”

“The number of fires necessarily isn’t changing; it seems to be staying stable,” explained Donaldson. “We’re not seeing this very strong correlation between increased rainfall and increased fire. What is driving that? Why are we seeing that? And what are herbivores doing to that? Those are the things we’re trying to tease apart right now.”

Future work from Donaldson and her colleagues will further researchers’ understanding of how the Serengeti’s four major players—herbivores, biomass, fire, and rainfall—connect.

“Because the Serengeti is one of the few intact biologically functioning ecosystems left on the planet, it makes for a perfect natural laboratory to study complex ecological interactions and how these are affected by climate change,” said Bond. “This research has important implications for fire management and thus for wildlife conservation in this ecologically critical landscape. It is incredible the research that they have done here in fostering understanding of how this system works.”

—Rebecca Owen (@beccapox.bsky.social), Science Writer

Citation: Owen, R. (2025), Tracing fire, rain, and herbivores in the Serengeti, Eos, 106, https://doi.org/10.1029/2025EO250444. Published on 2 December 2025.

Planet-Eating Stars Hint at Earth’s Ultimate Fate

Tue, 12/02/2025 - 14:23

Our Sun is about halfway through its life, which means Earth is as well. After a star exhausts its hydrogen nuclear fuel, its diameter expands more than a hundredfold, engulfing any unlucky planets in close orbits. That day is at least 5 billion years off for our solar system, but scientists have spotted a possible preview of our world’s fate.

Using data from the TESS (Transiting Exoplanet Survey Satellite) observatory, astronomers Edward Bryant of the University of Warwick and Vincent Van Eylen of University College London compared systems with stars in the main sequence of their lifetimes—fusing hydrogen, like the Sun—with post–main sequence stars closer to the end of their lifetimes, both with and without planets.

“We saw that these planets are getting rarer [as stars age],” Bryant said. In other words, planets are disappearing as their host stars grow old. The comparison between planetary systems with younger and older stars makes it clear that the discrepancy does not stem from the fact that the planets weren’t there in the first place: Elderly stars just get hungry.

“We’re fairly confident that it’s not due to a formation effect,” Bryant explained, “because we don’t see large differences in the mass and [chemical composition] of these stars versus the main sequence star populations.”

Complete engulfment isn’t the only way giant stars can obliterate planets. As they grow, giant stars also exert increasingly strong tidal forces on close-in planets, causing their orbits to decay, stripping their atmospheres, and even tearing them apart completely. The orbital decay is potentially measurable, and it is the effect Bryant and Van Eylen considered in their model of how planets die.

“We’re looking at how common planets are around different types of stars, with number of planets per star,” Bryant said. Bryant and Van Eylen identified 456,941 post–main sequence stars in TESS data and, from those, found 130 planets and planet candidates with close-in orbits. “The fraction [of stars with planets] gets significantly lower for all stars and shorter-period planets, which is very much in line with the predictions from the theory that tidal decay becomes very strong as these stars evolved.”

Astronomers use TESS to find exoplanets by looking for the slight dimming of a star’s light as a planet passes in front of it, a miniature eclipse known as a transit. As with any exoplanet detection method, transits have biases: They are best suited to finding large, Jupiter-sized planets in relatively small orbits lasting less than half an Earth year, sometimes much less. So these planetary systems aren’t much like ours in that respect. Studying planets orbiting post–main sequence stars poses additional challenges.

“If you have the same size planet but a larger star, you have a smaller transit,” Bryant said. “That makes it harder to find these systems because the signals are much shallower.”
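
The geometry behind that remark is simple: the fraction of starlight a planet blocks is roughly the squared ratio of the planet’s radius to the star’s. A minimal sketch (illustrative values only, not the study’s analysis code) shows why a swollen post–main sequence star hides the same planet so effectively:

```python
# The fraction of starlight a transiting planet blocks is roughly
# (R_planet / R_star)**2, ignoring limb darkening. Illustrative values only.

def transit_depth(r_planet, r_star):
    """Fractional dip in brightness during a transit."""
    return (r_planet / r_star) ** 2

R_JUP = 0.1  # Jupiter's radius, roughly, in units of the Sun's radius

# A hot Jupiter crossing a Sun-like main sequence star (1 solar radius)...
depth_dwarf = transit_depth(R_JUP, 1.0)
# ...versus the same planet crossing a star swollen to 3 solar radii.
depth_giant = transit_depth(R_JUP, 3.0)

print(f"main sequence host: {depth_dwarf:.2%} dip")
print(f"swollen host:       {depth_giant:.3%} dip")
```

Tripling the stellar radius makes the transit nine times shallower, which is why these signals are so much harder to pick out of TESS light curves.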

However, though the stars in the sample data have a much greater surface area, they are comparable in mass to the Sun, and that’s what matters most, the researchers said. A star with the same mass as the Sun will go through the same life stages and die the same way, and that similarity is what helps reveal our solar system’s future.

“The processes that take place once the star evolves [past main sequence] can tell us about the interaction between planets and host star,” said Sabine Reffert, an astronomer at Universität Heidelberg who was not involved in the study. “We had never seen this kind of difference in planet occurrence rates between [main sequence] and giants before because we did not have enough planets to statistically see this difference before. It’s a very promising approach.”

Planets: Part of a Balanced Stellar Breakfast

Exoplanet science is one of astronomy’s biggest successes in the modern era: Since the first exoplanet discovery 30 years ago, astronomers have confirmed more than 6,000 planets and identified many more candidates for follow-up observations. At the same time, the work can be challenging when it comes to planets orbiting post–main sequence stars.

One tricky aspect of this work is related to the age of the stars, which formed billions of years before our Sun. Older stars have a lower abundance of chemical elements heavier than helium, a measure astronomers call “metallicity.” Observations have found a correlation between high metallicity and exoplanet abundance.

“A small difference in metallicity…could potentially double the occurrence rate,” Reffert said, stressing that the general conclusions from the article would hold but the details would need to be refined with better metallicity data.

Future observations to measure metallicity using spectra, along with star and planet mass, would improve the model. In addition, the European Space Agency’s Plato Mission, slated to launch in December 2026, will add more sensitive data to the TESS observations.

Earth’s fiery fate is a long way in the future, but researchers have made a big step toward understanding how dying stars might eat their planets. With more TESS and Plato data, we might even glimpse the minute orbital changes that indicate a planet spiraling to its doom—a grim end for that world but a wonderful discovery for our understanding of the coevolution of planets and their host stars.

—Matthew R. Francis (@BowlerHatScience.org), Science Writer

Citation: Francis, M. R. (2025), Planet-eating stars hint at Earth’s ultimate fate, Eos, 106, https://doi.org/10.1029/2025EO250448. Published on 2 December 2025.

Heatwaves Increase Home Births in India

Tue, 12/02/2025 - 14:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: GeoHealth

Heatwaves can disrupt many parts of daily life, including access to essential healthcare services. Dey et al. [2025] evaluate how heatwaves are related to where women in India give birth.

The authors analyze data from more than 200,000 births during 2019–2021 and find that during heatwaves, women were more likely to deliver at home instead of in a health facility. The association was stronger in warmer regions, in regions without government programs supporting facility-based births, and among non-Hindu populations. The study indicates that extreme heat may create barriers to healthcare services (e.g., difficulty traveling or strained health services), making it challenging to reach a hospital in time for delivery. This is a major concern because giving birth at home without a skilled medical attendant may lead to higher health risks for both the mother and the newborn.

As the frequency and intensity of heatwaves increase under climate change, these findings emphasize the urgent need for early-warning systems and stronger healthcare support to protect vulnerable mothers and newborns.

Citation: Dey, A. K., Dimitrova, A., Raj, A., & Benmarhnia, T. (2025). Heatwaves and home births: Understanding the impact of extreme heat on place of delivery in India. GeoHealth, 9, e2025GH001540. https://doi.org/10.1029/2025GH001540

—Lingzhi Chu, Associate Editor, GeoHealth

Climate Variations in Tropical Oceans Primarily Drive Extreme Events

Mon, 12/01/2025 - 20:21
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: AGU Advances

Using data from the GRACE and GRACE-FO satellite missions, Rateb et al. [2025] monitored global changes in terrestrial water storage to study how hydrological extremes—floods and droughts—have developed over the past two decades. Their analysis indicates that these extremes are mainly driven by climate variability in tropical oceans, with both interannual and multi-year patterns playing a significant role.

However, the approximately 22-year satellite record is still too short to fully identify long-term drivers, which limits the ability to determine whether global extremes are increasing or decreasing. To fill data gaps in certain months, the authors use non-parametric probabilistic methods to reconstruct storage anomalies. The reconstructed data closely matched independent datasets, confirming the reliability of their approach. Overall, the study highlights the need to extend satellite observations to capture multi-decadal climate variability and better distinguish natural fluctuations from human-induced changes.

Citation: Rateb, A., Scanlon, B. R., Pokhrel, Y., & Sun, A. (2025). Dynamics and couplings of terrestrial water storage extremes from GRACE and GRACE-FO missions during 2002–2024. AGU Advances, 6, e2025AV001684. https://doi.org/10.1029/2025AV001684

—Tissa Illangasekare, Editor, AGU Advances

How Can We Tell If Climate-Smart Agriculture Stores Carbon?

Mon, 12/01/2025 - 14:20

Since the first agricultural revolution, circa 10,000 BCE, humanity has adapted its farming practices to meet climatic variation. The genesis of early farming is even thought to have resulted from a shift in seasonal conditions that favored regular planting and harvesting intervals after the last ice age.

In the modern era, the necessity to adapt has led to expansive land use, fertilization, irrigation, and other agricultural routines—powered primarily by fossil fuel combustion and freshwater extraction—to suit local environmental conditions and meet the demands of growing populations. These practices have been a boon to food supplies, but they have also contributed to many of today’s climatic and environmental challenges.

Recognition of global crises with respect to climate change and biodiversity has motivated landmark international agreements such as the Paris Agreement and the Global Biodiversity Framework. The Paris Agreement legally binds participating nations to implement land use methods that mitigate emissions and actively remove carbon from the atmosphere.

One such set of modified land management practices, known collectively as climate-smart agriculture [U.S. Department of Agriculture, 2025], is lauded as a pragmatic, low-barrier pathway to manage climate change through nature-based atmospheric carbon removal and avoided emissions (related to both land use and livestock). However, these practices have primarily been studied in small, controlled experiments, not at the extent needed to verify their effectiveness—and help motivate their adoption—on a large scale.

Recently, soil carbon experts explored the utility of applying causal approaches to quantify soil carbon accrual and avoided emissions from large-scale land management interventions and to address concerns and uncertainties that are slowing their uptake [Bradford et al., 2025a]. Such approaches have long been applied in other contexts to measure and verify treatment efficacy. In particular, methods in medical science for studying vaccine efficacy broadly offer important insights for assessing climate-smart applications.

Accounting for Carbon

Climate-smart agriculture includes a variety of management practices such as cover cropping (planting noncash crops on otherwise fallow land), reducing or eliminating soil tilling, and diversifying crops. These applications can offer various cobenefits, including increased yields; greater soil water holding capacity; improved soil microbiomes; reduced erosion and runoff; enhanced control of pests, disease, and weeds; and greater soil nutrient availability that reduces the need for chemical fertilizers [U.S. Department of Agriculture, 2025].

Such benefits are linked to the idea that the applications either avoid losses or improve gains in soil organic matter. But can we measure how much they really help?

To account for carbon lost, gained, or stored in agricultural land, soil organic matter is typically measured by elemental analysis of soil samples in a laboratory. Amounts of carbon stored are determined by tracking changes in soil carbon stocks over time. Comparing results following the application of climate-smart agriculture approaches with those following business-as-usual practices provides a measure of the approaches’ effectiveness for carbon management.

Cover crop grows amid rows of corn stubble in a farm field in Deerfield, Mass. Credit: Lance Cheung, U.S. Department of Agriculture/Flickr, PDM 1.0

Assuming this carbon accounting reveals increased soil carbon stocks, agricultural projects implementing these approaches can be considered natural climate solutions, which are valued in the voluntary carbon market for their carbon offset and removal power. For example, one project developer selling carbon credits since 2022 recently reported that their efforts have so far stored nearly 1 million tons of soil carbon in U.S. farmlands. Further, across farms in four U.S. states, the combined use of three climate-smart agriculture techniques—no tillage, cover cropping, and crop rotation of corn and soybeans—is claimed to have resulted in a shift to carbon gains from soil carbon loss using conventional practices [U.S. Department of Agriculture, 2025].

Limited Evidence, Low Adoption

Despite claims about the successes of climate-smart agricultural practices, adoption remains low. Although no-till and reduced-till methods have been implemented on more than half of all U.S. soybean, corn, and sorghum fields, cover cropping is used across less than 5% of the country’s agricultural lands.

A multitude of social, cultural, and economic factors—along with questions about the viability for meaningful climate change mitigation—contribute to the limited adoption of some climate-smart practices [Prokopy et al., 2019; Eagle et al., 2022]. However, if robust data showing that they lead to widespread yield increases, cost reductions, and climate benefits were available, they might be more widely adopted by growers.

Presently, most evidence supporting the benefits of climate-smart agriculture for carbon management relies on a limited set of small-plot experimental trials and projected outcomes derived from applying process-based biogeochemical models. Public and private investment in studies aimed at quantifying the practices’ efficacy through measurement, monitoring, reporting, and verification (MMRV) at scales of real-world commercial agriculture has been inhibited by the assumption that soils vary too much to measure treatment effects feasibly [Poeplau et al., 2022].

This assumption is driven by the fact that regional and national soil carbon inventories reveal substantial variation in soil carbon contents at scales within individual fields (meters to tens of meters) and between fields (kilometers to tens of kilometers)—variation that is thought to preclude detections of how agricultural practices affect carbon stocks [Bradford et al., 2023]. Yet this variability can be overcome by scaling up field-level data to multifield scales focused on understanding the average effect of interventions.

What could this scaling look like, and what cues from other fields can we use to make progress?

Adapting Methods from Medical Research

Causal approaches are used regularly in health sciences, including in vaccine trials. In later-stage trials, vaccine efficacy is quantified under conditions approximating real-world delivery by measuring the differences in the health responses of people who receive the vaccine and those who do not.

Importantly, such real-world trials occur only after there is enough experimental evidence—typically from controlled laboratory experiments and small-scale clinical trials—of underlying mechanisms indicating the likelihood of broad, meaningful positive effects and minimal negative effects of the vaccine. Public health scientists use these large-scale clinical intervention-style experiments (or observational studies) to account for factors such as varied exposure risks and preexisting conditions that can modify real-world vaccine efficacy compared with efficacy under controlled conditions.

Earth scientists can take direction from such trials. Adapting this experiment structure for soil science research would allow project developers, scientists, land managers, and policymakers to assess the ability of climate-smart agricultural practices to store carbon and reduce emissions across real fields and farms. It would also better inform meaningful climate action policy initiatives.

A base of highly controlled small-scale experiments—typically conducted in plots operated by researchers—already suggests carbon benefits from improved agricultural practices. What is missing are large-scale intervention studies sampling soil carbon in fields that receive a climate-smart treatment (e.g., no till or reduced till, crop rotation, cover cropping) versus fields that are conventionally managed [Bradford et al., 2025b].

Such studies must be undertaken with appropriate design principles to confirm whether treatment interventions cause measured carbon gains and to focus on the external validity of the experiments. In the case of climate-smart agriculture, “external validity” refers to the extent to which a study’s results are applicable to other fields receiving similar management interventions. Achieving external validity necessitates sustained observation of realistic intervention behaviors on working commercial farms and on well-defined and preserved control fields, repetition of experiments at a variety of sites, and quantification of average outcomes from interventions across fields rather than for individual fields.

New research suggests that empirical measure-and-remeasure projects are scientifically feasible at regional agricultural scales using current best practices for soil sampling and carbon analysis [Potash et al., 2025; Bradford et al., 2023]. Potash et al. [2025], for example, simulated a randomized-controlled trial for intervention projects across hundreds to thousands of fields, incorporating known variations in soil carbon stocks and measurement errors. The results showed that such projects can reliably estimate the effects of the treatments applied.
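
The logic of such a simulation can be sketched in a few lines. This toy version (all parameter values are invented for illustration, not taken from Potash et al. [2025]) shows why field-to-field variability need not defeat detection: in a measure-and-remeasure design, each field’s baseline cancels out of its own stock change, and averaging over many fields beats down measurement error:

```python
# Toy measure-and-remeasure trial: large between-field variability in soil
# carbon stocks plus measurement error, many fields per arm, small true
# treatment effect. All numbers are illustrative assumptions.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.5  # Mg C/ha gained from the intervention (assumed)
N_FIELDS = 2000    # fields enrolled per arm

def measured_change(true_effect):
    """Measured (final - baseline) carbon stock for one field."""
    baseline = random.gauss(50.0, 10.0)  # fields differ greatly from each other
    final = baseline + true_effect
    # Each soil sampling campaign carries its own measurement error...
    noisy_baseline = baseline + random.gauss(0.0, 2.0)
    noisy_final = final + random.gauss(0.0, 2.0)
    # ...but the field's baseline cancels out of its own stock change.
    return noisy_final - noisy_baseline

treated = [measured_change(TRUE_EFFECT) for _ in range(N_FIELDS)]
control = [measured_change(0.0) for _ in range(N_FIELDS)]

# Any single field is hopelessly noisy; the average across fields is not.
ate = statistics.mean(treated) - statistics.mean(control)
print(f"estimated average treatment effect: {ate:.2f} Mg C/ha")
```

With enough fields, the estimated average treatment effect converges on the true effect even though no individual field’s change is trustworthy on its own.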

Using causal empirical approaches can complement, rather than compete with, the development of other approaches for MMRV of carbon storage and emissions. Approaches using satellite and airborne remote sensing may, for example, enable more efficient scaling of climate mitigation projects, albeit only if they are first validated against causal empirical data.

Empirical causal studies at the regional scales of commercial agricultural practices should thus be the gold standard of evidence for evaluating the effectiveness of climate-smart approaches. Data from these experiments will provide a rigorous basis for independent validation of established and emerging digital- and model-based approaches for soil carbon MMRV. They will also build confidence that adopting climate-smart practices really does result in mitigation of carbon emissions and climate change under real-world conditions.

Acknowledgments

The perspectives presented here were informed by discussions at and outcomes from a workshop convened in October 2024 by researchers at Yale University and the Environmental Defense Fund. Funding support was provided by the Yale Center for Natural Carbon Capture and gifts to the Environmental Defense Fund from King Philanthropies and Arcadia, a charitable fund of Lisbet Rausing and Peter Baldwin.

References

Bradford, M. A., et al. (2023), Testing the feasibility of quantifying change in agricultural soil carbon stocks through empirical sampling, Geoderma, 440, 116719, https://doi.org/10.1016/j.geoderma.2023.116719.

Bradford, M. A., et al. (2025a), Agricultural soil carbon: A call for improved evidence of climate mitigation, Yale Applied Science Synthesis Program and Environmental Defense Fund white paper, Yale Appl. Sci. Synth. Program, New Haven, Conn., https://doi.org/10.31219/osf.io/uk3n2_v1.

Bradford, M. A., et al. (2025b), Upstream data need to prove soil carbon as a climate solution, Nat. Clim. Change, 15, 1,013–1,016, https://doi.org/10.1038/s41558-025-02429-4.

Eagle, A. J., N. Z. Uludere Aragon, and D. R. Gordon (2022), The realizable magnitude of carbon sequestration in global cropland soils: Socioeconomic factors, Environ. Defense Fund, New York, www.edf.org/sites/default/files/2022-12/realizable-magnitude-carbon-sequestration-cropland-soils-socioeconomic-factors.pdf.

Poeplau, C., R. Prietz, and A. Don (2022), Plot-scale variability of organic carbon in temperate agricultural soils—Implications for soil monitoring, J. Plant Nutr. Soil Sci., 185, 403–416, https://doi.org/10.1002/jpln.202100393.

Potash, E., et al. (2025), Measure-and-remeasure as an economically feasible approach to crediting soil organic carbon at scale, Environ. Res. Lett., 20(2), 024025, https://doi.org/10.1088/1748-9326/ada16c.

Prokopy, L. S., et al. (2019), Adoption of agricultural conservation practices in the United States: Evidence from 35 years of quantitative literature, J. Soil Water Conserv., 74(5), 520–534, https://doi.org/10.2489/jswc.74.5.520.

U.S. Department of Agriculture (2025), Documentation of literature, data, and modeling analysis to support the treatment of CSA practices that reduce agricultural soil carbon dioxide emissions and increase carbon storage, Off. of the Chief Econ., Off. of Energy and Environ. Policy, Washington, D.C., www.usda.gov/sites/default/files/documents/USDA_Durability_WhitePaper_01_14.pdf.

Author Information

Savannah Gupton (savannah.gupton@yale.edu), Applied Science Synthesis Program, The Forest School at the Yale School of the Environment, Yale Center for Natural Carbon Capture, Yale University, New Haven, Conn.; Mark Bradford, Alex Polussa, and Sara E. Kuebbing, The Forest School at the Yale School of the Environment, Yale Center for Natural Carbon Capture, Yale University, New Haven, Conn.; and Emily E. Oldfield, Environmental Defense Fund, New Haven, Conn.; also at Yale School of the Environment, Yale University, New Haven, Conn.

Citation: Gupton, S., M. Bradford, A. Polussa, S. E. Kuebbing, and E. E. Oldfield (2025), How can we tell if climate-smart agriculture stores carbon?, Eos, 106, https://doi.org/10.1029/2025EO250446. Published on 1 December 2025. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s).

Fungi, Fertilizer, and Feces Could Help Astronauts Grow Plants on the Moon

Mon, 12/01/2025 - 14:19

Early in the time-twisting, exoplanet-exploring film Interstellar, a scientist on a blight-plagued Earth stares at corn in a greenhouse, watching the crop die. That scene, said Northern Arizona University doctoral candidate Laura Lee, got her thinking about growing food in difficult soils.

The idea propelled Lee, a planetary scientist and astronomer, into a new project, studying how the outer veneer of planetary bodies might be enriched to sustain crops needed for future human settlements. At AGU’s Annual Meeting 2025 on 16 December, Lee will present findings about how various amendments, such as fungi, urea-based fertilizer, and even poop, could help plants like corn grow on the Moon and Mars.

Necessary Ingredients

Plants need 17 specific elements to survive. Carbon, hydrogen, and oxygen combine to form cellulose—the building block of cell walls. Nitrogen helps lush green leaves flourish. Phosphorus stimulates stability-providing roots. Iron, potassium, and other nutrients are also critical for plants to function.

But on the Moon and Mars, the regolith—the loose outer layer of any planetary body—lacks some of these plant essentials. For instance, lunar regolith contains almost no carbon or nitrogen, said Steve Elardo, a planetary geochemist at the University of Florida who was not involved in Lee’s study.

Plus, the phosphorus that is present, at least on the Moon, isn’t in a useful form for plants, said Jess Atkin, a doctoral candidate and space biologist at Texas A&M who studies how microbes can remediate regolith to grow plants on the Moon.

Taking terrestrial soil to space is not ideal because of cost. “If you can avoid bringing all that up, it’s super advantageous,” Elardo said. “Mass is really expensive.” Taking microbes to the Moon, on the other hand, is a much lighter option.

What’s in a Regolith?

Scientists rely on data from rovers, landers, and satellite remote sensing to understand the chemistry of Martian regolith. The Apollo missions brought back 382 precious kilograms (842 pounds) of the Moon. The Chang’e and Luna missions combined brought back another ~4 kilograms of lunar samples. Because of the limited supply of real lunar regolith, most planetary crop studies, including Lee’s, rely on something called simulant, a synthetic imitation of extraterrestrial regolith.

For her experiments, Lee selected two simulants from Space Resource Technologies: one of the lunar highlands and one that approximates Martian regolith on the basis of data from both remote sensing and the Curiosity rover. But because of the lack of necessary nitrogen in both simulants, Lee tested two nitrogen-bearing media to introduce this key ingredient.

For the first, she used a synthetic urea-based fertilizer used by many home gardeners. For the second, Lee used Milorganite—a nitrogen-rich biosolid made from processing human waste produced by the population of Milwaukee, Wis. For Lee, the Milorganite imitates a nutrient-rich resource that future astronauts heading to planetary bodies will certainly have and that shouldn’t add weight to the mission payload: their own waste.

The hardened final remains from a sewage plant are called sludge or biosolids. The semisolid leftovers form desiccation cracks as they dry. This image is from a sewage plant in Kos, Greece. Credit: Hannes Grobe/Wikimedia Commons, CC BY-SA 2.5

“When they’re adding human waste, the best thing they’re doing is adding organic matter” that can also help bind regolith particles together, said Atkin, who was not involved with Lee’s study.

“You can go full Mark Watney on this,” said Elardo, referencing the 2015 film The Martian, in which a botanist astronaut amends Martian regolith with the crew’s biosolids to grow potatoes. “If you compost [astronaut waste] and make it safe…it should provide a pretty good fertilizer.”

Fabulous Fungi

Lee also tested how crops grew with and without arbuscular mycorrhizae, a microscopic, symbiotic association between certain fungi and the plant roots they colonize.

“It extends that root zone, giving stability,” Atkin said, “like a glue in our soil.” The plant provides carbon to the fungi, and the fungi transfer water and nutrients, particularly phosphorus, to the plant, she explained.

In the fertilizer-only experiments, Lee found that plants grown in lunar simulant with Milorganite tended to grow larger, but in comparison, plants grown in lunar simulant with urea-based fertilizer were more likely to survive the 15-week growing period. For the Martian simulant, no plants survived in Milorganite.


The fertilizer-only experiments provided a control to help Lee assess what happens with the addition of fungi. In the lunar experiments with fungi, no matter which nitrogen fertilizer source was used, plants grew larger than in the fertilizer-only trials. Lee also found higher chlorophyll levels in the leaves of plants grown with fungi and Milorganite. These results are signs that fungi facilitate healthier plants. Plants grown in Martian simulant amended with either fertilizer option also fared better with the addition of fungi. Although only a single plant out of six survived in Martian simulant amended with Milorganite and arbuscular mycorrhizae, this plant “produced the highest chlorophyll levels across all lunar and Martian corn, and produced the most biomass out of all plants grown in Martian regolith,” Lee wrote in an email.

“There is a huge ethical question about bringing microorganisms” to extraterrestrial places, said Lee, whether in the form of fertilizer or fungi. But any future astronauts will introduce microorganisms to the Moon and Mars via their own microbiomes, she said. Plus, 96 bags of human waste already languish on the lunar surface, divvied up among the six Apollo landing sites.

Simulant Versus Regolith

In an experiment published in 2022, a team of scientists including Elardo demonstrated that lunar regolith collected during Apollo 11, 12, and 17 could grow a plant called Arabidopsis thaliana, or thale cress. But the plants were stressed. “They grew, but they were not particularly happy,” Elardo said. The same plants produced healthy roots and shoots when grown in lunar simulants.

These findings demonstrated that for biology purposes, “[simulants] don’t capture the chemistry of extraterrestrial regoliths,” Elardo said, in part because that’s not always what simulants are designed to do. Several are made by the truckload for large-scale engineering projects, like testing the wheels of a rover destined for Mars, he explained. Moreover, the Moon’s iron isn’t in the same state as Earth’s, and it’s a version plants don’t want. Plus, real lunar regolith grains are extremely sharp and shard-like, impeding the progress of delicate roots.

Nevertheless, comparative studies such as Lee’s might be useful, Elardo said. “Can you add a fungus…that increases nutrient uptake?” he pondered. “That’s an awesome idea.”

—Alka Tripathy-Lang (@dralkatrip.bsky.social), Science Writer

Citation: Tripathy-Lang, A. (2025), Fungi, fertilizer, and feces could help astronauts grow plants on the Moon, Eos, 106, https://doi.org/10.1029/2025EO250445. Published on 1 December 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.
