Eos

Science News by AGU

The giant Tupaasat rock avalanche in South Greenland

Fri, 10/24/2025 - 14:38

A new paper describes a rock avalanche in Greenland about 10,900 years BP that had a volume of over 1 billion cubic metres and that travelled almost 16 kilometres.

A fascinating paper (Pedersen et al. 2026) has just been published in the journal Geomorphology that describes a newly discovered ancient rock avalanche in Greenland. This landslide, which is located in the Tupaasat Valley, is truly enormous. The authors estimate that it has a volume exceeding 1 km³ (1 billion m³), with a runout distance of 15.8 kilometres and a vertical height difference of 1,440 metres.
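For context, landslide scientists often express the mobility of a rock avalanche as the ratio of fall height H to runout length L (the tangent of the "angle of reach"). A quick sketch using the figures reported above; the ~0.1 threshold for exceptional mobility is a rule of thumb, not from the paper:

```python
import math

# Mobility of the Tupaasat rock avalanche from the reported figures:
# fall height H = 1,440 m, runout length L = 15.8 km. The H/L ratio
# is a standard landslide mobility index; values below roughly 0.1
# are generally taken to indicate exceptionally mobile events.
H = 1_440.0   # vertical height difference, metres
L = 15_800.0  # runout distance, metres

ratio = H / L
angle_deg = math.degrees(math.atan(ratio))
print(round(ratio, 3), round(angle_deg, 1))  # → 0.091 5.2
```

An angle of reach of about 5° is remarkably low for a landslide of any size, consistent with the highly energetic runout described here.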

The rear scarp of the landslide is located at [60.4117, -44.2791]. It is really hard to capture this landslide on Google Earth, but fortunately the paper has been published under a Creative Commons licence. Here, therefore, is a map of the landslide by Pedersen et al. (2026):-

A) Geomorphological map of the Tupaasat rock avalanche deposits within the landslide outline, together with the paleo-sea level line at 10 m a.s.l. and the proposed paleo-ice sheet extent.
B) Map showing the bathymetry data and the landslide outline. The bathymetry data was acquired from the Danish Geodata Agency and is not suitable for navigation.
C) Cross-section of the Tupaasat rock avalanche with columns indicating the geomorphological features described in the results. The terrain slopes are presented below.
Images from Pedersen et al. (2026).

I have quickly annotated a Google Earth image of the site, showing the source and the track of the landslide. Note that the toe extends into the fjord, and thus is underwater, by a couple of kilometres:-

Annotated Google Earth image showing the Tupaasat rock avalanche.

Landslides on this scale are hard to fathom. If this volume of rock were standing on a standard American football field (110 m × 49 m), it would form a column 185.5 km tall.
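That figure checks out as a quick back-of-envelope calculation:

```python
# Back-of-envelope check: 1 billion cubic metres of rock spread over
# a football-field footprint of 110 m x 49 m.
volume_m3 = 1.0e9
field_area_m2 = 110 * 49  # 5,390 square metres

column_height_km = volume_m3 / field_area_m2 / 1000
print(round(column_height_km, 1))  # → 185.5
```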

Pedersen et al. (2026) have dated the occurrence of this landslide, concluding that it happened about 10,900 years ago. This coincides remarkably well with the dated deglaciation (retreat of the ice sheets) in this area. Thus, the authors suggest that the instability was probably associated with debuttressing of the glacier (i.e., the removal of the ice adjacent to the slope). They cannot rule out the possibility that the final failure was triggered by an earthquake, though.

A further intriguing question is whether the event triggered a tsunami in the fjord. The distance that the landslide has moved suggests that it was very energetic. Given that it extended to the water (and some of the deposit is now within the lake) it is extremely likely that a displacement wave was triggered.

The latter point is very pertinent as there is increasing concern about the dangers of giant rock slope failures generating damaging tsunami events in fjords. For example, CNN published an article this week in the aftermath of the Tracy Arm landslide and tsunami that highlights the risk to cruise ships. It notes that:

Alaska’s foremost expert on these landslides knows why there hasn’t been a deadly landslide-turned-tsunami disaster, yet: sheer luck.

“It’s not because this isn’t a hazard,” said geologist Bretwood Higman, co-founder and executive director of nonprofit Ground Truth Alaska. “It’s because it just hasn’t happened to be above someone’s house or next to a cruise ship.”

An additional piece of context is the remarkable flooding that occurred in Alaska last weekend as Typhoon Halong tracked across parts of the state. This appears to have received far less attention than might have been anticipated, at least outside the US.

It is surely only a matter of time before we see a really large-scale accident as a result of a tsunami triggered by a rock slope failure. A very serious scenario is that a large cruise ship is overwhelmed and sunk. The loss of life could be very high.

Reference

Pedersen, L. L., et al. (2026), A giant Early Holocene tsunamigenic rock-ice avalanche in South Greenland preconditioned by glacial debuttressing, Geomorphology, 492, 110057, https://doi.org/10.1016/j.geomorph.2025.110057.

Text © 2023. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Tiny Uranian Moon Likely Had a Massive Subsurface Ocean

Fri, 10/24/2025 - 13:25

Uranus’s tiny moon Ariel may have had a subsurface ocean that made up around 55% of its total volume. By mapping craters, crags, and ridges on the moon’s surface, planetary scientists modeled how thick Ariel’s crust was before it cracked under tidal stress and created the geologic features seen today. By subtracting the size of the crust and core, the researchers found that the Arielian ocean could have been about 170 kilometers thick as recently as 1 billion years ago.

“If Ariel had a subsurface ocean, it definitely does imply that other small icy moons could also have [had] subsurface oceans,” said Caleb Strom, who conducted this research as a planetary geologist fellow at the University of North Dakota in Grand Forks.

Maybe “it’s easier to make an ocean world than we thought,” he added.

An Unlikely Ocean World

Ariel is the second closest of the five large moons of Uranus. But large is a bit of a misnomer, as Ariel is only about 1,160 kilometers across, or about a third the size of Earth’s Moon.

When Voyager 2 flew through the Uranus system in 1986, scientists were surprised to see that Ariel’s icy surface was relatively young, was geologically complex, and showed some signs of cryovolcanism. Some features on the moon’s surface are similar to those seen on Europa, Enceladus, and Triton, three confirmed ocean worlds.

“We weren’t necessarily expecting it to be an ocean world.”

“What’s interesting about Ariel is that it’s unexpected,” Strom said. “We weren’t necessarily expecting it to be an ocean world.”

Later studies also found ammonia and carbon oxide compounds on Ariel’s surface, chemistry that often suggests the presence of subsurface liquid. The molecules disappear quickly unless they are frequently replenished.

But with Ariel being so small and unable to retain heat for very long, scientists thought that any subsurface ocean it may once have had was relatively thin and short-lived.

Strom and his colleagues didn’t initially set out to challenge this understanding of Ariel’s interior. They were interested in understanding the forces that could have created the moon’s geologic features.

To do this, the researchers first mapped the moon’s surface using images from the Voyager 2 flyby, cataloging ridges, fractures, and craters. They then modeled Ariel’s internal structure, giving it, from the top down, a brittle crust, a flexible crust, and an ocean all atop a solid core. They then simulated how that crust would deform under different levels of stress from tidal forces from other nearby Uranian moons and the planet itself. By varying the crust and ocean thickness and the strength of the tidal stress, the team sought to match the stress features in their models to the Voyager-derived geologic maps.

In 2023, the James Webb Space Telescope imaged Uranus and several of its major moons and rings. Credit: NASA, ESA, CSA, STScI; Image Processing: Joseph DePasquale (STScI)

The team’s models indicate that a crust less than 30 kilometers thick would have fractured under a moderate amount of tidal stress and created the geologic features seen today. The researchers suggest that the stress arose in the past 1–2 billion years, when an orbital resonance with the nearby moon Miranda stretched Ariel’s orbit about 4% from circular and fractured the surface.

“This is really a prediction about the crustal thickness” and the stress level it can withstand, Strom said. Then, with a core 740 kilometers across and a crust 30 kilometers thick, that would mean that Ariel’s subsurface ocean was 170 kilometers from top to bottom and made up about 55% of its total volume. The researchers published their results in Icarus in September.
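The volume fraction follows from nested spheres. A rough check using the rounded figures quoted in this article (1,160-kilometer diameter, 740-kilometer core, 30-kilometer crust) lands a few percentage points above the published ~55%, as one would expect from rounded inputs; this is a sketch, not the authors' calculation:

```python
# Rough nested-spheres check of the reported ocean volume fraction.
# Radii are rounded values from the article: surface radius 580 km
# (1,160 km diameter), 30 km crust, core radius 370 km (740 km across).
# The ocean occupies the spherical shell between crust base and core.
r_surface = 1160 / 2        # km
r_crust_base = r_surface - 30
r_core = 740 / 2

# The 4/3*pi factor cancels in the ratio, so cubed radii suffice.
ocean_fraction = (r_crust_base**3 - r_core**3) / r_surface**3
print(round(ocean_fraction, 2))  # → 0.59
```

The discrepancy with the published ~55% illustrates how sensitive the fraction is to small changes in the assumed core and crust dimensions.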

Is Ariel Odd? Maybe Not

“The possible presence of an ocean in Ariel in the past [roughly] 1 Ga is certainly an exciting prospect,” said Richard Cartwright, an ocean world scientist at Johns Hopkins Applied Physics Laboratory (JHUAPL) in Laurel, Md. “These results track with other studies that suggest the surface geology of Ariel offers key clues in terms of recent activity” and the possibility that Ariel is, or was, an ocean world. Cartwright was not involved with the new research.

Strom cautioned that just because Ariel once had a substantial subsurface ocean doesn’t mean that it still does. The moon is very small and doesn’t retain heat very well, he said. Any ocean that remained would likely be much thinner and probably not a good place to search for life.

However, the fact that tiny Ariel may once have had such a large ocean may mean that ocean worlds are more common and easier to create than scientists once thought. Understanding the conditions that led to Ariel’s subsurface ocean could help scientists better understand how such worlds come about and how they evolve.

“Ariel’s case demonstrates that even comparatively small moons can, under the right conditions, develop and sustain significant internal oceans.”

“Ariel’s case demonstrates that even comparatively small moons can, under the right conditions, develop and sustain significant internal oceans,” said Chloe Beddingfield, a planetary scientist also at JHUAPL. “However, that doesn’t mean all similar bodies would have done so. Each moon’s potential for an ocean depends on its particular mix of heat sources, chemistry, and orbital evolution.”

An ocean composing 55% of a planet’s or moon’s total volume might seem pretty huge, but it also might be perfectly within normal range for ocean worlds, added Beddingfield, who was not involved with this research. “The estimated thickness of Ariel’s internal ocean…is striking, but not necessarily unexpected given the diversity of icy satellites.”

Moreover, Voyager 2 did not image all of Ariel’s surface, only the 35% that was illuminated during its flyby. A future long-term mission to the Uranus system could provide higher-resolution global maps of Ariel and other moons to help refine calculations of crustal thickness and determine the existence of subsurface oceans, Strom said.

Strom and his team plan to expand their stress test research to other moons of Uranus such as Miranda, Oberon, and Umbriel and possibly icy moons around other planets.

—Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writer

Citation: Cartier, K. M. S. (2025), Tiny Uranian moon likely had a massive subsurface ocean, Eos, 106, https://doi.org/10.1029/2025EO250398. Published on 24 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0

As the Arctic Warms, Soils Lose Key Nutrients

Fri, 10/24/2025 - 13:22

This is an authorized translation of an Eos article.

Arctic and subarctic soils store a sizable share of Earth’s carbon. Rising temperatures, however, could drain nitrogen, a key nutrient, from these soils. According to a new study, this nitrogen loss could reduce plant growth, limiting the soils’ capacity to store carbon and amplifying global warming.

High-latitude soils store large amounts of carbon because low temperatures slow microbial activity. Although plants produce organic matter through photosynthesis, microorganisms cannot consume it fast enough, so it accumulates over time. Scientists have worried that a warmer Arctic would accelerate microbial activity, releasing the stored carbon to the atmosphere as carbon dioxide (CO₂). But they also expected warmer temperatures to stimulate plant growth, which would reabsorb part of that carbon and partially offset the emissions.

The new research shows that the latter scenario is very unlikely, because warming causes soils to lose nitrogen, a loss that could inhibit plant growth.

“We weren’t expecting to see a nitrogen loss.”

The findings come from a decade-long experiment in a subarctic grassland near Hveragerði, Iceland. In 2008, a powerful earthquake altered geothermal water flows in the region, turning previously normal plots of soil into naturally heated zones with temperature gradients ranging from 0.5°C to 40°C above prior levels. The event created a unique natural laboratory for observing how ecosystems respond to long-term warming.

Using stable nitrogen-15 isotopes to trace nutrient flows across the landscape, the researchers found that for every degree Celsius of warming, soils lose between 1.7% and 2.6% of their nitrogen. The largest losses occurred during winter and early spring, when microbes remained active but plants were dormant. During this time, nitrogen compounds such as ammonium and nitrate were released into the soil, but plants could not absorb them; the compounds were lost either by leaching into groundwater or by escaping to the atmosphere as nitrous oxide, a greenhouse gas nearly 300 times more potent than CO₂.
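To illustrate what per-degree losses of this magnitude imply, here is a short sketch assuming the losses compound per degree of warming; the 1.7%–2.6% rates are from the study, but the compounding assumption is our simplification, not the authors' method:

```python
# Sketch: fraction of soil nitrogen remaining if losses of
# 1.7%-2.6% per degree Celsius compound across several degrees
# of warming. Illustrative only; the compounding assumption is
# ours, not the study's.
def nitrogen_remaining(loss_per_degC, warming_degC):
    """Fraction of initial soil nitrogen left after warming."""
    return (1.0 - loss_per_degC) ** warming_degC

# At 4 degrees of warming, roughly 7%-10% of soil nitrogen is lost.
for rate in (0.017, 0.026):
    print(rate, round(nitrogen_remaining(rate, 4.0), 3))
```

Even at the low end of the reported range, the loss accumulates into a meaningful fraction of the soil's nitrogen stock under the multi-degree warming projected for the Arctic.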

The results were published in Global Change Biology.

“We weren’t expecting to see a nitrogen loss,” said Sara Marañón, a soil scientist at Spain’s Center for Ecological Research and Forestry Applications and first author of the study. “The soil’s mechanisms for storing nitrogen are deteriorating.”

A Less Fertile, Faster Ecosystem

The researchers also found that warming weakened the mechanisms that help soils retain nitrogen. In the warmest plots, microbial biomass and fine-root density, both fundamental to nitrogen storage, were much lower than in cooler plots. Although microbes were less abundant, their metabolism was faster, releasing more CO₂ per unit of biomass. Plants, meanwhile, struggled to adapt, lagging behind in both growth and nutrient uptake.

“Microbial communities are able to adapt and reach a new equilibrium with faster activity rates,” Marañón said. “But plants can’t keep up.”

“This is not a very optimistic message.”

Increased microbial metabolism initially results in greater consumption of the nitrogen and carbon available in the soil. After 5 to 10 years, however, the system appears to reach a new equilibrium, with reduced levels of organic matter and lower fertility. That shift suggests that soil warming can drive a transition to a permanently less fertile state, making vegetation recovery more difficult and leading to irreversible carbon loss.

Scientists have traditionally thought that because organic matter decomposes faster in a warmer climate, the nitrogen it contains will become more available, leading to higher productivity, according to Erik Verbruggen, a soil ecologist at the University of Antwerp in Belgium who was not involved in the study. “This paper shows that this is actually not happening.”

Instead, nitrogen is being leached from the soil during spring, making it inaccessible for further biomass production. “This is not a very optimistic message,” Verbruggen said.

An Underestimated Source of Greenhouse Gases

Because Arctic regions are warming faster than the global average, this disruption of the nutrient cycle could soon become more evident. The loss of nitrogen and carbon from soils in cold regions may represent a significant and previously underestimated source of greenhouse gas emissions, one that current climate models have not yet fully incorporated.

The researchers periodically returned to the warm grasslands near Hveragerði, Iceland, to measure nitrogen. Credit: Sara Marañón

The researchers plan to explore the early phases of soil warming by transplanting fragments of normal soils into heated areas, and to investigate how different soil types respond to heat. Marañón noted that the Icelandic soils studied are volcanic in origin and very rich in minerals, unlike the organic peat soils common in other Arctic regions.

“Arctic soils also include permafrost in places like northern Russia and parts of Scandinavia, and those are the world’s largest soil carbon reservoirs,” Verbruggen said. The soils analyzed in this research, by contrast, were shallow grassland soils. “They are not necessarily representative of all Arctic soils.”

Even so, Verbruggen added, the study’s findings highlight the delicate balance between productivity and nutrient loss in these systems.

Soil’s abundant carbon stores make it a major risk if managed poorly, Marañón said. “But it can also become a potential ally and offset CO₂ emissions.”

—Javier Barbuzano (@javibar.bsky.social), Science Writer

This translation by Saúl A. Villafañe-Barajas (@villafanne) was made possible by a partnership with Planeteando and Geolatinas.

Text © 2025. The authors. CC BY-NC-ND 3.0

A Better Way to Monitor Greenhouse Gases

Fri, 10/24/2025 - 13:21

In recent years, the international community has made progress in slowing increases in the rate of carbon dioxide emissions and in acknowledging the scale of methane leaks from oil and gas facilities. However, carbon dioxide emissions continue to rise, methane releases from the energy sector have not abated, and there is more need than ever for targeted and sustained greenhouse gas (GHG) emissions reductions and other climate change mitigation approaches.

The success of climate change mitigation approaches relies in part on having accurate, timely, and integrated carbon cycle data from surface, airborne, and satellite sensors.

The success of such actions relies in part on having accurate, timely, and integrated carbon cycle data from surface, airborne, and satellite sensors covering local, regional, and international scales. These data improve efforts to track emissions reductions, identify and mitigate unexpected emissions and leaks, and monitor ecosystem feedbacks to inform land management.

In September 2024, researchers in the carbon cycle monitoring community met to discuss how best to establish a more effective system for monitoring GHGs and to help accelerate climate action through better data and decision support.

Here we highlight issues and challenges facing emissions monitoring and documentation efforts illuminated during the September meeting, as well as ideas and proposals for tackling the challenges. The recommendations emphasize the urgency of enhanced monitoring to support the goals of the Paris Agreement and the Global Methane Pledge, particularly in the face of increasing climate extremes and the vulnerability of Earth’s natural carbon reservoirs [Friedlingstein et al., 2025].

Bottom-Up Meets Top-Down

Parties to the Paris Agreement track their progress toward meeting GHG emissions reduction targets through bottom-up accounting methods that track carbon using local ground-based observations. These methods combine information about the spatial extents of carbon sources and sinks with estimates of how much these sources and sinks emit or take up, respectively.

This inventorying approach offers high-precision information at time intervals that support long-term tracking. However, it is also often time intensive, depends on country-specific methodologies, may not accurately reflect spatiotemporal variability in GHG fluxes, and is not suited for operational monitoring of sudden changes or reversals [Elguindi et al., 2020; Nicholls et al., 2015].

Top-down approaches using remotely sensed atmospheric GHG and biomass observations offer an independent accounting method [Friedlingstein et al., 2025], with the potential for low-latency (weekly to monthly) monitoring of GHG emissions and removals. Technological advances offered by facility-scale plume imagers (e.g., GHGSat, Earth Surface Mineral Dust Source Investigation (EMIT), Carbon Mapper) and global GHG mappers (e.g., Orbiting Carbon Observatory-2 and -3 (OCO-2 and -3), Tropospheric Monitoring Instrument (TROPOMI), Greenhouse gases Observing Satellite-2 (GOSAT-2)) show promise for monitoring GHG fluxes at the local and global scale, respectively [Joint CEOS-CGMS Working Group on Climate Greenhouse Gas Task Team, 2024].

Greenhouse gas (GHG) observations with existing capabilities alone are insufficient for adequately informing climate change mitigation measures.

However, a significant gap remains in our ability to monitor weaker, spatially distributed emissions and removals at intermediate (10- to 1,000-kilometer) scales [Joint CEOS-CGMS Working Group on Climate Greenhouse Gas Task Team, 2024], particularly in systems managed by humans such as energy production and land use.

Conversations during the 2024 workshop—partly intended to inform the development of the next Decadal Survey for Earth Science and Applications from Space—highlighted limitations in current GHG monitoring capabilities. They also emphasized the critical need for an operational observing system that leverages top-down and bottom-up approaches to support climate action at local, national, and international scales.

Because of a lack of sensitivity to subregional processes, GHG observations with existing capabilities alone are insufficient for adequately informing climate change mitigation measures [e.g., Jacob et al., 2022; Watine-Guiu et al., 2023]. We must also integrate state-of-the-art science and improved understanding of Earth’s changing carbon cycle, as well as data from new observing system technologies, into the information provided to decisionmakers.

This integration requires identifying gaps and opportunities with respect to knowledge, data, and stakeholder needs. It also requires defining a vision for sustained, operational GHG monitoring to support emissions reductions, track carbon cycle feedbacks, and deliver reliable, timely, transparent, and actionable information.

This vision could be achieved with a unified multitiered global system combining models and observations of the atmosphere, land, and ocean collected with surface, airborne, and satellite tools to track carbon fluxes (e.g., atmospheric emissions and removals) and stocks (e.g., biomass, soil carbon) with improved frequency, spatial coverage, and precision (Figure 1).

Fig. 1. An effective multitiered greenhouse gas (GHG) observing system should integrate observations of the atmosphere, land, and ocean from sensors and samples on Earth’s surface, in the air, and aboard satellites. Carbon dioxide is shown as black and red molecules, and methane is shown as black and white molecules. ARGO refers to a fleet of sensors floating in the upper ocean. FTIR is Fourier transform infrared spectroscopy. Credit: Created in BioRender; Carroll, 2025, https://BioRender.com/b77439n

Organizing such a system would require substantial international coordination among governmental, academic, and nongovernmental organizations, perhaps mediated through entities such as the World Meteorological Organization’s Global Greenhouse Gas Watch, the Committee on Earth Observation Satellites, and the U.S. Greenhouse Gas Center (USGHGC).

Addressing Gaps from Space

A globally unified GHG observing system should capitalize on spaceborne technologies to fill spatial and temporal gaps in in situ networks and to monitor the responses of carbon fluxes and stocks to disturbances, weather extremes, and environmental change. This system should prioritize four key elements.

First, gathering more vertically detailed data—from the top of the atmosphere to ground level—is critical. Existing satellites measure the total amounts of carbon dioxide and methane in the atmospheric column. These measurements work well for detecting changes over large (e.g., continental) spatial scales and at facility scale, but they provide less detail about smaller-scale processes. Knowing GHG concentrations near the surface relative to those in the upper atmosphere could, for example, provide improved tracking of fluxes and understanding of the processes responsible.

Sustained vertical GHG profiling, achieved using multichannel passive sensors deployed on missions such as GOSAT-2 or emerging cloud-slicing lidar methods, for example, is foundational to the proposed system. This profiling would provide long-term time series data to help researchers detect weak but consistent flux changes and increased sensitivity to natural and anthropogenic regional sources [e.g., Parazoo et al., 2016].

Sampling the atmosphere every day would enable better detection of sudden changes in GHG concentrations and linking of those changes to particular sources.

Second, more frequent observations—obtained with a constellation of satellites observing from low, geostationary, and highly elliptical Earth orbits—are needed. Sampling the atmosphere every day, or even multiple times per day, would enable better detection of sudden changes in GHG concentrations and linking of those changes to particular sources.

Third, mapping of carbon stocks should be harmonized by combining information from different sensors and methods. Several means exist to map carbon in vegetation from space, for example, including lidar altimetry used to identify treetops and synthetic aperture radar used to estimate the volumes of trees.

Combining the strengths of existing methods and missions would facilitate more accurate and better resolved monitoring of carbon accumulation and loss due to management practices, disturbances, and ecosystem recovery. Future biomass satellite missions should focus on measurements at the scale of forest plots (i.e., hectare-scale systems with many trees) to provide more useful maps with reduced uncertainty, rather than on applying very high resolution sensors that resolve individual trees.

The fourth key is expanded satellite coverage of tropical, high-latitude, and oceanic regions to better monitor carbon cycle feedbacks [Sellers et al., 2018]. This coverage should involve the use of new active and imaging spectrometer techniques, such as those being developed in the Carbon-I mission concept study, to probe through prevalent clouds and darkness that hinder continuous monitoring.

Beyond the primary focus on GHG and biomass data, we also need—and have opportunities to obtain—complementary datasets to better constrain the locations of and processes affecting carbon sources and sinks. Atmospheric measurements of solar-induced fluorescence by vegetation, carbonyl sulfide, oxygen, carbon monoxide, and isotopes of carbon and oxygen could help disentangle fossil sources of emissions from biological sources and provide insights into processes such as photosynthesis and wildfire activity.

Currently, land and ocean ecosystems remove about half of the anthropogenic carbon emitted into the atmosphere, but this amount could change in the future [Friedlingstein et al., 2025]. Sustained monitoring of these ecosystems—and of the indicators of how they are changing—is necessary to understand and track diverse change across the Earth system.

Addressing Gaps from the Ground

Surface and airborne observations are essential for calibrating spaceborne measurements and for monitoring processes that can’t be observed from space.

Expanded surface and airborne networks for gathering data in situ from oceanic, terrestrial, and aquatic ecosystems are also a critical part of the proposed global observing system. These observations are essential for calibrating spaceborne measurements, for improving our understanding of undersampled regions (e.g., nonforest lands, rivers, wetlands, oceans), and for monitoring processes that can’t be observed from space.

Efforts on several fronts are required to provide more comprehensive ground- and air-based information on carbon fluxes and stocks to better meet stakeholder and research needs. Examples of these needed efforts include obtaining more atmospheric GHG profiles from research and commercial aircraft (e.g., through campaigns such as NOAA’s National Observations of Greenhouse Gasses Aircraft Profiles program), expanding measurements of surface-atmosphere GHG exchanges from tower-mounted sensors in undersampled terrestrial and aquatic systems [Baldocchi, 2020], and collecting seawater composition data from autonomous vehicles (e.g., Argo floats) in coastal and open oceans.

Other needed efforts include collecting more in situ measurements of above- and below-ground biomass and soil carbon and airborne sampling of managed and unmanaged (natural) experimental field sites. For example, monitoring of biomass reference measurement networks, such as GEO-TREES, should be expanded to facilitate monitoring and validation of spaceborne biomass data. These complementary measurements of quantities unobserved by remote sensing, such as soil carbon and respiration, are essential for tracking long-term storage [e.g., Konings et al., 2019].

Connecting Users to Data

Workshop participants envisioned a framework to support decisionmaking by scientists and stakeholders that links observing systems with actionable knowledge through a two-way flow of information. This framework involves three key pieces.

Identifying the underlying causes and drivers of changes in GHG emissions and removals is critical for developing effective, targeted mitigation and management policies.

First, integrating information from data-constrained models is crucial. Guan et al. [2023] offered a “system of systems” approach for monitoring agricultural carbon that is also applicable to other ecosystems. This approach leverages multitiered GHG and biomass data as constraints in land, ocean, and inverse models (which start with observed effects and work to determine their causes) to generate multiscale maps of observable and unobservable carbon stock and flux change. The result is a stream of continuous, low-latency information (having minimal delays between information gathering and output) for verifying GHG mitigation strategies.
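The inverse-modeling step can be illustrated with a toy example: given a known "transport operator" that maps surface fluxes to atmospheric observations, recover the fluxes by least squares. This is a deliberately minimal sketch; all numbers are illustrative and do not come from any real transport model:

```python
import numpy as np

# Toy "top-down" flux inversion: estimate two regional surface fluxes
# (x) from three atmospheric column observations (y), given a transport
# operator H describing how each flux contributes to each observation.
H = np.array([[0.8, 0.1],
              [0.3, 0.6],
              [0.1, 0.9]])
x_true = np.array([5.0, 2.0])  # "true" fluxes (arbitrary units)
y = H @ x_true                 # synthetic, noise-free observations

# Inverse step: start from observed effects, solve for their causes.
x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
print(x_hat)  # recovers approximately [5.0, 2.0]
```

Real inversions add observation noise, prior (bottom-up) flux estimates, and regularization, but the structure, observations constraining fluxes through a forward model, is the same.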

Second, scientists must work with stakeholders to identify the underlying causes and drivers of changes in GHG emissions and removals. This identification is critical for assessing progress and developing effective, targeted mitigation and management policies.

Third, the actionable knowledge resulting from this framework—and provided through organizations such as the USGHGC—must be applied in practice. Stakeholders, including corporations, regulatory agencies, and policymakers at all levels of government, should use improved understanding of carbon flux change and underlying drivers to track progress toward nationally determined contributions, inform carbon markets, and evaluate near- and long-term GHG mitigation strategies.

Meeting the Needs of the Future

Benchmarking and validation are important parts of building trust in models and improving projections of carbon-climate feedbacks. By using comprehensive observations of carbon fluxes and stocks to assess the performance of Earth system models [e.g., Giorgetta et al., 2013], scientists can generate more reliable predictions to inform climate action policies that, for example, adjust carbon neutrality targets or further augment GHG observing systems to better study regional feedbacks [Ciais et al., 2014].

The globally unified observing system envisioned, which would integrate advanced spaceborne technologies with expanded ground and air networks and a robust decision support framework, could significantly enhance our ability to track and mitigate GHG emissions and manage carbon stocks.

Successful implementation of this system would also hinge on data accessibility and community building. Developing a universal data platform with a straightforward interface that prioritizes data literacy is crucial for ensuring accessibility for a global community of users. In addition, fostering cross-agency partnerships, stakeholder engagement, and collaborative networking opportunities will be essential for building trust, catalyzing further participation in science, and developing innovative solutions for a more sustainable future.

Acknowledgments

The September 2024 workshop and work by the authors on this article were funded as an unsolicited proposal (Proposal #226264: In support of ‘Carbon Stocks Workshop: Sep 23–25, 2024’) by the U.S. Greenhouse Gas Center, Earth Science Division, NASA. A portion of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).

References

Baldocchi, D. D. (2020), How eddy covariance flux measurements have contributed to our understanding of global change biology, Global Change Biol., 26(1), 242–260, https://doi.org/10.1111/gcb.14807.

Ciais, P., et al. (2014), Current systematic carbon-cycle observations and the need for implementing a policy-relevant carbon observing system, Biogeosciences, 11(13), 3,547–3,602, https://doi.org/10.5194/bg-11-3547-2014.

Elguindi, N., et al. (2020), Intercomparison of magnitudes and trends in anthropogenic surface emissions from bottom-up inventories, top-down estimates, and emission scenarios, Earth’s Future, 8(8), e2020EF001520, https://doi.org/10.1029/2020EF001520.

Friedlingstein, P., et al. (2025), Global Carbon Budget 2024, Earth Syst. Sci. Data, 17(3), 965–1,039, https://doi.org/10.5194/essd-17-965-2025.

Giorgetta, M. A., et al. (2013), Climate and carbon cycle changes from 1850 to 2100 in MPI‐ESM simulations for the Coupled Model Intercomparison Project Phase 5, J. Adv. Model. Earth Syst., 5(3), 572–597, https://doi.org/10.1002/jame.20038.

Guan, K., et al. (2023), A scalable framework for quantifying field-level agricultural carbon outcomes, Earth Sci. Rev., 243, 104462, https://doi.org/10.1016/j.earscirev.2023.104462.

Jacob, D. J., et al. (2022), Quantifying methane emissions from the global scale down to point sources using satellite observations of atmospheric methane, Atmos. Chem. Phys., 22(14), 9,617–9,646, https://doi.org/10.5194/acp-22-9617-2022.

Joint CEOS-CGMS Working Group on Climate Greenhouse Gas Task Team (2024), Roadmap for a coordinated implementation of carbon dioxide and methane monitoring from space, 52 pp., ceos.org/document_management/Publications/Publications-and-Key-Documents/Atmosphere/CEOS_CGMS_GHG_Roadmap_Issue_2_V1.0_FINAL.pdf.

Konings, A. G., et al. (2019), Global satellite-driven estimates of heterotrophic respiration, Biogeosciences, 16(11), 2,269–2,284, https://doi.org/10.5194/bg-16-2269-2019.

Nicholls, D., et al. (2015), Top-down and bottom-up approaches to greenhouse gas inventory methods—A comparison between national- and forest-scale reporting methods, Gen. Tech. Rep. PNW-GTR-906, 30 pp., Pac. Northwest Res. Stn., For. Serv., U.S. Dep. of Agric., Portland, Ore., https://doi.org/10.2737/PNW-GTR-906.

Parazoo, N. C., et al. (2016), Detecting regional patterns of changing CO2 flux in Alaska, Proc. Natl. Acad. Sci. U. S. A., 113(28), 7,733–7,738, https://doi.org/10.1073/pnas.1601085113.

Sellers, P. J., et al. (2018), Observing carbon cycle–climate feedbacks from space, Proc. Natl. Acad. Sci. U. S. A., 115(31), 7,860–7,868, https://doi.org/10.1073/pnas.1716613115.

Watine-Guiu, M., et al. (2023), Geostationary satellite observations of extreme and transient methane emissions from oil and gas infrastructure, Proc. Natl. Acad. Sci. U. S. A., 120(52), e2310797120, https://doi.org/10.1073/pnas.2310797120.

Author Information

Dustin Carroll (dustin.carroll@sjsu.edu), Moss Landing Marine Laboratories, San José State University, San José, Calif.; also at Jet Propulsion Laboratory, California Institute of Technology, Pasadena; Nick Parazoo and Hannah Nesser, Jet Propulsion Laboratory, California Institute of Technology, Pasadena; Yinon Bar-On, California Institute of Technology, Pasadena; also at Department of Earth and Planetary Sciences, Weizmann Institute of Science, Rehovot, Israel; and Zoe Pierrat, Jet Propulsion Laboratory, California Institute of Technology, Pasadena

Citation: Carroll, D., N. Parazoo, H. Nesser, Y. Bar-On, and Z. Pierrat (2025), A better way to monitor greenhouse gases, Eos, 106, https://doi.org/10.1029/2025EO250395. Published on 24 October 2025. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

1.5 Million Acres of Alaskan Wildlife Refuge to Open for Drilling

Thu, 10/23/2025 - 21:54
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

A large swath of the Arctic National Wildlife Refuge (ANWR) will soon open for drilling, the Trump administration announced today.

“For too long, many politicians and policymakers in DC treated Alaska like it was some kind of zoo or reserve, and that, somehow, by not empowering the people or having even the slightest ability to tap into the vast resources was somehow good for the country or good for Alaska,” Secretary of the Interior Doug Burgum said during an Alaska Day event.

As of July 2025, Alaska ranked sixth in the nation for crude oil production.

 

The news is the latest in a saga involving the ANWR, which in total spans 19.6 million acres. The 1.5 million acres to be opened for drilling represent the coastal plain of the refuge.

The 1980 Alaska National Interest Lands Conservation Act, which created most of the state’s national park lands, included a provision that no exploratory drilling or production could occur without congressional action.

Trump first opened the 1.5 million-acre coastal plain region for drilling in 2020, but the sale of drilling leases in early 2021 generated just $14.4 million in bids, rather than the $1.8 billion his administration had estimated.

On his first day in office, Biden placed a temporary moratorium on oil and gas drilling in the refuge, later going on to cancel the existing leases.

Trump resumed his efforts to allow drilling in ANWR early in his second term, though in January 2025, a lease sale attracted zero bidders. Previously, major banks had ruled out financing such drilling efforts, some citing environmental concerns. Cost is also likely a factor, as the area currently has no roads or facilities.

In addition to opening the refuge to drilling, the Department of the Interior also announced today that it is reissuing permits to build a road through Izembek National Wildlife Refuge and plans to greenlight another road.

“Today’s Arctic Refuge announcement puts America — and Alaska — last,” said Erik Grafe, an attorney for the environmental law nonprofit Earthjustice, in a statement. “The Gwich’in people, most Americans, and even major banks and insurance companies know the Arctic Refuge is no place to drill.”

In contrast, Voice of the Arctic Iñupiat (VOICE), a nonprofit dedicated “to preserving and advancing North Slope Iñupiat cultural and economic self-determination,” released a statement on Thursday in favor of the policy shift.

“Developing ANWR’s Coastal Plain is vital for Kaktovik’s future,” said Nathan Gordon, Jr., mayor of Kaktovik, an Iñupiat village on the northern edge of ANWR. “Taxation of development infrastructure in our region funds essential services across the North Slope, including water and sewer systems to clinics, roads, and first responders. Today’s actions by the federal government create the conditions for these services to remain available and for continued progress for our communities.”

The Department of the Interior said it plans to reinstate the 2021 leases that were cancelled by the Biden administration, as well as to hold a new lease sale sometime this winter.

—Emily Gardner (@emfurd.bsky.social) Associate Editor

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

New Satellite Data Reveal a Shift in Earth’s Once-Balanced Energy System

Thu, 10/23/2025 - 13:22

Years ago, scientists noted something odd: Earth’s Northern and Southern Hemispheres reflect nearly the same amount of sunlight back into space. This symmetry is surprising because the Northern Hemisphere has more land, cities, pollution, and industrial aerosols, all of which should give it a higher albedo, meaning a larger fraction of incoming sunlight is reflected rather than absorbed. The Southern Hemisphere, by contrast, is mostly ocean, which is darker and absorbs more sunlight.

New satellite data, however, suggest that symmetry is breaking.

From Balance to Imbalance

In a new study published in the Proceedings of the National Academy of Sciences of the United States of America, Norman Loeb, a climate scientist at NASA’s Langley Research Center, and colleagues analyzed 24 years of observations from NASA’s Clouds and the Earth’s Radiant Energy System (CERES) mission.

They found that the Northern Hemisphere is darkening faster than the Southern Hemisphere. In other words, it’s absorbing more sunlight. That shift may alter weather patterns, rainfall, and the planet’s overall climate in the decades ahead.

Since 2000, CERES has recorded how much sunlight is absorbed and reflected, as well as how much infrared (longwave) radiation escapes back to space. Loeb used these measurements to analyze how Earth’s energy balance changed between 2001 and 2024. The energy balance tells scientists whether the planet is absorbing more energy than it releases and how that difference varies between hemispheres.

“Any object in the universe has a way to maintain equilibrium by receiving energy and giving off energy. That’s the fundamental law governing everything in the universe,” said Zhanqing Li, a climate scientist at the University of Maryland who was not part of the study. “The Earth maintains equilibrium by exchanging energy between the Sun and the Earth’s emitted longwave radiation.”

The team found that the Northern Hemisphere is absorbing about 0.34 watt more solar energy per square meter per decade than the Southern Hemisphere. “This difference doesn’t sound like much, but over the whole planet, that’s a huge number,” said Li.
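Li’s point that 0.34 watt per square meter “adds up” can be checked with quick back-of-envelope arithmetic. The sketch below is a rough illustration, not a calculation from the study: Earth’s mean radius and the hemisphere-area formula are textbook assumptions used only to integrate the reported trend over one hemisphere.

```python
import math

# Back-of-envelope: integrate the reported 0.34 W/m^2-per-decade trend
# over the surface area of one hemisphere. Illustrative only.
R_EARTH = 6.371e6                            # mean Earth radius, m (assumed)
hemisphere_area = 2 * math.pi * R_EARTH**2   # area of one hemisphere, m^2

trend = 0.34                                 # W/m^2 per decade (from the CERES analysis)
extra_power = trend * hemisphere_area        # extra absorbed power after one decade, W

print(f"Hemisphere area: {hemisphere_area:.2e} m^2")
print(f"Extra absorbed power: {extra_power:.2e} W (~{extra_power / 1e12:.0f} TW)")
```

The result is on the order of tens of terawatts, several times humanity’s total energy consumption, which is why a fraction of a watt per square meter matters at planetary scale.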

Results pointed to three main reasons for the Northern Hemisphere darkening: melting snow and ice, declining air pollution, and rising water vapor.

To figure out what was driving this imbalance, the scientists applied a technique called partial radiative perturbation (PRP) analysis. The PRP method separates the influence of factors such as clouds, aerosols, surface brightness, and water vapor from calculations of how much sunlight each hemisphere absorbs.

The results pointed to three main reasons for the Northern Hemisphere darkening: melting snow and ice, declining air pollution, and rising water vapor.

“It made a lot of sense,” Loeb said. “The Northern Hemisphere’s surface is getting darker because snow and ice are melting. That exposes the land and ocean underneath. And pollution has gone down in places like China, the U.S., and Europe. It means there are fewer aerosols in the air to reflect sunlight. In the Southern Hemisphere, it’s the opposite.”

“Because the north is warming faster, it also holds more water vapor,” Loeb continued. “Water vapor doesn’t reflect sunlight, it absorbs it. That’s another reason the Northern Hemisphere is taking in more heat.”

Curiosity About Cloud Cover

One of the study’s interesting findings is what didn’t change over the past 20 years: cloud cover.

“The clouds are a puzzle to me because of this hemispheric symmetry,” Loeb said. “We kind of questioned whether this was a fundamental property of the climate system. If it were, the clouds should compensate. You should see more cloud reflection in the Northern Hemisphere relative to the Southern Hemisphere, but we weren’t seeing that.”

Loeb worked with models to understand these clouds.

“We are unsure about the clouds,” said Loeb.

“Understanding aerosol and cloud interactions is still a major challenge,” agreed Li. “Clouds remain the dominant factor adjusting our energy balance,” he said. “It’s very important.”

Still, Li said that “Dr. Norman Loeb’s study shows that not only does [the asymmetry] exist, but it’s important enough to worry about what’s behind it.”

Loeb is “excited about the new climate models coming out soon” and how they will further his work. “It’ll be interesting to revisit this question with the latest and greatest models.”

—Larissa G. Capella (@CapellaLarissa), Science Writer

Citation: Capella, L. G. (2025), New satellite data reveal a shift in Earth’s once-balanced energy system, Eos, 106, https://doi.org/10.1029/2025EO250399. Published on 23 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Melting Cylinders of Ice Reveal an Iceberg’s Tipping Point

Thu, 10/23/2025 - 13:22

The titanic dangers icebergs pose to ships are well documented. Sometimes, however, icebergs themselves can capsize, creating earthquakes and tsunamis or even pushing entire glaciers backward. Most of those dramatic events occur right after the chunk of floating ice splits off from its source, but sometimes icebergs flip over in the open ocean.

Earlier lab experiments using simulated plastic icebergs showed that the energy released in capsize events can rival nuclear weapon blasts. But beyond a general understanding that capsizing is likely related to melting induced by ocean warming, why icebergs flip is a harder question to answer. Large variations in iceberg size and shape, along with slow drifting across wide distances, make studying icebergs expensive and challenging.

One solution: make miniature icebergs in the lab and watch them melt under controlled conditions.

“Understanding the mathematics and the physics of what’s going on at a base level is important in order to scale up.”

“We wanted to study the simplest capsize problem we could come up with,” said Bobae Johnson, a physicist and Ph.D. student at the Courant Institute at New York University. She and her colleagues simplified and standardized iceberg shape to a cylinder of pure water ice 8 centimeters in diameter and 24 centimeters long. In their article for Physical Review Fluids, they described how each cylinder flipped several times over the course of a 30-minute experiment.

“It is good to look at these things on smaller scales because even what we were doing in the simplest setting gave us something very complex,” Johnson said. “Understanding the mathematics and the physics of what’s going on at a base level is important in order to scale up.”

From their experiments, Johnson and her colleagues linked the different rates of ice melt above and below the waterline to dynamic changes in the shape of the iceberg—including the location of the center of mass, which makes them flip. Despite the small scale of the experiments, the implications could be enormous.

“Icebergs play a key role in the climate system,” said Sammie Buzzard, a glaciologist at the Centre for Polar Observation and Modelling and Northumbria University who was not involved in the experiments. “When they melt, they add fresh, cold water to the ocean, which can impact currents.”

Icebergs, Soda Pop, and Cheerios

Real-world icebergs range in size from about 15 meters to hundreds of kilometers across, rivaling the size of some small nations. Tolkienesque mountain-like structures (“iceberg” literally means “ice mountain”) split off from glaciers, whereas flat slablike icebergs tend to break off from ice sheets like those surrounding Antarctica.

“An iceberg’s shape determines how it floats in the water and which parts are submerged and which parts sit above the ocean’s surface,” Buzzard said, adding that icebergs change shape as they melt or erode via wind and wave action. But the precise manner of this change is uncertain because in situ measurements are challenging. “If this erosion changes the shape enough that the iceberg is no longer stable in the water, [the iceberg] can suddenly flip over into a position in which it is stable.”

“Even if lab experiments aren’t exactly the same as a natural system, they can go a long way to improving our understanding of [iceberg capsizing].”

Whatever their major differences in shape and size, because they are fresh water floating on salt water, icebergs all share the property that roughly 10% of their mass sits above water, with the remaining 90% beneath. These similarities provided the starting point for the cylindrical iceberg experiments performed by Johnson and her collaborators.
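The roughly 90/10 split follows directly from Archimedes’ principle: a floating body displaces its own weight of water, so the submerged volume fraction equals the ratio of the densities. The density values in the sketch below are typical textbook figures, not measurements from the experiments.

```python
# Archimedes' principle: submerged fraction = density of ice / density of seawater.
# Typical textbook densities (assumed values, not from the study).
RHO_ICE = 917.0        # fresh (glacial) ice, kg/m^3
RHO_SEAWATER = 1025.0  # typical surface seawater, kg/m^3

submerged_fraction = RHO_ICE / RHO_SEAWATER
above_water = 1 - submerged_fraction

print(f"Submerged: {submerged_fraction:.1%}, above water: {above_water:.1%}")
```

With these values about 89.5% of the ice sits below the waterline, matching the familiar “tip of the iceberg” figure.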

A sphere or irregular body can rotate in many different directions, but a cylinder with a length greater than the diameter of its circular face floating in water will rotate along only one axis, effectively reducing the problem from three dimensions to two.

Standardizing the shape of the icebergs wasn’t the only simplification the team made. Under natural conditions, ice freezes from the outside in, which traps a lot of air. As icebergs melt, they sometimes release enough trapped air bubbles to make the surrounding water fizz like an opened can of soda pop. This effect can create chaotic motion in samples, so Johnson and collaborators opted to eliminate bubbles entirely in their experiment. To do so, they froze water in cylindrical molds suspended in extremely cold brine and stirred the water to drive residual air out—a process that took 24 to 48 hours for each cylinder.

This video depicts the flow of water beneath the surface of a melting model iceberg. Credit: New York University’s Applied Mathematics Laboratory

Finally, to keep the cylinders from drifting randomly in the ocean simulation tank, the researchers exploited the “Cheerios effect.” Floating cereal pieces tend to group together because of surface tension, so the team 3D printed pieces of flat plastic and coated them with wax. Placing those objects in the tank created a meniscus on either side of the cylinder, keeping it in place so the only motion it exhibited was the rotation they were looking for.

“The ice melts very slowly in the air and very quickly underwater,” Johnson said. In the experiment, that difference resulted in a gravitational instability as the center of mass shifted upward, making the whole cylinder flip. “Every time the ice locks into one position, it carves out a facet above the water and very sharp corners at the waterline, giving you a shape that looks quasi pentagonal about halfway through the experiment. We ran many, many experiments, and this happened across all of them.”

Buzzard emphasized the need for this sort of work. “Even if lab experiments aren’t exactly the same as a natural system, they can go a long way to improving our understanding of [iceberg capsizing],” she said. Every flip of a simulated iceberg could help us understand the effects on the warming ocean and the connection between small occurrences and global consequences.

—Matthew R. Francis (@BowlerHatScience.org), Science Writer

Citation: Francis, M. R. (2025), Melting cylinders of ice reveal an iceberg’s tipping point, Eos, 106, https://doi.org/10.1029/2025EO250390. Published on 23 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

How Plant-Fungi Friendships Are Changing

Wed, 10/22/2025 - 13:30
Source: Journal of Geophysical Research: Biogeosciences

Just as the human body contains a multitude of symbiotic microbial companions, most plant species also live alongside microbial friends. Among these companions are mycorrhizal fungi, which help plants gather water and nutrients—particularly nitrogen—from the soil. In exchange, plants provide mycorrhizal fungi with an average of 3% to 13% of the carbon they pull from the atmosphere through photosynthesis and sometimes as much as 50%.

This carbon donation to support mycorrhizal fungi can incur a significant carbon cost for plants. But few groups have investigated how environmental factors such as soil temperature and nitrogen levels influence the amount of carbon flowing from plants to mycorrhizal fungi and how this flow is likely to shift with climate change. To fill this gap, Shao et al. derived a model that they call Myco-CORPSE (Mycorrhizal Carbon, Organisms, Rhizosphere, and Protection in the Soil Environment) that illustrates how the environment influences interactions between plants and mycorrhizal fungi.

When the researchers fed data from more than 1,800 forest sites in the eastern United States into Myco-CORPSE, they obtained some familiar results and also made some new discoveries. The model echoed previous work in suggesting that increasing the abundance of soil nitrogen, for example, through fertilizer runoff, decreases the dependence of plants on mycorrhizal fungi and therefore reduces the amount of carbon plants allocate to their microbial counterparts. But in contrast to previous studies, these researchers found that rising soil temperatures had the same effect of reducing the amount of nitrogen and carbon exchanged by fungi and plants. That’s because warmth accelerates the breakdown of organic material, which releases nitrogen. Increasing atmospheric carbon dioxide levels, on the other hand, will likely increase the reliance of plants on mycorrhizal fungi by increasing the growth rate of plants and therefore increasing their need for nutrients.

The Myco-CORPSE model also replicated observed patterns, showing that the two major kinds of mycorrhizal fungal species (arbuscular and ectomycorrhizal) behave differently: Arbuscular trees tend to donate less carbon to their associated fungi relative to how much ectomycorrhizal trees donate to theirs. The model also found that forests with a mix of both kinds of species typically accrue less carbon from plants than forests with less mycorrhizal diversity.

As forest managers navigate the many stresses that forests face today, promoting a diversity of mycorrhizal species within forests could optimize plant growth while minimizing the carbon diverted to mycorrhizal fungi, the researchers wrote. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1029/2025JG009198, 2025)

This article is part of the special collection Biogeosciences Leaders of Tomorrow: JGR: Biogeosciences Special Collection on Emerging Scientists.

—Saima May Sidik (@saimamay.bsky.social), Science Writer

Citation: Sidik, S. M. (2025), How plant-fungi friendships are changing, Eos, 106, https://doi.org/10.1029/2025EO250397. Published on 22 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

An Asteroid Impact May Have Led to Flooding near the Grand Canyon

Wed, 10/22/2025 - 13:30

When it comes to famous holes in the ground, northern Arizona has two: Grand Canyon and Barringer Meteorite Crater.

New research now suggests that these famous depressions might, in fact, be linked—the impact that created the crater roughly 56,000 years ago might also have unleashed landslides in a canyon that’s part of Grand Canyon National Park today. Those landslides in turn likely dammed the Colorado River and temporarily created an 80-kilometer-long lake, the team proposed. The results were published in Geology.

Driftwood Then and Now

“These are two iconic features of Arizona.”

Karl Karlstrom, a geologist recently retired from the University of New Mexico, grew up in Flagstaff, Ariz. Grand Canyon and Barringer Meteorite Crater both were therefore in his proverbial backyard. “These are two iconic features of Arizona,” said Karlstrom.

Karlstrom’s father—also a geologist—used to regularly explore the caves that dot the walls of Grand Canyon and surrounding canyons. In 1970, he collected two pieces of driftwood from a cavern known as Stanton’s Cave. The mouth of Stanton’s Cave is more than 40 meters above the Colorado River, so finding driftwood in its recesses was unexpected. Routine flooding couldn’t have lofted woody detritus that high, said Karlstrom. “It would have required a flood 10 times bigger than any known flood over the last 2,000 years.”

The best radiocarbon dating available in the 1970s suggested that the driftwood was at least 35,000 years old. A colleague of the elder Karlstrom suggested that the driftwood had floated into Stanton’s Cave when an ancient landslide temporarily dammed the Colorado, raising water levels. The researchers even identified the likely site of the landslide—a wall of limestone in Nankoweap Canyon.

But what had set off that landslide in the first place? That’s the question that Karl Karlstrom and his colleagues sought to answer. In 2023, the researchers collected two additional samples of driftwood from another cave 5 kilometers downriver from Stanton’s Cave.

A “Striking” Coincidence

Modern radiocarbon dating of both the archival and newly collected driftwood samples yielded ages of roughly 56,000 years, with uncertainties of a few thousand years, for all samples. The team also dated sand collected from the second cave; it too had ages that, within the errors, were consistent with the sand having been emplaced 56,000 years ago.

The potential significance of that timing didn’t sink in until one of Karlstrom’s international collaborators took a road trip to nearby Barringer Meteorite Crater, also known as Meteor Crater. There, he learned that the crater is believed to have formed around 56,000 years ago.

That coincidence was striking, said Karlstrom, and it got the team thinking that perhaps these two famous landmarks of northern Arizona—Meteor Crater and Grand Canyon National Park—might be linked. The impact that created Meteor Crater has been estimated to have produced ground shaking equivalent to that of an M5.2–5.4 earthquake. At the 160-kilometer distance of Nankoweap Canyon, the purported site of the landsliding, that ground movement would have been attenuated to roughly M3.3–3.5.

It’s impossible to know for sure whether such movement could have dislodged the limestone boulders of Nankoweap Canyon, Karlstrom and his colleagues concede. That’s where future modeling work will come in, said Karlstrom. It’s important to remember that an asteroid impact likely produces a distinctly different shaking signature than an earthquake caused by slip on a fault, said Karlstrom. “Fault slip earthquakes release energy from several kilometers depths whereas impacts may produce larger surface waves.”

But there’s good evidence that a cliff in Nankoweap Canyon did, indeed, let go, said Chris Baisan, a dendrochronologist at the Laboratory of Tree-Ring Research at the University of Arizona and a member of the research team. “There was an area where it looked like the canyon wall had collapsed across the river.”

An Ancient Lake

Using the heights above the Colorado where the driftwood and sand samples were collected, the team estimated that an ancient lake extended from Nankoweap Canyon nearly 80 kilometers upstream. At its deepest point, it would have measured roughly 90 meters. Such a feature likely persisted for several decades until the lake filled with sediment, allowing the river to overtop the dam and quickly erode it, the team concluded.

“They’re certainly close, if not contemporaneous.”

The synchronicity in ages between the Meteor Crater impact and the evidence of a paleolake in Nankoweap Canyon is impressive, said John Spray, a planetary scientist at the University of New Brunswick in Canada not involved in the research. “They’re certainly close, if not contemporaneous.” And while it’s difficult to prove causation, the team’s assertion that an impact set landslides in motion in the area around Grand Canyon is convincing, he added. “I think the likelihood of it being responsible is very high.”

Karlstrom and his collaborators are continuing to collect more samples from caves in Grand Canyon National Park. So far, they’ve found additional evidence of material that dates to roughly 56,000 years ago, as well as even older samples. It seems that there might have been multiple generations of lakes in the Grand Canyon area, said Karlstrom. “The story is getting more complicated.”

—Katherine Kornei (@KatherineKornei), Science Writer

Citation: Kornei, K. (2025), An asteroid impact may have led to flooding near the Grand Canyon, Eos, 106, https://doi.org/10.1029/2025EO250391. Published on 22 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Another landslide dam flood at the site of the Matai’an rock avalanche in Taiwan

Wed, 10/22/2025 - 06:59

A landslide within the debris from the Matai’an rock avalanche allowed another barrier lake to form. This lake breached on 21 October 2025, generating another damaging debris flow.

Newspapers in Taiwan are reporting that a new landslide barrier lake formed and then failed at the site of the giant Matai’an rock avalanche. The breach event apparently occurred at about 9 pm local time on 21 October 2025. The risk had been identified in advance and the downstream population had been evacuated successfully this time, so there are no reports of fatalities.

The Taipei Times has an image of the barrier lake that was released by the Hualien branch of the Forestry and Nature Conservation Agency:-

The Matai’an landslide barrier lakes prior to the failure of the lower one on 21 October 2025. Photo courtesy of the Hualien branch of the Forestry and Nature Conservation Agency via the Taipei Times.

There is also a video on Youtube from Focus Taiwan (CNA English News) that includes helicopter footage of the site, also provided by the Forestry and Nature Conservation Agency:-

This includes the following still:-

The lower Matai’an landslide barrier lake prior to the failure on 21 October 2025. Still from a video posted to Youtube by CNA English News – original footage courtesy of the Hualien branch of the Forestry and Nature Conservation Agency.

It appears to me that the barrier lake formed because of a large landslide in the debris from the original rock avalanche. Note the dark-coloured landslide scar on the left side of the image.

Loyal readers will remember that I highlighted that this could be an issue in my post on 3 October:-

“So, a very interesting question will now pertain to the stability of these slopes. How will they perform in conditions of intense rainfall and/or earthquake shaking? Is there the potential for a substantial slope failure on either side, allowing a new (enlarged) lake to form.”

“This will need active monitoring (InSAR may well be ideal). The potential problems associated with the Matai’an landslide are most certainly not over yet.”

There is a high probability that this will be a recurring issue in periods of heavy rainfall.

Meanwhile, keep a close eye on Tropical Storm Melissa, which is tracking slowly northwards in the Caribbean. This could bring exceptionally high levels of rainfall to Haiti and Jamaica as it is moving very slowly. This one looks like a disaster in waiting at the moment.

Return to The Landslide Blog homepage Text © 2023. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

To Find Critical Minerals, Look to Plate Tectonics

Tue, 10/21/2025 - 13:31

For much of the 20th century, “petroleum politics” shaped international policy. In the 21st century, a new set of resources has taken center stage: critical minerals. Sourcing and extracting these minerals have become a priority for countries and communities around the world because they are used in everything from solar panels to cell phones to superconductors.

A new study suggests where prospectors can search for critical minerals: rifting sites left behind by the supercontinent Rodinia, which broke up in the Neoproterozoic, around 800 million years ago.

“To better find [critical] resources, really, we need a better understanding of geology.”

“Unless it is grown, absolutely everything on the planet that we use as a manufactured good requires something that comes out of a mine,” said Chris Kirkland, a geologist at Curtin University in Australia and a coauthor of the new study, published last month in Geological Magazine. “To better find those resources, really, we need a better understanding of geology.”

Kirkland and his colleagues began by analyzing rocks unearthed by drilling companies in Western Australia. The slabs contain carbonatite, a “weird,” rare, and poorly understood kind of igneous rock formed in the mantle from magmas rich in carbonate minerals. As the magmas rise through Earth’s interior, they react with surrounding rocks, altering the chemical signatures that geologists typically use to trace a sample’s origins.

Carbonatites often contain critical metals, such as niobium. Although niobium can be found in other rock types, carbonatites are the only ones that host it in concentrations economically suitable for extraction. The Western Australia sites are home to more than 200 million metric tons of the metal.

The team “threw the whole kitchen sink of analytical techniques” at the carbonatites, explained Kirkland. The first step was to take a drill core sample and image its structure to see the broad geological ingredients inside. Then the researchers used lasers to sample individual mineral grains and analyze their constituent crystals.

The carbonatites contained zircon, apatite, and mica, all crystals with isotopes that decay at known rates and can tell researchers about the sample’s age and source. The researchers also analyzed the helium present in zircon, because helium is a volatile element that easily escapes rocks near the surface and can help reveal when the rocks reached the crust.
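The dating principle behind these isotopic clocks reduces to the standard radioactive decay equation. The sketch below is illustrative only and is not the authors' workflow; the isotope system (U-238 to Pb-206) and the measured ratio are hypothetical example values.

```python
import math

def radiometric_age(daughter_to_parent, half_life_yr):
    """Solve the decay equation t = ln(1 + D/P) / lambda,
    where lambda = ln(2) / half-life, for the sample age in years."""
    decay_constant = math.log(2) / half_life_yr
    return math.log(1 + daughter_to_parent) / decay_constant

# Hypothetical example: a U-238 -> Pb-206 system (half-life ~4.468 billion years)
# with a measured daughter/parent ratio of 0.2 gives an age of roughly 1.2 billion years.
age = radiometric_age(0.2, 4.468e9)
```

The same relation underpins zircon, apatite, and mica geochronology; only the isotope pair, its half-life, and the measured ratio change.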

Written in Stone

The story written in the slabs is one tied to the long history of plate tectonics. The breakup of Rodinia began around 800 million years ago and continued for millions of years as hot, metal-enriched oozes of magma rose from the mantle. Pressure from this rising rock helped split apart the supercontinent, and the metals encased in carbonatites breached the surface at once-stable blocks of continental crust called cratons.

Today, said Kirkland, tracking these “old fossil scars” where cratons split could reveal stores of minerals.

More than 200 million metric tons of niobium were recently identified in Australia’s Aileron Province, a likely result of the breakup of Rodinia. Credit: Dröllner et al., 2025, https://doi.org/10.1017/S0016756825100204

“Reconstructing a geologic history for one particular area on Earth is something that I think has potential to help us in better understanding these pretty poorly understood carbonatite systems globally,” said Montana State University geologist Zachary Murguía Burton, who was not involved with the paper.

Burton estimates that some 20% of the carbonatites on Earth contain economically attractive concentrations of critical minerals, although he noted that the rocks in the study experienced a unique confluence of local and regional geologic processes that might influence the minerals they contain.

In particular, analysis of the carbonatites in the new study identified the source of recently discovered niobium deposits beneath central Australia. Niobium is a critical mineral used in lithium-ion batteries and to strengthen and lighten steel. Because 90% of today’s supply of niobium comes from a single operation in Brazil, finding additional deposits is a priority.

In addition to niobium, Kirkland said a geologic “recipe” similar to the one his team identified might work for finding gold.

The work is an important reminder of “how tiny minerals and clever dating techniques can not only solve deep-time geological puzzles, but also help guide the hunt for the critical metals we need,” Kirkland said.

—Hannah Richter (@hannah-richter.bsky.social), Science Writer

Citation: Richter, H. (2025), To find critical minerals, look to plate tectonics, Eos, 106, https://doi.org/10.1029/2025EO250393. Published on 21 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Seismic Anisotropy Reveals Deep-Mantle Dynamics

Tue, 10/21/2025 - 13:31
Source: Geochemistry, Geophysics, Geosystems

In some parts of Earth’s interior, seismic waves travel at different speeds depending on the direction in which they are moving through the rock. This property is known as seismic anisotropy, and it can offer important information about how the silicate rock of the mantle—particularly at the mantle’s lowermost depths—deforms. In contrast, areas through which seismic waves travel at the same speed regardless of direction are considered isotropic.

In the bottom 300 kilometers of the mantle, also known as the D’’ layer, anisotropy is potentially caused by mantle plumes or mantle flow interacting with the edges of large low-shear-velocity provinces: continent-sized, dense, hot BLOBs (big lower-mantle basal structures) at the base of the mantle above the core. Many questions persist about the viscosity, movement, stability, and shape of the BLOBs, as well as about how they are influenced by mantle plumes and subduction.

Roy et al. used ASPECT, a 3D mantle convection modeling software, and ECOMAN, a mantle fabric simulation code, to examine the deep mantle. They tested five different mantle model configurations, adjusting the viscosity and density of the BLOBs. The goal was to see which configuration would most closely re-create the observed seismic anisotropy.

The researchers treated the BLOBs as regions with their own unique chemistry, which form from a 100-kilometer-thick layer at the bottom of the mantle. Their models simulated how mantle plumes formed over the past 250 million years, during which time events such as the breakup of Pangaea, the opening of the Atlantic, and the evolution of various subduction zones occurred.

The study suggests that the observed seismic anisotropy is best explained when the BLOBs are 2% denser and 100 times more viscous than the surrounding mantle; this configuration aligns with anisotropy patterns in seismic data. Plumes form mainly at the edges of BLOBs, where strong deformation produces strong anisotropy. (Geochemistry, Geophysics, Geosystems, https://doi.org/10.1029/2025GC012510, 2025)

—Rebecca Owen (@beccapox.bsky.social), Science Writer

Citation: Owen, R. (2025), Seismic anisotropy reveals deep-mantle dynamics, Eos, 106, https://doi.org/10.1029/2025EO250392. Published on 21 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Alaska Awaits Response from FEMA in the Aftermath of Major Floods

Mon, 10/20/2025 - 16:45
Research & Developments is a blog for brief updates that provide context for the flurry of news that impacts science and scientists today.

Major floods in Alaska have killed at least one person and displaced thousands of others over the course of the last two weeks. Many of the displaced may not be able to return home for 18 months or longer, according to Alaska Gov. Mike Dunleavy.

Tropical Storm Halong formed in the northern Philippine Sea on 5 October and had become a Category 4 typhoon by 7 October. Though it was considered an ex-typhoon by the time it reached western Alaska, the storm brought wind speeds of up to 113 miles per hour (181 kilometers per hour), along with severe flooding across the Yukon Delta, Kuskokwim Delta, and Norton Sound.

 

Among the hardest hit population centers were the villages of Kipnuk and Kwigillingok, home to a combined 1,000 people, mostly Alaska Native or American Indian. At this time of year, the remote villages can only be reached by water or by air.

In Kipnuk, water levels rose 5.9 feet (1.8 meters) above the normal highest tide line. In Kwigillingok, water levels measured 6.3 feet (1.9 meters) above the normal highest tide line—more than double the previous record set in 1990. According to a letter from the governor’s office to President Trump, 90% of structures in Kipnuk and 35% of structures in Kwigillingok have been destroyed.

The Alaska Air and Army National Guard, the U.S. Coast Guard, and Alaska State Troopers evacuated hundreds of residents to the regional hub of Bethel, then to Anchorage, the state’s largest city, in what the Alaska National Guard called the largest airlift operation in state history.

“It’s been an all-hands-on deck endeavor, and everybody is trying to support their fellow Alaskans in their time of need,” said Col. Christy Brewer, the Alaska National Guard director of joint operations, in a 19 October statement.

Silence From FEMA

But calls for assistance from the Federal Emergency Management Agency seem to have so far gone unanswered, leaving some people asking, “Where is FEMA?”

An urgent question. According to the FEMA Daily Briefing a presidential disaster declaration was requested on October 16th. To the best of my knowledge it hasn’t been granted. Any event of this size should be an easy and immediate yes.

Dr. Samantha Montano (@samlmontano.bsky.social) 2025-10-18T23:13:44.421Z

As reported by the New York Times, the EPA revoked a $20 million grant in May that was intended to protect Kipnuk from extreme flooding. The grant cancellation was likely part of a larger effort by the administration to shift the burden of disaster response to states.

On 16 October, Dunleavy submitted a request to President Trump to declare a major disaster for the state.

The letter notes that Alaska has seen 57 state-declared disasters since November 2018, 14 of which have been approved for federal disaster assistance. There have been 14 state-declared disasters in Alaska in the last 12 months alone, including fires, freezes, landslides, and floods.

“It is anticipated that more than 1,500 Alaskans will be evacuated to our major cities, many of whom will not be able to return to their communities and homes for upwards of 18 months,” Gov. Dunleavy wrote. “This incident is of such magnitude and severity that an effective response exceeds state and local capabilities, necessitating supplementary federal assistance to save lives, protect property, public health, and safety, and mitigate the threat of further disaster.”

On 17 October, Alaska’s senators and state representative (all Republicans) also submitted a letter to President Trump, urging him to approve the governor’s request for a major disaster declaration.

Also on 17 October, Vice President JD Vance said on X that he and the president were “closely tracking the storm devastation,” and that the federal government was working closely with Alaska officials. On 18 October, Lisa Murkowski (R-AK) said she believed FEMA representatives were “totally on the ground.”

However, as of 20 October, the incident is not listed in FEMA’s disaster declaration database.

—Emily Gardner (@emfurd.bsky.social) Associate Editor

These updates are made possible through information from the scientific community. Do you have a story about science or scientists? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The Southern Ocean May Be Building Up a Massive Burp

Mon, 10/20/2025 - 13:16
Source: AGU Advances

The ocean has helped mitigate global warming by absorbing around a quarter of anthropogenic carbon dioxide (CO2) emissions, along with more than 90% of the excess heat those emissions generate.

Many efforts, including assessments by the Intergovernmental Panel on Climate Change, have looked at how the oceans may continue to mitigate increasing emissions and global warming. However, few have looked at the opposite: How will the oceans respond if emissions and associated atmospheric heat levels begin to decrease in response to net negative emissions?

Frenger et al. examined what might happen in the Southern Ocean if, after more than a century of human-induced warming, global mean temperatures were reduced via CO2 removal from the atmosphere. The Southern Ocean is a dynamic system, with large-scale upwelling and a robust ability to take up excess carbon and heat. To better understand how the Southern Ocean would behave under net negative carbon conditions, the researchers modeled how the ocean and the atmosphere would interact.

They used the University of Victoria climate model, UVic v. 2.9, to simulate multicentury timescales and carbon cycle feedbacks. UVic uses a combination of an atmospheric energy–moisture balance model, an ocean circulation and sea ice model, a land biosphere model, and an ocean biochemistry model. The researchers used UVic to model an idealized climate change scenario commonly used in climate modeling: Emissions increase until atmospheric CO2 levels double after 70 years, followed by a steep emissions cut and subsequent sustained net negative emissions.
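As a quick check on the scenario's internal arithmetic: doubling atmospheric CO2 after 70 years corresponds to the canonical idealized assumption of roughly 1% compound growth per year. This is an inference from the numbers quoted above, not a parameter stated in the study.

```python
# CO2 doubling in 70 years is consistent with ~1% per year compound growth,
# the classic "1% per year to doubling" idealized forcing scenario.
growth_rate = 0.01          # assumed fractional increase per year
years_to_double = 70
factor = (1 + growth_rate) ** years_to_double  # close to 2.0
```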

The results showed that after several centuries of net negative emissions levels and gradual global cooling, the Southern Ocean abruptly released a burst of accumulated heat—an oceanic “burp”—that led to a decadal- to centennial-scale period of warming. This warming was comparable to average historical anthropogenic warming rates. The team said that because of seawater’s unique chemistry, this burp released relatively little CO2 along with the heat.

Frenger and colleagues note that their work uses a model with intermediate-level complexity and an idealized climate change scenario, but that their findings were consistent when tested with other modeling setups. They say the Southern Ocean’s importance to the global climate system, including its role in heat release to the atmosphere in a cooling climate, should be studied further and contemporary changes closely monitored. (AGU Advances, https://doi.org/10.1029/2025AV001700, 2025)

—Sarah Derouin (@sarahderouin.com), Science Writer

Citation: Derouin, S. (2025), The Southern Ocean may be building up a massive burp, Eos, 106, https://doi.org/10.1029/2025EO250385. Published on 20 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Publishing Participatory Science: The Community Science Exchange

Mon, 10/20/2025 - 12:00
Editors’ Vox is a blog from AGU’s Publications Department.

The Community Science Exchange was founded in 2021 to elevate the work of scientists, scholars, and community members collectively engaged in participatory science and to broaden the reach of their discoveries, results, and science-based solutions. Now more than ever, we would like to recognize the importance of the work of the Community Science Exchange in fostering an inclusive scientific community and strengthening public trust in science. Here, we highlight the publication outlets offered by the Community Science Exchange and encourage the AGU community to contribute.

The Community Science Exchange aims to encourage, foster, and promote co-production between science and community.

Within equitable participatory science, or a collective scientific endeavor giving significant voice and weight to both science and publics, the Community Science Exchange defines “community” variously as place-based, a group defined by a shared culture or heritage, and/or a group defined by a shared experience. From environmental concerns to public health, anthropology to engineering, the Community Science Exchange aims to encourage, foster, and promote co-production between science and community. To aid in the integration of local knowledge and lived experience, the Community Science Exchange specifically includes community voice in its publications: as authors, in sections devoted to community description and community impact, and in quotes from community members involved in and/or affected by the work. Scientists and academic scholars with an interest in elevating their community partners within their publications instead of hiding them in an acknowledgment should consider publication within the Exchange.

The American Geophysical Union hosts the Community Science Exchange with further support and guidance from five partnership organizations: the American Anthropological Association (AAA), the American Public Health Association (APHA), the Association for Advancing Participatory Sciences (AAPS), the Unión Geofísica Mexicana (UGM), and Wiley. To broaden the publication venues for community members and organizations, practitioners, boundary spanners, and others who may not receive career benefits from scientific journal publication, the Community Science Exchange has created two new avenues for those who want to publish and share their work: the journal Community Science and the online publication venue managed by AGU, the Hub.

Since its first issue in June 2022, Community Science has published articles discussing a variety of topics of interest to communities and scientists, including water quality, plastic pollution, language as a barrier to equitable access to scientific literature, and integration of Indigenous knowledge in shellfish monitoring. Community Science has also participated in several special collections, including on air quality, equitable co-production, and sustainable agriculture. Growing steadily in submissions, Community Science received the PROSE Award for Journals from the Association of American Publishers in 2024. The journal is open access, allowing anyone to read the published work for free.

Because Community Science is a peer-reviewed journal, manuscripts go through an evaluation and revision process to ensure that research published in the journal rigorously advances both science and community outcomes. Like the other journals within the AGU journal portfolio, those who review for Community Science are welcome to invite a co-reviewer. This practice can help early-career researchers become thorough and constructive reviewers, and it can invite experienced community organizers, boundary spanners, and those with relevant lived expertise to engage in thoughtful reviews complementary to scientific review. Publications in both Community Science and the Hub are periodically featured in Editor’s Highlights, in which editors explain what they found exciting about a work, or in Research Spotlights, which are written by Eos’ professional science writers and feature recent newsworthy work. These features offer a more approachable point of entry to explore the science.

Unlike any other journal in the AGU portfolio, the Community Science Exchange also supports an alternate publication venue – the Hub – which is hosted on the Community Science Exchange website. Broadening the definition and understanding of scientific research, work, and resources, the Hub seeks to deepen the connection between science and community.

The Hub is home to a wide variety of content, including stand-alone submissions that are intentionally written outside of the strictures of a scientific journal format.

The Hub is home to a wide variety of content, ranging from stand-alone submissions that are intentionally written outside the strictures of a scientific journal format to “complementary materials” that allow journal paper authors to enrich their articles with linked materials furthering community voice. Although the Hub isn’t a scholarly journal in the traditional sense, all submissions are editor-vetted before potential revision and publication. Any new, original content published on the Hub is now eligible to receive a permanent digital object identifier (DOI), allowing it to be cited in the references of scholarly publications and other content.

Authors can submit materials to the Hub that fall into one of four categories:

Project Descriptions are narratives of work done, or even more formalized case studies. They should include a description of the community involved, an explanation of the community knowledge utilized, and a summary of the work done. Example: Climate Safe Neighborhoods [Project Description] (doi.org/10.1029/2024CSE000101)

Protocols and Methods are for describing how the community science work was done. These could be practiced approaches, descriptions of relevant policies to be considered, or outlines of project development.

Tools and Resources are items that can help others with their own community science work, such as datasets or visualization tools. Even descriptions of useful apps are welcome.

Educational Materials are items geared toward educating or training about community science practices. These could include instruction manuals, guidebooks, or even workshop or webinar curricula.

Because the Hub is a living initiative, evolving with the needs and desires of the community, submissions that don’t cleanly fit into any one of these categories will still be considered.

If you are interested in joining the Community Science Exchange’s efforts to expand how we view, publish, and share science, please email us at communitysci@agu.org. Whether you have a resource to submit to the Hub or an article to submit to the journal, want to be a reviewer, or want to apply to be an editor – we’d love to hear from you.

Finally, we want to thank all of those who have served as editors of this initiative so far, both past and present (starred are original editorial board members):

  • Julia Parrish*, current Editor-in-Chief
  • Kathryn Semmens*, current Deputy Editor of the Hub
  • Claire Beveridge*, current editor
  • Gillian Bowser, current editor
  • Muki Haklay*, current editor
  • Rajul Pandya, current editor
  • Jean Schensul*, founding Deputy Editor, current editor
  • Kevin Noone*, founding Editor-in-Chief, past editor
  • Paula Buchanan*, founding Deputy Editor, past editor
  • Shobhana Gupta*, past editor
  • Heidi Roop*, past editor
  • Roopam Shukla*, past editor

—Allison Schuette (aschuette@agu.org, 0009-0007-1055-0937), Program Coordinator, AGU Publications; Julia Parrish (0000-0002-2410-3982), Editor-in-Chief, Community Science Exchange; Kathryn Semmens (0000-0002-8822-3043), Deputy Editor, The Hub; Kristina Vrouwenvelder (0000-0002-5862-2502), Assistant Director, AGU Publications; and Sarah Dedej (0000-0003-3952-4250), Assistant Director, AGU Publications

Citation: Schuette, A., J. Parrish, K. Semmens, K. Vrouwenvelder, and S. Dedej (2025), Publishing participatory science: the Community Science Exchange, Eos, 106, https://doi.org/10.1029/2025EO255032. Published on 20 October 2025. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Universities Reject Trump Funding Deal

Fri, 10/17/2025 - 16:09
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

The “Compact for Academic Excellence in Higher Education,” developed by the Trump administration and sent to nine universities on 1 October, proposes that the institutions agree to a series of criteria in exchange for preferential treatment in funding decisions.

The compact’s provisions ask universities to: 

  • Ban the consideration of any demographic factors, including sex, ethnicity, race, sexual orientation, and religion, in any admissions decisions, financial aid decisions, or hiring decisions.
  • Commit to “institutional neutrality,” create an “intellectually open campus environment,” and abolish “institutional units that purposefully punish, belittle, and even spark violence against conservative ideas.”
  • Require all employees to abstain from actions or speech related to social and political events unless such events have a direct impact on their university or they are acting in their individual capacity rather than as university representatives. 
  • Interpret the words “woman,” and “man” according to “reproductive function and biological processes.”
  • Stop charging tuition for any admitted student pursuing “hard science” programs. (This applies only to universities with endowments over $2 million per undergraduate student.)
  • Disclose foreign funding and gifts.

The proposed deal was sent to the University of Pennsylvania, the University of Virginia, the University of Arizona, the University of Texas at Austin, the University of Southern California, Vanderbilt University, Dartmouth College, Brown University, and the Massachusetts Institute of Technology.

 

“Any university that refuses this once-in-a-lifetime opportunity to transform higher education isn’t serving its students or their parents—they’re bowing to radical, left-wing bureaucrats,” Liz Huston, a White House spokesperson, told Bloomberg.

Simon Marginson, a professor of higher education at Oxford University, told Time that if successful, the compact would “establish a level of federal control of the national mind that has never been seen before.” 

On 12 October, President Trump opened up the offer to all institutions of higher education in a post on social media website Truth Social.

As of 20 October, the following schools have responded to Trump’s offer:

  • Massachusetts Institute of Technology: MIT was the first to reject Trump’s offer. In a 10 October letter to the administration, MIT President Sally Kornbluth wrote that MIT’s practices “meet or exceed many standards outlined in the document,” but that the compact “also includes principles with which we disagree, including those that would restrict freedom of expression and our independence as an institution.”
  • Brown University: In a 15 October letter to the administration, Brown University President Christina H. Paxson declined the deal. She wrote that Brown “would work with the government to find solutions if there were concerns about the way the University fulfills its academic mission,” but that, like Kornbluth, she was “concerned that the Compact by its nature and by various provisions would restrict academic freedom and undermine the autonomy of Brown’s governance.”
  • University of Southern California: In a 16 October statement, USC Interim President Beong-Soo Kim informed the university community that he had declined the deal, and wrote that the university takes legal obligations seriously and is diligently working to streamline administrative functions, control tuition rates, maintain academic rigor, and ensure that students develop critical thinking skills. “Even though the Compact would be voluntary, tying research benefits to it would, over time, undermine the same values of free inquiry and academic excellence that the Compact seeks to promote,” he wrote.
  • University of Pennsylvania: In a 16 October statement, UPenn President J. Larry Jameson informed the university community that he had declined to sign the compact. “At Penn, we are committed to merit-based achievement and accountability. The long-standing partnership between American higher education and the federal government has greatly benefited society and our nation. Shared goals and investment in talent and ideas will turn possibility into progress,” he wrote.
  • University of Virginia: In a 17 October letter to the administration, UVA Interim President Paul Mahoney declined to sign the compact. “We seek no special treatment in exchange for our pursuit of those foundational goals,” the letter said. “The integrity of science and other academic work requires merit-based assessment of research and scholarship. A contractual arrangement predicating assessment on anything other than merit will undermine the integrity of vital, sometimes lifesaving, research and further erode confidence in American higher education.”
  • Dartmouth College: In an 18 October letter to the administration, Dartmouth President Sian Leah Beilock declined the deal. “I do not believe that the involvement of the government through a compact—whether it is a Republican- or Democratic-led White House—is the right way to focus America’s leading colleges and universities on their teaching and research mission,” Beilock wrote.
  • University of Arizona: In a 20 October announcement, President Suresh Garimella said he had declined to agree to the proposal and had instead submitted a Statement of Principles to the U.S. Department of Education informed by “hundreds of U of A stakeholders and partner organizations.” “This response is our contribution toward a national conversation about the future relationship between universities and the federal government. It is critical for the University of Arizona to take an active role in this discussion and to work toward maintaining a strong relationship with the federal government while staying true to our principles,” Garimella wrote.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

20 October: This article was updated to include the University of Virginia and Dartmouth College.

21 October: This article was updated to include the University of Arizona.

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

When the Earth Moves: 25 Years of Probabilistic Fault Displacement Hazards

Fri, 10/17/2025 - 16:08
Editors’ Vox is a blog from AGU’s Publications Department.

Earthquake surface ruptures can cause severe damage to infrastructure, and while structures can be engineered to accommodate some fault movement during an earthquake, one of the best preventative measures is to avoid unnecessary risks in the first place.

A new article in Reviews of Geophysics explores the history of Probabilistic Fault Displacement Hazard Assessments (PFDHA) and recent efforts to improve them with modern methods. Here, we asked the authors to give an overview of PFDHAs, how scientists’ methods have evolved over time, and future research directions.

What is fault displacement and what kinds of risks are associated with it?

Fault displacement occurs when an earthquake breaks the ground surface along a fault. This displacement along the fault can shift the ground horizontally and/or vertically, by several meters for the largest earthquakes. Such ruptures pose serious risks to infrastructure located across faults—such as pipelines, transportation systems, dams, and power generation facilities—which may be torn apart or severely damaged. While some facilities can be engineered to tolerate limited movements, many critical systems are highly vulnerable, making it essential to evaluate this hazard.

This figure shows the Trans-Alaska Pipeline crossing the Denali Fault, which ruptured during the 2002 earthquake. Photos and diagrams illustrate how the pipeline was designed to bend and slide, allowing it to survive several meters of fault movement without breaking. Credit: Valentini et al. [2025], Figure 5

In simple terms, what are Probabilistic Fault Displacement Hazard Assessments (PFDHA)?

A Probabilistic Fault Displacement Hazard Assessment (PFDHA) is a quantitative method that estimates the likelihood that an earthquake will rupture the surface at a specific site and anticipates the magnitude of the displacement. Instead of giving a single answer, PFDHA provides probabilities of different displacement levels for different reference periods of interest. This allows engineers and planners to evaluate risks in a structured way and make informed decisions about building designs or land use near faults.
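As a rough sketch of that logic (not the authors' method; the annual rate, the rupture-at-site probability, and the lognormal displacement distribution below are all invented for illustration), a PFDHA combines an earthquake occurrence rate, the conditional chance that a rupture reaches the site, and a probability distribution of displacement amplitudes:

```python
import math

# Toy PFDHA sketch. All numbers are illustrative assumptions, not values
# from any real fault or from the article.

rate = 0.002             # assumed annual rate of surface-rupturing earthquakes
p_rupture_at_site = 0.3  # assumed probability the rupture reaches the site

def p_displacement_exceeds(d_meters, median=0.5, sigma=0.8):
    """Probability that displacement exceeds d, given rupture at the site,
    using an assumed lognormal displacement distribution."""
    z = (math.log(d_meters) - math.log(median)) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

def annual_exceedance_rate(d_meters):
    # Combine the three ingredients: occurrence x rupture-at-site x exceedance.
    return rate * p_rupture_at_site * p_displacement_exceeds(d_meters)

def prob_in_period(d_meters, years=50):
    # Poisson assumption: probability of at least one exceedance in the period.
    lam = annual_exceedance_rate(d_meters) * years
    return 1.0 - math.exp(-lam)

for d in (0.1, 0.5, 2.0):
    print(f"P(displacement > {d} m in 50 yr) = {prob_in_period(d):.4%}")
```

Real assessments replace each assumed ingredient with empirical regressions fit to surface-rupture observations, which is where databases such as SURE and FDHI come in.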

This diagram explains how scientists estimate the expected amount of displacement due to an earthquake and at a specific site. It shows the main steps and data used in a Probabilistic Fault Displacement Hazard Assessment (PFDHA). Credit: Valentini et al. [2025], Figure 8

How have Fault Displacement Hazard Assessments evolved over time?

The first systematic PFDHA was developed in the early 2000s for the Yucca Mountain nuclear waste repository in the USA. Since then, the methodology has expanded from normal faults to include strike-slip and reverse faults worldwide. Over the past 25 years, new global databases of surface ruptures supporting statistical analysis, advances in statistical modeling, and international benchmark exercises have significantly improved the reliability and comparability of PFDHA approaches. In the future, the field should integrate remote sensing data, artificial intelligence, and physics-based modeling to better capture the complexity of earthquake ruptures.

What are the societal benefits of developing PFDHAs?

By quantifying the hazard of surface fault rupture, PFDHAs provide critical input for the safe design of infrastructures. This helps to avoid catastrophic failures such as pipeline leaks, dam collapses and resulting flooding, or road and railway disruption. Beyond engineering, PFDHAs also support land-use planning by identifying areas where construction should be avoided. Ultimately, these assessments reduce economic losses, improve resilience, and protect human lives in earthquake-prone regions.

What are some real-life examples of PFDHAs being developed and implemented?

One of the earliest and most influential applications was at Yucca Mountain, Nevada, where PFDHA helped assess the safety of a proposed nuclear waste repository. More recently, PFDHA approaches have been adopted internationally, including in Japan and Italy, for assessing risks to dams, tunnels, and other critical infrastructure.

What are some of the most exciting recent developments in this field?

These photos show how earthquakes can damage critical infrastructure such as bridges, dams, railways, and pipelines. The images highlight both principal and distributed fault ruptures, underscoring why engineers and planners must consider both when assessing earthquake hazards. Credit: Valentini et al. [2025], Figure 4

Recent years have seen major advances thanks to new global databases such as the worldwide and unified database of surface ruptures (SURE) and the Fault Displacement Hazard Initiative (FDHI), which collect tens of thousands of observations of past surface ruptures. Remote sensing techniques now allow scientists to map fault ruptures with unprecedented detail. Importantly, these techniques have also awakened the geological and seismological community to the relevance of moderate earthquakes. Since the 2000s and 2010s, it has become clear that earthquakes smaller than magnitude 6.5 can also produce significant surface ruptures, a threat that was often overlooked before these technological advances. Additionally, international collaborations, such as the International Atomic Energy Agency benchmark project, are helping to unify approaches and ensure that PFDHAs are robust and reproducible across different regions.

What are the major unsolved or unresolved questions and where are additional research, data, or modeling efforts needed?

Several challenges remain. A key issue is the limited number of well-documented earthquakes outside North America and Japan, leaving other regions underrepresented in global databases. Another challenge is how to model complex, multi-fault ruptures, which are increasingly observed in large earthquakes. Understanding the controls on off-fault deformation, as revealed by modern geodetic techniques during large to moderate events, is another critical open question. This knowledge could improve our ability to predict rupture patterns and displacement amounts.

Similarly, the role of near-surface geology in controlling the location, size, and distribution of surface ruptures for a given earthquake magnitude remains poorly constrained and deserves further study. Standardizing terminology and methods is also essential for consistent hazard assessments. Looking forward, more high-quality data, integration of physics-based models, and improved computational frameworks will be crucial to advance the field.


—A. Valentini (alessandro.valentini@univie.ac.at, 0000-0001-5149-2090), University of Vienna, Austria; Francesco Visini (0000-0001-9582-6443), Istituto Nazionale di Geofisica e Vulcanologia, Italy; Paolo Boncio (0000-0002-4129-5779), Università degli Studi “G. d’Annunzio,” Italy; Oona Scotti (0000-0002-6640-9090), Autorité de Sûreté Nucléaire et de Radioprotection, France; and Stéphane Baize (0000-0002-7656-1790), Autorité de Sûreté Nucléaire et de Radioprotection, France

Editor’s Note: It is the policy of AGU Publications to invite the authors of articles published in Reviews of Geophysics to write a summary for Eos Editors’ Vox.

Citation: Valentini, A., F. Visini, P. Boncio, O. Scotti, and S. Baize (2025), When the earth moves: 25 years of probabilistic fault displacement hazards, Eos, 106, https://doi.org/10.1029/2025EO255033. Published on 17 October 2025. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2025. The authors. CC BY-NC-ND 3.0

Scientists Must Join Forces to Solve Forecasting’s Predictability Desert

Fri, 10/17/2025 - 11:55

Should I wear a jacket to work today, or will I be too warm? Will that hurricane miss my town, or should I prepare to evacuate? We rely on accurate short-term weather forecasts both to make mundane daily decisions and to warn us of extreme events on the horizon. At the same time, Earth system scientists focus on understanding what drives variations in temperature, precipitation, and extreme conditions over periods spanning months, decades, and longer.

Between those two ends of the forecasting spectrum are subseasonal-to-seasonal (S2S) predictions on timescales of 2 weeks to 2 months. S2S forecasts bridge the gap between short-term weather forecasts and long-range outlooks and hold enormous potential for supporting effective advance decisionmaking across sectors ranging from water and agriculture to energy, disaster preparedness, and more. Yet these timescales represent an underdeveloped scientific frontier where our predictive capabilities are weakest. Indeed, the S2S range is often referred to as the predictability desert.

Forecasts at 3- to 4-week lead times, for example, remain inconsistent. Sometimes, so-called windows of opportunity arise when models provide strikingly accurate, or skillful, guidance at this timescale. But these windows of skillful S2S forecasting are themselves unpredictable. Why do they occur when they do? Do they have recognizable precursors? And how does predictability depend on the quantity (e.g., temperature versus precipitation) being predicted?

Three interlocking puzzle pieces represent the integration of weather prediction (left) and long-term outlooks (right) with the “missing middle” of S2S predictability (center). The center piece highlights key applications—agriculture, water availability, and disaster preparedness—and the tools needed to advance S2S skill, including modeling, data assimilation (DA), artificial intelligence (AI), and multiscale process understanding. Credit: Simmi Readle/NSF NCAR

These questions are more than academic curiosities. Answering them would transform our ability to gauge the value of S2S forecasts in real time and to anticipate and respond to high-impact events such as heat waves, flooding rains, drought onset, and wildfires.

Tackling this challenge requires traditionally siloed communities—scientists focused on predicting near-term weather and those focused on projecting long-term changes in the Earth system—to coordinate efforts. Together, these communities can advance scientific understanding and predictive capabilities across scales.

Discovering Windows of Opportunity

The challenges of subseasonal-to-seasonal (S2S) prediction reflect the complex and interconnected dynamics of the Earth system.

The challenges of S2S prediction reflect the complex and interconnected dynamics of the Earth system. At these lead times, forecast skill relies not only on the accuracy of initial input atmospheric conditions—always a vital element for weather forecasts—but also on model treatments of slowly evolving components of the Earth system. These components—including the ocean state, land surface conditions, snow cover, atmospheric composition, and large-scale patterns of variability such as the Madden-Julian Oscillation (MJO), El Niño–Southern Oscillation, stratospheric quasi-biennial oscillation, and sudden stratospheric warmings—interact in ways that enhance or degrade forecast performance. Volcanic eruptions can further influence these interactions, altering circulation patterns and modulating surface climate on S2S timescales.

Researchers have made substantial progress in understanding these individual Earth system components. But we still cannot reliably anticipate when models will yield skillful forecasts because their accuracy at S2S timescales is episodic and state dependent, meaning it comes and goes and depends on various interacting conditions at any given time. A model might perform well for a given region in one season—yielding a window of opportunity—but struggle in another region or season.

So how might we get better at anticipating such windows? For starters, rather than viewing the predictive capability of models as fixed, we can treat it as a dynamic property that changes depending on evolving system conditions. This paradigm shift could help scientists focus on developing tools and metrics that help them anticipate when forecasts will be most reliable. It could also suggest a need to rethink strategies for collecting environmental observations.

Just as predictability is episodic, so too might be the value of strategically enhanced observations. For example, targeted observations of sea surface temperatures, soil moisture, or atmospheric circulation during periods when these conditions strongly influence forecast skill could be far more valuable than the same measurements made at other times. Such adaptive, or state-aware, observing strategies (say, intensifying atmospheric sampling ahead of a developing MJO event) would mean concentrating resources where and when they will matter most. By feeding these strategically enhanced observations into forecast models, scientists could improve both the forecasts themselves and the ability to evaluate their reliability.

Aligning Goals Across Disciplines

S2S timescales fall at the intersection of weather forecasts and seasonal to decadal outlooks, and the communities working on those different types of predictions have different focuses and research strengths.

To drive needed technical advances supporting improved S2S predictability, we also need a cultural shift to remove barriers between scientific disciplines. S2S timescales fall at the intersection of weather forecasts and seasonal to decadal outlooks, and the communities working on those different types of predictions have different focuses and research strengths. Weather prediction emphasizes initial condition accuracy, data assimilation, and high-resolution modeling of fast atmospheric processes. Studying Earth system behavior and variability over longer timescales focuses on modeling slowly evolving boundary conditions (e.g., the ocean) and coupled component interactions (e.g., between the land and the atmosphere).

Historically, these communities have operated along parallel tracks, each with its own institutions, funding structures, and research priorities. The challenge of identifying windows of opportunity at S2S timescales offers a unifying scientific problem.

Earth system features that offer potentially promising signals of S2S predictability, such as the MJO, are already shared terrain, studied through the lenses of both weather and longer-term change. Extreme events are another area of convergence: Weather models focus on forecasting specific short-lived, high-impact events, whereas Earth system models explore the conditions and teleconnections that influence the likelihood and persistence of extremes. Together, these complementary perspectives can illuminate not only what might happen but why and when skillful forecasts are possible.

The path to unlocking S2S predictability involves more than simply blending models, though. It requires aligning the communities’ scientific goals, model performance evaluation strategies, and approaches for dealing with uncertainty. These approaches include the design of model ensembles, data assimilation strategies that quantify uncertainty in initial conditions, probabilistic evaluation methods, and ways of communicating forecast confidence to users.
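As one concrete example of the probabilistic evaluation methods mentioned above, the Brier score measures the accuracy of probability forecasts of a binary event (say, "weekly rainfall above normal"); the forecasts and outcomes below are invented for illustration:

```python
# Toy illustration of the Brier score, a standard probabilistic
# verification metric. Lower is better; 0 is a perfect forecast.

def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probability and the
    observed outcome (0 or 1)."""
    pairs = list(zip(forecast_probs, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# Sharp, well-calibrated forecasts of four events...
good = brier_score([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
# ...versus always hedging at a 50% climatological baseline:
hedged = brier_score([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])

print(f"confident forecasts: {good:.3f}")
print(f"climatology only:    {hedged:.3f}")
```

Scoring rules like this are one way the two communities could agree on when an S2S forecast is, in fact, skillful.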

The path forward also entails building modeling systems that capitalize on the weather community’s expertise in initialization and the Earth system modeling community’s insights into boundary forcing and component coupling. Accurate initialization must capture all Earth system components—from soil moisture, ocean heat content, and snow cover, for example, to the state of the atmosphere, including the stratosphere. However, observations and data assimilation for several key variables, especially in the ocean, stratosphere, and other data-sparse regions, remain limited, constraining our ability to represent their influences in prediction systems.

A near-term opportunity for aligning goals and developing models lies in improving prediction of MJO-related extreme rainfall events, which arise from tropical ocean–atmosphere interactions and influence regional circulation and precipitation. This improvement will require that atmospheric convection be better represented in models, a long-standing challenge in both communities.

Emerging kilometer-scale models and machine learning offer shared innovation and collaboration spaces. Kilometer-scale models can explicitly resolve convection, validate and refine model parameterizations, and elucidate interactions between large-scale circulation and small-scale processes. Machine learning provides new avenues to emulate convection-permitting simulations, represent unresolved processes, and reduce systematic model errors.

Success with this challenge could yield immediate value for science and decisionmaking by, for example, enabling earlier warnings for flood-prone areas and supporting more informed planting and irrigation decisions in agriculture.

From Forecast Skill to Societal Resilience

The societal need for more skillful S2S prediction is urgent and growing. Communities worldwide are increasingly vulnerable to extreme conditions whose impacts unfold on weekly to monthly timescales. In scenarios such as a prolonged dry spell that turns into drought, a sudden warming trend that amplifies wildfire risk, or a stalled precipitation pattern that leads to flooding, insights from S2S forecasting could provide foresight and opportunities to prepare in affected areas.

Officials overseeing water management, energy planning, public health, agriculture, and emergency response are all seeking more reliable guidance for S2S time frames. In many cases, forecasts providing a few additional weeks of lead time could enable more efficient resource allocation, preparedness actions, and adaptation strategies. Imagine if forecasts could reliably indicate prolonged heat waves 3–4 weeks in advance. Energy providers could prepare for surges in cooling demand, public health officials could implement heat safety campaigns, and farmers could adjust planting or irrigation schedules to reduce losses.

The resilience of infrastructure, ecosystems, and economies hinges on knowing not only what might happen but also when we can trust our forecasts. By focusing on understanding when and where we have windows of opportunity with S2S modeling, we open the door to developing new, intermediate-term forecasting systems that are both skillful and useful—forecast systems that communicate confidence dynamically and inform real-world decisions with nuance.

Realizing this vision will require alignment of research priorities and investments. S2S forecasting and modeling efforts have often fallen between the traditional mandates of agencies concerned with either weather or longer-term outlooks. As a result, the research and operational efforts of these communities have not always been coordinated or sustained at the scale required to drive progress.

Coordination and Collaboration

With growing public attention on maintaining economic competitiveness internationally and building disaster resilience, S2S prediction represents an untapped opportunity space. And as machine learning and artificial intelligence offer new ways to explore predictability with models and to extract meaningful patterns from model outputs, now is the time to advance the needed coordination.

The many use cases for S2S prediction underscore that it isn’t just a scientific challenge, it’s a strategy for achieving resilience and prosperity.

The many use cases for S2S prediction underscore that it isn’t just a scientific challenge, it’s a strategy for achieving resilience and prosperity. We call on a variety of communities and enterprises to collaborate and rally around the challenge of illuminating windows of opportunity in S2S modeling.

Scientists from traditionally distinct disciplines should codesign research strategies to jointly investigate when, where, and why S2S skill emerges. For example, they could examine weather regimes (e.g., the Pacific or Alaska ridges) and their links to modes of variability (e.g., the North Atlantic Oscillation) and leverage data assimilation to better understand how these phenomena evolve across timescales.

The scientific community could also identify and evaluate critical observational gaps that limit progress in modeling and data assimilation. And they could develop strategies to implement adaptive observing approaches that, for example, target soil moisture, surface energy fluxes, and boundary layer profiles to better capture land-atmosphere interactions at S2S timescales. Such approaches would help to fill gaps and advance understanding of key Earth system processes.

Modeling centers could build flexible prediction systems that allow for advanced data assimilation and incorporate robust coupling of Earth system components—drawing from the weather and Earth system modeling communities, respectively—to explore how initial conditions and boundary forcing jointly influence S2S skill. Using modular components—self-contained pieces of code that represent individual Earth system processes, such as atmospheric aerosols and dynamic vegetation—within these systems could help isolate sources of predictability and improve process-level understanding.

To sustain progress initiated by scientists and modeling centers, agencies and funders must recognize S2S prediction as a distinct priority and commit to investing in the needed modeling, observations, and institutional coordination.

Furthermore, it’s essential that scientists, decisionmakers, and end users codevelop forecast tools and information. Close integration among these groups would focus scientific innovation on user-defined needs of what is useful and actionable, allowing scientists to build tools that meet those needs.

S2S forecasting may never deliver consistent skill across all timescales and regions, but knowing when and where it is skillful could make it profoundly powerful for anticipating high-impact hazards. Can we reliably predict windows of opportunity to help solve the predictability desert? Let’s do the work together to find out.

Author Information

Jadwiga H. Richter (jrichter@ucar.edu) and Everette Joseph, National Science Foundation National Center for Atmospheric Research, Boulder, Colo.

Citation: Richter, J. H., and E. Joseph (2025), Scientists must join forces to solve forecasting’s predictability desert, Eos, 106, https://doi.org/10.1029/2025EO250389. Published on 17 October 2025. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2025. The authors. CC BY-NC-ND 3.0

A Flash, a Boom, a New Microbe Habitat

Fri, 10/17/2025 - 11:54

A sizable asteroid impact generally obliterates anything alive nearby. But the aftermath of such a cataclysm can actually function like an incubator for life. Researchers studying a Finnish impact structure found minerals whose chemistry implies that microbes were present roughly 4 million years after the impact. These findings, which were published in Nature Communications last month, shed light on how rapidly microscopic life colonizes a site after an asteroid impact.

A Special Lake

Finland is known for its myriad lakes used by boaters, fishers, swimmers, and other outdoor aficionados. Lake Lappajärvi is a particularly special Finnish lake with a storied past: Its basin was created roughly 78 million years ago when an asteroid slammed into the planet. In 2024, the United Nations Educational, Scientific and Cultural Organization (UNESCO) established a geopark in South Ostrobothnia, Finland, dedicated to preserving and sharing the history of the 23-kilometer-diameter lake and the surrounding region.

“It’s one of the places where you think that life could have started.”

Jacob Gustafsson, a geoscientist at Linnaeus University in Kalmar, Sweden, and his colleagues recently analyzed a collection of rocks unearthed from deep beneath Lake Lappajärvi. The team’s goal was to better understand how rapidly microbial life colonized the site after the sterilizing impact, which heated the surrounding rock to around 2,000°C (3,632°F).

There’s an analogue between this type of work and studies of the origin of life, said Henrik Drake, a geochemist at Linnaeus University and a member of the team. That’s because a fresh impact site contains a slew of temperature and chemical gradients and no shortage of shattered rocks with nooks and crannies for tiny life-forms. A similar environment beyond Earth would be a logical place for life to arise, Drake said. “It’s one of the places where you think that life could have started.”

Microbe-Sculpted Minerals

In 2022, Gustafsson and his collaborators traveled to Finland to visit the National Drill Core Archive of the Geological Survey of Finland.

There, in the rural municipality of Loppi, the team pored over sections of cores drilled from beneath Lake Lappajärvi in the 1980s and 1990s. The researchers selected 33 intervals of core that were fractured or shot through with holes. The goal was to find calcite or pyrite crystals that had formed in those interstices as they were washed with mineral-rich fluids.

“It’s amazing what we can find out in tiny crystals.”

The team used tweezers to pick out individual calcite and pyrite crystals from the cores. Gustafsson and his collaborators then estimated the ages of those crystals using uranium-lead dating and a technique known as secondary ion mass spectrometry to calculate the ratios of various carbon, oxygen, and sulfur isotopes within them. Because microbes preferentially take up certain isotopes, measuring the isotopic ratios preserved in minerals can reveal the presence of long-ago microbial activity and even identify types of microbes. “We see the products of the microbial process,” Drake said.

“It’s amazing what we can find out in tiny crystals,” Gustafsson added.
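The isotopic ratios behind such measurements are conventionally reported in "delta" notation relative to a reference standard. Here is a minimal sketch of that convention, using a commonly cited value for the VPDB carbon standard and invented sample ratios (these are not the study's measurements):

```python
# Delta notation: delta = (R_sample / R_standard - 1) * 1000, in per mil.
# The standard ratio below is a commonly cited 13C/12C value for the
# VPDB reference; the sample ratios are made up for illustration.

VPDB_13C_12C = 0.011180

def delta_per_mil(sample_ratio, standard_ratio=VPDB_13C_12C):
    """Convert a raw isotope ratio to a per mil delta value."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# Microbial processes fractionate isotopes, so, e.g., carbonate influenced
# by methane cycling can be strongly depleted in 13C (very negative delta):
depleted = delta_per_mil(0.010620)  # hypothetical microbially influenced calcite
baseline = delta_per_mil(0.011180)  # identical to the standard, so 0 per mil

print(f"delta13C, depleted sample:  {depleted:.1f} per mil")
print(f"delta13C, standard-like:    {baseline:.1f} per mil")
```

It is anomalies like these, preserved in tiny crystals, that let the team infer which metabolisms were active long ago.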

The researchers also used isotopic ratios of carbon, oxygen, and sulfur to estimate local groundwater temperatures in the distant past. By combining their age and temperature estimates, the team could trace how the Lake Lappajärvi impact site cooled over time.

A Slow Cool

Groundwater temperatures at Lake Lappajärvi had cooled to around 50°C (122°F) roughly 4 million years after the impact, the team found. That’s a far slower cooling rate than has been inferred for other similarly sized impact craters, such as Ries Crater in Germany, in which hydrothermal activity ceased after about 250,000 years, and Haughton Crater in Canada, where such activity lasted only about 50,000 years.

“Four million years is a very long time,” said Teemu Öhman, an impact geologist at the Impact Crater Lake–Lappajärvi UNESCO Global Geopark in South Ostrobothnia, Finland, not involved in the research. “If you compare Lappajärvi with Ries or Haughton, which are the same size, they cooled way, way, way faster.”
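For a sense of scale, a toy Newtonian cooling model (not the thermal modeling behind these estimates; the ambient groundwater temperature is an assumed value) shows the e-folding timescale implied by cooling from roughly 2,000°C to about 50°C over 4 million years:

```python
import math

# Toy Newtonian cooling: T(t) = T_ambient + (T0 - T_ambient) * exp(-t / tau).
# T0 and T_target come from the article; T_ambient is an assumption.

T0 = 2000.0       # approximate post-impact temperature, deg C
T_ambient = 5.0   # assumed ambient groundwater temperature, deg C
T_target = 50.0   # temperature reached ~4 Myr after impact, deg C
t = 4.0e6         # years

# Solve the cooling law for the e-folding time tau:
tau = t / math.log((T0 - T_ambient) / (T_target - T_ambient))
print(f"implied e-folding time: {tau:.3g} years")
```

Even in this crude sketch, the implied timescale is on the order of a million years, far beyond the tens to hundreds of thousands of years inferred for Ries and Haughton.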

That difference is likely due to the type of rocks that predominate at the Lappajärvi impact site, Gustafsson and his collaborators proposed. For starters, there’s only a relatively thin layer of sedimentary rock at the surface. “Sedimentary rocks often don’t fully melt during impact because of their inherent water and carbon dioxide content,” Drake explained. And Lappajärvi has a thick layer of bedrock (including granites and gneisses), which would have melted in the impact, sending temperatures surging to around 2,000°C, earlier research estimated.

About 4 million years after the impact is also when microbial activity in the crater began, according to Gustafsson and his collaborators. Those ancient microbes were likely converting sulfate into sulfide, the team proposed. And roughly 10 million years later, when temperatures had fallen to around 30°C (86°F), methane-producing microbes appeared, the researchers surmised on the basis of their isotopic analysis of calcite.

In the future, Gustafsson and his colleagues plan to study other Finnish impact craters and look for similar microbial features in smaller and older impact structures. In the meantime, the team is carefully packaging up their material from the Lappajärvi site. It’s time to return the core samples to the Geological Survey of Finland, Drake said. “Now we need to ship them back.”

—Katherine Kornei (@KatherineKornei), Science Writer

Citation: Kornei, K. (2025), A flash, a boom, a new microbe habitat, Eos, 106, https://doi.org/10.1029/2025EO250388. Published on 17 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0

Tectonics and Climate Are Shaping an Alaskan Ecosystem

Thu, 10/16/2025 - 13:24
Source: AGU Advances

Increased warming in high-latitude wetlands seems poised to increase the activity of methanogens, or methane-producing microbes. These ecosystems are complex places, however, making outcomes hard to predict.

In new biogeochemical research taking into account tectonic, climatic, and ecological factors affecting the Copper River Delta in Alaska, Buser-Young et al. found that seismic uplift and glacial meltwater have each contributed to changes in microbial metabolism, with the surprising effect of potentially decreasing methane production.

The Copper River Delta in south central Alaska has a history of large seismic events. That includes, most recently, a 1964 earthquake that lifted portions of the delta as much as 3.4 meters above sea level, turning much of it from a marine environment to a freshwater one. In more recent decades, increasing amounts of iron-rich glacial runoff have also begun flowing through the delta, the result of climate change.

Combining geochemical studies of sediment cores from six wetland locations in the delta with metagenomic analyses of the microbes in the cores, the authors documented a distinct shift in microbial metabolism. Though genes for methanogenesis are still prevalent, and organic matter is available, they found that in an increasingly freshwater, iron-rich environment, the dominant means of energy production among the microbes shifted to involve iron cycling. Their findings are a demonstration of the ways large-scale geological and climatic shifts can affect small-scale processes such as the dynamics of microbial communities.

Looking ahead, the researchers say analyzing deeper sediment core samples could provide more information about how microbial dynamics have changed over time. In addition, they say, further culture-based experiments could improve understanding of the relationships between iron and organic matter within the carbon cycle. (AGU Advances, https://doi.org/10.1029/2025AV001821, 2025)

—Nathaniel Scharping (@nathanielscharp), Science Writer

Citation: Scharping, N. (2025), Tectonics and climate are shaping an Alaskan ecosystem, Eos, 106, https://doi.org/10.1029/2025EO250387. Published on 16 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
