Eos
Science News by AGU

Glaciers May Flow into the Ocean More Quickly Than We Think

Tue, 04/14/2026 - 13:03
Source: AGU Advances

Models of glacial flow and retreat rely on estimates of glacial ice viscosity, the measure of the ice’s resistance to flow.

Ice viscosity depends on the stress applied to the glacier. Most ice sheet models use a standard equation to model ice flow that includes the variable n, called the stress exponent. A larger value of n means ice viscosity is more sensitive to changes in stress. For decades, glaciologists have almost exclusively used an assumed n value of 3 in the models they use to predict ice flow.
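That standard equation is Glen's flow law. In one common form (the notation below follows the usual glaciological convention rather than the paper itself), the strain rate $\dot{\varepsilon}$ depends on the deviatoric stress $\tau$ through the stress exponent $n$:

$$\dot{\varepsilon} = A\,\tau^{n}, \qquad \eta_{\mathrm{eff}} = \tfrac{1}{2}\,A^{-1}\,\tau^{\,1-n},$$

where $A$ is a temperature-dependent rate factor and $\eta_{\mathrm{eff}}$ is the effective viscosity. Because the viscosity scales as $\tau^{1-n}$, a larger $n$ means viscosity falls off more steeply as stress increases.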

However, through recent experiments and observations, researchers have found that an n value of 4 may actually better represent the conditions of Earth’s ice sheets and glaciers.

Martin et al. created a model representation of the fast-retreating Pine Island Glacier in West Antarctica. The ice sheet in their model had a true n value of 4, but they ran model projections using both n = 4 and n = 3. That allowed them to observe how their model would incorrectly predict glacial flow and resulting sea level change, given an incorrect n value.

The researchers modeled glacial retreat for 100 years using both n values and two different glacial melting scenarios. They then modeled glacial recovery for another 300 years. Under a moderate scenario, the n = 3 model underestimated glacial retreat by 18% and sea level change contributions by 21%. Under an extreme melting scenario, the n = 3 model underestimated sea level contributions by 35%.

Notably, the disparities in predicted glacial retreat and sea level contributions grew disproportionately from the moderate to the extreme scenario, potentially increasing the uncertainty in current projections of sea level change. The researchers also suggest that the effects of an incorrect n value may be mistakenly attributed to other physical processes in current ice sheet models.

The results could have far-reaching implications for predictions of future glacial melt and may prompt investigations into its effects on sea level, the authors say. (AGU Advances, https://doi.org/10.1029/2025AV001946, 2026)

—Madeline Reinsel, Science Writer

Citation: Reinsel, M. (2026), Glaciers may flow into the ocean more quickly than we think, Eos, 107, https://doi.org/10.1029/2026EO260107. Published on 14 April 2026. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

On the Seattle Fault, the Biggest Quakes Aren’t the Most Likely

Tue, 04/14/2026 - 13:02

In the winter of 923 CE, a magnitude 7.5 earthquake struck the heart of Puget Sound. Shorelines slid into the water, the seafloor rose up, and a tsunami swept through the region.

The Seattle fault zone, actually a mesh of faults that runs right under its eponymous city, was responsible for this quake. The fault continues to pose one of the deadliest threats to the Pacific Northwest; if a similar quake were to hit today, it would threaten millions of lives and cause billions of dollars in damage.

Two new papers dig into recurrence intervals, or the quiescent periods between earthquakes, for the Seattle fault zone. They offer good news and bad news: One study, published in Geology, found that in the past 11,000 years, the massive 923 event was the only quake of magnitude 7.5 or greater. The other study, published in GSA Bulletin, found that smaller, but still damaging, quakes occur more frequently than previously thought.

The Seattle fault zone runs east-west under the city and the surrounding Puget Sound. Credit: Washington Geological Survey (Washington Department of Natural Resources)

The new research indicates the worst-case scenario of frequent 923-style events is less likely than some scientists thought, said Harold Tobin, a geophysicist at the University of Washington and head of the Pacific Northwest Seismic Network, who was not involved in either study. But researchers also found that “the less worse, but still bad scenarios” are more likely than previously thought.

Meet the Seattle Fault

“For a fault that has had so much attention, there’s so much we still don’t know.”

The Seattle fault zone is a thrust fault system that stretches about 75 kilometers (46 miles) from the foothills of the Cascades east of Seattle to the Hood Canal, which runs along the shores of the Olympic Peninsula to the city’s west, passing under Seattle along the way.

Geologists began rigorously exploring the fault system in the early 1990s, intrigued by gravitational anomalies, uplifted marine terraces (stair-step geological formations along coastlines), and evidence of a roughly 1,000-year-old tsunami. All these features hinted at a major, shallow earthquake on a local fault zone—likely the 923 event.

But “for a fault that has had so much attention, there’s so much we still don’t know,” said Elizabeth Davis, an earthquake geologist at the University of Washington who led the Geology study.

The most pressing questions are how big quakes on the fault get, how often they hit, and, ultimately, what risks the fault poses to people who live in the Puget Sound area.

“It takes some real geologic sleuthing to get at those tough questions,” Tobin said.

Biggest Seattle Fault Quakes Are Rare

Davis focused on the activity of the main fault, which can generate the biggest quakes in the Seattle fault zone complex. It was responsible for the 923 quake. But the existing record went back only about 5,000 years.

“We just don’t know what the recurrence interval for these big quakes is,” Davis said. “We wanted to lengthen the record.”

To do so, Davis and her collaborators turned to marine terraces, the oldest of which date back to the end of the last ice age about 11,000 years ago. The quake in 923 raised terraces by about 8 meters (26 feet), and scientists wanted to look for similar-scale uplift in terraces all around the sound.

The researchers mapped more than 150 terraces around Puget Sound and measured their depths. After accounting for regional slopes, they estimated uplift over time that could have been caused by quakes.

They found that in that 11,000-year period, only the 923 event generated significant uplift. Thick sediment mantles could mask smaller events but not 923-scale quakes, Davis said.

Estimating true recurrence intervals requires knowing the timing of multiple events. But the finding is “not bad news,” she said. It provides some evidence that the recurrence interval is likely not shorter than about 5,000 years.

“That could give us more of a buffer between now and when the next big one like that will happen,” said Stephen Angster, a U.S. Geological Survey geologist who led the GSA Bulletin study.

Smaller, Damaging Quakes Are More Frequent

Angster’s work focused on Seattle’s secondary faults, which are smaller, mostly blind faults (those not visible at the surface) capable of generating damaging earthquakes. Previous work had shown that one of these secondary faults generated a magnitude 6.7 earthquake, highlighting the risk they pose. Angster wanted to explore rupture histories of these secondary faults, particularly whether they could rupture independently from the main fault.

The researchers used a suite of paleoseismic tools, including magnetic data, field and lidar mapping, trenches dug across faults, and geochronology. They studied two newly identified secondary faults that have orientations similar to the main fault.

They found three new earthquakes to add to the region’s seismic history, including the oldest and youngest events in the known record, which were around 11,000 years ago and in the early 1800s, respectively. The earthquakes appear to be evidence of ruptures that occurred independently of the main fault, suggesting that the smaller—but still dangerous—secondary faults should be considered in hazard modeling.

With that lengthened record and the addition of three quakes, the recurrence interval the researchers found was about every 350 years over the past 2,500 years. This timing refined the previous estimate of every several hundred years.

There also appears to be an increase in activity over the past 2,000 years.

“Maybe we should be paying attention to that,” Angster said.

What Happens Next

“There are other earthquakes that aren’t as big but that occur more frequently. Those might not be as catastrophic, but it would be a very bad scenario for Seattle” if such events occurred.

“These are both carefully done studies,” Tobin said. “We now have evidence that the 923 event was the biggest in 11,000 years. But there are other earthquakes that aren’t as big but that occur more frequently. Those might not be as catastrophic, but it would be a very bad scenario for Seattle” if such events occurred.

It’s still to be determined whether the risk from secondary faults will be incorporated into the National Seismic Hazard Model, which includes the 923 quake but not smaller ones along the Seattle fault zone. The secondary faults were left out in previous efforts because they are shorter than the minimum length required to be included and because of uncertainties in their potential rupture magnitude.

—Rebecca Dzombak (@rdzombak.bsky.social), Science Writer

Citation: Dzombak, R. (2026), On the Seattle Fault, the biggest quakes aren’t the most likely, Eos, 107, https://doi.org/10.1029/2026EO260114. Published on 14 April 2026. Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Machine Learning Can Improve the Use of Atmospheric Observations in the Tropics 

Tue, 04/14/2026 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Journal of Advances in Modeling Earth Systems

The purpose of atmospheric data assimilation is to obtain a three-dimensional gridded representation of the fields of the atmospheric state variables (temperature, wind, pressure, etc.) for a specific time based on atmospheric observations. The product of data assimilation, called the analysis, can be used to prepare weather maps and to start model-based weather forecasts. Analyses collected over a long period of time can also be used for research and to monitor variability and changes in the climate.

The main challenges of data assimilation are that observations are not collocated with the grid points of the analysis, and most observations do not measure the variables of interest directly and have errors. For example, satellite-based observations, which form the bulk of the operationally assimilated observations, measure the intensity of electromagnetic waves at the top of the atmosphere, a physical quantity that depends on the atmospheric state in highly complicated ways. The background-error covariance matrix is a key component of a data assimilation system, responsible for spreading information from observations to the unobserved locations and state variables. A good estimate of this matrix is essential to produce analyses in which the fields of the state variables are realistic and consistent with each other. Obtaining such an estimate is particularly challenging for tropical locations, where physics-based knowledge does not lead to a straightforward practical formulation.
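To make the role of the background-error covariance matrix concrete, the standard linear analysis update used in optimal interpolation and variational schemes can be written as follows (a textbook form, not necessarily the exact formulation used in this study):

$$\mathbf{x}_a = \mathbf{x}_b + \mathbf{B}\mathbf{H}^{\top}\left(\mathbf{H}\mathbf{B}\mathbf{H}^{\top} + \mathbf{R}\right)^{-1}\left[\mathbf{y} - H(\mathbf{x}_b)\right],$$

where $\mathbf{x}_b$ is the model background (first guess), $\mathbf{x}_a$ is the analysis, $\mathbf{y}$ is the vector of observations, $H$ is the observation operator (with linearization $\mathbf{H}$) that maps the model state to the observed quantities, $\mathbf{R}$ is the observation-error covariance, and $\mathbf{B}$ is the background-error covariance. It is $\mathbf{B}$, acting on the innovation $\mathbf{y} - H(\mathbf{x}_b)$, that spreads observational information to unobserved grid points and state variables.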

In a new study, Melinc et al. [2026] propose a novel machine learning-based (ML-based) approach to define a background-error matrix that is equally effective in the midlatitudes and tropics. This approach takes advantage of the power of ML to learn quantitative relationships between different state variables at different locations, relationships that are either not known or cannot easily be incorporated into a background-error matrix formulated from physics-based knowledge.

Citation: Melinc, B., Perkan, U., & Zaplotnik, Ž. (2026). A unified neural background-error covariance model for midlatitude and tropical atmospheric data assimilation. Journal of Advances in Modeling Earth Systems, 18, e2025MS005360. https://doi.org/10.1029/2025MS005360

—Istvan Szunyogh, Associate Editor, JAMES

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Wealth and land-cover change govern landslide fatalities on world’s mountains

Tue, 04/14/2026 - 07:12

A new paper by Fidan et al. (2026) demonstrates that wealth and the rate of land-cover change play a key role in determining the occurrence of fatal landslides in mountain areas. These factors are statistically more significant than precipitation and topography.

A fascinating new paper (Fidan et al. 2026 – this paper is both open access and published under a Creative Commons licence – hurrah!) has just been published in the journal Science Advances that explores rates of land-cover (in the paper, the authors use the term land-use – land-cover) change as a factor in determining fatal landslides in mountains globally. I must admit to some degree of personal interest in this paper, although I am neither an author nor a reviewer, as it brilliantly uses the dataset that Melanie Froude and I collated on global landslide fatalities (see Froude and Petley 2018). I’m delighted to see our data being used in this way (and please do contact me if you want a copy of the spreadsheet).

Fidan et al. (2026) explores a range of factors that might influence the occurrence of fatal landslides from the perspective of either increased vulnerability (poorer people may live in more vulnerable locations for example) or increased landslide likelihood (land-cover change might increase the likelihood of a landslide being triggered, for example).

The fascinating result lies in land-cover change. The authors have looked at approximately 60 years of land-cover changes in mountainous areas across 46 countries. Unsurprisingly, there is substantial change, especially in low- and lower-middle–income countries, often involving the loss of forest (which, as a first-order estimate, may buffer against slope failures), although the pattern is of course far more complex. Fidan et al. (2026) find that a key metric is the rate of change of land-cover, and that this is linked to the rate of population growth (perhaps unsurprisingly). Countries with high rates of population growth also show high rates of change of land-cover.

In many ways, the most interesting figure in this study is in the Supplementary Information. It is a complex diagram, but it’s worth more detailed analysis:-

The relationship between the land-cover change rate and the density of fatal landslides for mountain areas around the world. Figure from Fidan et al. (2026), published under a Creative Commons Licence.

The main map (A) shows mountain areas with high rates of land-cover change (orange), high density of fatal landslides (blue), or both (black). The left-hand graph (B) shows the relationship between the landslide density and the rate of change of land-cover – here, higher rates of land-cover change are associated with a higher density of fatal landslides. The right-hand graph shows the same data as in (B), but with each point coloured according to the income level of the country. High-income countries have a lower fatal landslide density. Thus, as the authors conclude, wealth and land-cover change appear to control fatal landslide density.

There is a really surprising element to this study, which I think requires more consideration. I think I should allow the authors themselves to express this finding, from the abstract:-

“Our statistical analyses show that land-use – land-cover changes have a substantially greater influence on the density of fatal landslides and landslide fatalities than physical factors such as topography and precipitation, especially in lower-income countries.”

As landslide researchers, we almost always default to topography and precipitation as being key in landslide occurrence. There are sound reasons for doing so. But statistically, the rate of land-cover change plays a more important role in mountain areas, especially in poorer countries.

This has (or should have) major implications for the way that we consider and manage landslide risk in such areas.

References

Fidan, S. et al. 2026. Wealth and land-cover change govern landslide fatalities on world’s mountains. Science Advances 12, eaec2739. DOI: 10.1126/sciadv.aec2739.

Froude M.J. and Petley D.N. 2018. Global fatal landslide occurrence from 2004 to 2016. Natural Hazards and Earth System Sciences 18, 2161-2181. DOI: 10.5194/nhess-18-2161-2018.


Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Constructive Debate on the Rise of the Tibetan Plateau

Mon, 04/13/2026 - 18:41
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Tectonics

Scientific progress rarely follows a straight path. Instead, it develops through open discussion, critical evaluation, and the testing of new ideas. The exchange between the authors of a recent Tectonics paper and their colleagues illustrates how this process unfolds in modern Earth sciences and provides a valuable example of constructive scientific debate.

At the center of the discussion lies a fundamental question about one of Earth’s most remarkable geological features: how did the Himalaya and the Tibetan Plateau become the highest and largest mountain system on the planet?

In their paper “Raising the Roof of the World: Intra-Crustal Asian Mantle Supports the Himalayan–Tibetan Orogen,” Sternai et al. [2025] address this question using numerical geodynamic modeling. These computer simulations reproduce the physical behavior of large rock masses deep inside the Earth and allow researchers to investigate the long-term evolution of this vast orogenic system.

Their study specifically explores the possibility that, during the collision between the Indian and Asian plates, layers of mechanically strong Asian mantle rock became embedded within the thickened Indian continental crust beneath the Tibetan Plateau. According to this hypothesis, these mantle layers could help sustain the elevation of the Plateau by effectively withstanding stresses over long geological timescales: the Indian crust would provide buoyancy (raising the roof), while the Asian mantle would contribute mechanical strength to support the Himalayan–Tibetan topography.

Hetényi and Cattin disagree with and challenge this interpretation in their Comment. Drawing on a large body of well-established geophysical and geological observations, they argue that the structure beneath southern Tibet is better explained by underthrusting, the process by which the Indian plate slides beneath the Tibetan Plateau. Seismic imaging studies, including receiver-function analyses that use earthquake waves to map subsurface structures, consistently reveal features interpreted as Indian crust and upper mantle extending far north beneath Tibet.

In their Reply, Sternai and colleagues clarify that their models were not intended to accurately reproduce the present-day structure of the region in detail. Instead, they were designed as process-oriented experiments to test whether existing and/or alternative mechanisms for crustal thickening and plateau support are mechanically and rheologically viable.

This exchange highlights an important aspect of contemporary geoscience—observations of Earth’s interior such as seismic images, gravity data, and geological records often allow multiple, non-unique interpretations. Numerical modeling provides a complementary approach by evaluating whether proposed geological mechanisms are physically plausible.

Equally significant is the tone of the discussion itself. The Comment and Reply show how scientists, while strongly disagreeing about interpretations, can maintain a constructive and respectful dialogue. Such an approach fuels scientific advance by encouraging the community to re-examine established assumptions, refine models, and integrate new observations.

Debates like this one, therefore, extend well beyond a specific geological question. They illustrate how scientific understanding advances through the interplay of observations, theoretical reasoning, and modeling experiments.

In this way, the dialogue highlighted here contributes not only to our understanding of the Himalayan–Tibetan mountain system but also to the broader methodology of Earth science.

Citations

Sternai, P., Pilia, S., Ghelichkhan, S., Bouilhol, P., Menant, A., Davies, D. R., et al. (2025). Raising the roof of the world: Intra-crustal Asian mantle supports the Himalayan-Tibetan orogen. Tectonics, 44, e2025TC009057. https://doi.org/10.1029/2025TC009057

Hetényi, G., & Cattin, R. (2026). Comment on “Raising the roof of the world: Intra-crustal Asian mantle supports the Himalayan-Tibetan orogen” by Sternai et al. Tectonics, 45, e2025TC009214. https://doi.org/10.1029/2025TC009214

Sternai, P., Pilia, S., Ghelichkhan, S., Bouilhol, P., Menant, A., Ostorero, L., et al. (2026). Reply to comment by Hetényi and Cattin on: “Raising the roof of the world: Intra-crustal Asian mantle supports the Himalayan-Tibetan orogen”. Tectonics, 45, e2026TC009436. https://doi.org/10.1029/2026TC009436

—Giulio Viola, Editor-in-Chief, Tectonics

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Fixing Baltimore’s Unequal Weather Data Coverage

Mon, 04/13/2026 - 12:37
Source: Community Science

Heat, air pollution, and flooding can affect a city and the health of city residents. Yet few cities have a comprehensive network of weather stations providing accurate measurements of rainfall, humidity, and air temperature across different neighborhoods. Some of this information can be filled in by community members’ personal weather stations, like those connected through Weather Underground. But because of a lack of sensors and inconsistencies in data collection, these types of community networks are often not reliable on their own. Furthermore, most personal weather stations are located in higher-income neighborhoods, with very few in lower-income, underserved neighborhoods.

The same is true in Baltimore, where personal weather stations are more prevalent in higher-income, majority-white neighborhoods around and stretching north from the Inner Harbor but are lacking in lower-income and majority-Black neighborhoods to the west and east. Furthermore, only one National Weather Service sensor is present in the city itself, in the Inner Harbor, and another sensor is located about 12 kilometers (8 miles) away at Baltimore/Washington International Airport.

Waugh et al. describe a partnership among universities, state agencies, and Baltimore residents to build the Baltimore Community Weather Network (BCWN), which addresses the gaps in data coverage around the city. Unlike the patchwork of personal weather stations, the BCWN draws its participants from underserved areas of the city, and those community members are actively involved in data collection and interpretation.

Weather stations are placed in open spaces to avoid obstacles like buildings or trees affecting measurements of temperature, rainfall, or wind. This careful placement is designed to ensure that the data collected are as close as possible to the conditions experienced by actual residents.

BCWN sites are carefully monitored and managed by community members. Baltimore residents are actively involved in data collection, weather station management, and decisionmaking with scientists and local organizations to help promote engagement, education, and community empowerment.

Because Baltimore is not the only U.S. city that has historically lacked accurate weather data coverage, the BCWN system could be applied to other locations—or even used to monitor other environmental exposures, such as air pollution, the authors say. (Community Science, https://doi.org/10.1029/2025CSJ000154, 2026)

—Rebecca Owen (@beccapox.bsky.social), Science Writer

Citation: Owen, R. (2026), Fixing Baltimore’s unequal weather data coverage, Eos, 107, https://doi.org/10.1029/2026EO260108. Published on 13 April 2026. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

How Sediment Magnetism Captures the South Atlantic Anomaly

Mon, 04/13/2026 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Journal of Geophysical Research: Solid Earth

Understanding geomagnetic field variability within the South Atlantic Anomaly (SAA) over the past tens of thousands of years is crucial for reconstructing its origin and anticipating its future evolution.

Liu et al. [2026] present high-resolution paleo- and rock magnetic data from ODP Site 1233, spanning a period of normal secular variation between the Laschamp (~41 ka) and the Norwegian-Greenland Sea excursion (~64.5 ka). Because reliable relative paleointensity (RPI) estimates require a detailed characterization of the magnetic mineral assemblage, the authors thoroughly examine the magnetic carriers and apply a normalization strategy that accounts for their magnetic properties while rescaling amplitudes to a common reference frame.

This approach yields an RPI record that correlates closely with both regional and global paleointensity stacks. Notably, these data reveal exceptionally weak geomagnetic field strengths in the SAA region during a global transition from a high-field to a low-field state. Such behavior suggests that a paleo-SAA may have exerted a dominant influence on global field morphology, remarkably similar to the situation observed today!

Citation: Liu, J., Nowaczyk, N. R., Huang, Y., Luo, X., Wang, H., Han, F., et al. (2026). Paleosecular Variations in the South Atlantic Anomaly Region Over 65–40 ka — Revisiting Site ODP 1233. Journal of Geophysical Research: Solid Earth, 131, e2025JB032061. https://doi.org/10.1029/2025JB032061

—Agnes Kontny, Associate Editor, JGR: Solid Earth

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The 19 March 2026 landslide on Interstate 5 near Bellingham in Washington State, USA

Mon, 04/13/2026 - 06:28

Post based on material kindly provided by Professor Douglas H. Clark of the Geology Department at Western Washington University. Many thanks to Doug for providing this information.

On 19 March 2026, a c.2000 cubic metre rockslide blocked the northbound lanes of Interstate 5 near to Bellingham, WA. The road will not fully reopen until later this week.

On 19 March 2026, a substantial rockslide occurred that blocked Interstate 5 just south of Bellingham in Washington State. The landslide blocked the northbound half of the freeway, which is still closed; the Washington DOT has a decent description of it. Fortunately, no one was killed by the failure. The road is not expected to fully reopen until 16 April 2026. The landslide is at about [48.69293, -122.44423].

There is an excellent gallery of images of the rockslide on the Cascadia daily site. This image, from the WSDOT blog, shows the aftermath of the landslide:-

The aftermath of the 19 March 2026 landslide onto Interstate 5 near to Bellingham, WA. Image from WSDOT.

KOMO News has a good drone video of the clean-up operation:-

The geologic context for the rockfall is that this section of I-5 was cut into the south side of a steep ridge of the Eocene Chuckanut Formation, a thick deposit of freshwater sandstones interbedded with thinner shales and coal beds. As local geotechnical geologist Dan McShane notes on his blog, the sandstone in this area is steeply dipping away from the freeway, but prominent joint sets in the sandstone beds (presumably related to their tortuous folding) make the roadcuts along this section particularly susceptible to rockfall failures. Smaller rockfalls along this stretch caused the DOT to cut the slope back from the freeway to create a rockfall collection zone. Lidar from the Washington DNR lidar portal clearly shows the near-vertical, north-dipping bedding in the bedrock (the red arrow shows the approximate location of the slide):-

LIDAR data from Washington DNR showing the site of the 19 March 2026 landslide onto Interstate 5 near to Bellingham, WA.

A local meteorology professor at the University of Washington noted that this March was the wettest March on record (since before the freeway was built) in Bellingham, which almost certainly contributed to the failure:

Total precipitation from 1 to 29 March 2026 near to Bellingham WA.

Although this particular slide-prone area was largely created by the freeway construction, the Chuckanut Formation has been the source of thousands of historic and prehistoric landslides in the area, including some truly massive valley-blocking landslides further to the east in the Cascade foothills (e.g. https://www.flickr.com/photos/wastatednr/51148697281/). The sheer number of landslides in the county is truly impressive (many involving the Chuckanut Formation):

Mapped landslides to the east of Bellingham, WA. Data from the Washington Geologic Information Portal.

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Artemis II Crew Splashes Down

Sat, 04/11/2026 - 00:12
Research & Developments is a blog for brief updates that provide context for the flurry of news that impacts science and scientists today.

After a week-and-a-half journey to and around the Moon, the Artemis II crew splashed back to Earth off the coast of San Diego at 5:07 p.m. local time (8:07 p.m. ET) on 10 April.

“From the pages of Jules Verne to a modern-day mission to the Moon, a new chapter in the exploration of our celestial neighbor is complete,” said a NASA announcer as the astronauts splashed down. “Integrity’s astronauts, back on Earth.”

In a news conference on 9 April, the day before splashdown, NASA associate administrator Amit Kshatriya described what NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, and Canadian Space Agency (CSA) astronaut Jeremy Hansen will have accomplished upon arriving home.

“They will have traveled 400,000 miles. They will have seen what no living person has seen. They will have tested every system on the spacecraft in the environment it was built for. And they will have given us 10 days of data that will shape every mission that comes after,” he said.

 

In the same news conference, lead flight director Jeff Radigan described the approximately “13 minutes of things that have to go right” prior to splashdown: At 4:53 p.m. local time, the spacecraft entered a 6-minute communications blackout as plasmas formed around the spacecraft in the face of heat reaching 2,200 to 2,760 °C (4,000 to 5,000 °F) and a G level of 3.9. Then, Orion jettisoned its forward bay cover, deployed drogue parachutes at 22,000 feet above Earth, and deployed three more parachutes at 6,000 feet to slow the spacecraft before splashdown.

In their journey to go farther from Earth than humans have ever traveled, the astronauts tested the Orion spacecraft’s life-support, propulsion, and navigation systems; captured images of Earth and the Moon; and conducted several trajectory correction burns.

The world watched as the astronauts on the Orion spacecraft and the International Space Station held a spaceship-to-spaceship call, and as the crew called mission control to request that a lunar crater be named “Carroll” after Commander Reid Wiseman’s late wife, Carroll Wiseman.

When the crew passed behind the Moon on 6 April, they entered a 40-minute planned communication blackout as the lunar surface blocked radio communication with Earth.

“You heard the word[s] ‘together,’ ‘togetherness’ a lot from our crew,” said NASA astronaut Victor Glover, from space, describing the blackout. “I really was hoping that, while we were waiting to get back into contact, that people could just feel that sense of togetherness, that we were all a crew on spaceship Earth.”

—Emily Gardner (@emfurd.bsky.social), Associate Editor

These updates are made possible through information from the scientific community. Do you have a story about science or scientists? Send us a tip at eos@agu.org. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Synergistic Integration of Flood Inundation Modeling Methods

Fri, 04/10/2026 - 17:16
Editors’ Vox is a blog from AGU’s Publications Department.

Flood inundation models are tools that predict where water flows, how deep it gets, how fast it moves and how long it remains during a flood event. But despite recent advances in flood inundation models, some flood modeling paradigms are being used beyond their range of applicability rather than leveraging the strengths of different methods.

A new article in Reviews of Geophysics explores the strengths and limitations of different flood modeling methods and calls for an integrated approach to flood modeling. Here, we asked the authors to give an overview of flood inundation models, the challenges of “siloing,” and future directions for research.

In simple terms, how do flood inundation models work and why are they important?

Flood inundation models take inputs, such as rainfall, ground elevation, river flow, and infrastructure data, and simulate how flooding develops across a given area. In some ways, they function as a replica of the physical world, allowing modelers to approximate how a flood scenario may evolve.

The models matter because they support decisions across a wide range of sectors. Emergency managers use them to plan evacuations and allocate resources. Engineers rely on them to design flood control infrastructure such as levees, bridges, and drainage systems. Regulatory agencies, like FEMA in the United States, use the models to delineate flood zones, which determine where properties are subject to flood risk. Flood inundation models also inform decisions related to public health, agriculture, insurance markets, transportation infrastructure, and environmental management, among many others.

Applications of flood inundation models in different sectors. Credit: Nazari et al. [2026], Figure 1

How have flood inundation models evolved since they first started being developed?

Flood inundation models have evolved significantly over the past century, driven primarily by advances in mathematics, computational power, and data availability. Early models were relatively simple and could only track water moving in one direction along a channel. As the field advanced, models expanded to simulate how floodwater propagates across the broader landscape, including areas far beyond waterbodies.

The availability of high-resolution terrain data, remote sensing, and satellite imagery further transformed the field. Modelers could work with detailed representations of the landscape at regional, continental, and even global scales that were computationally out of reach just decades earlier. High-performance computing made it possible to run complex simulations faster and over much larger areas.

Rather than these different approaches growing together and complementing each other, they increasingly develop in isolation.

More recently, the rise of data-driven approaches, artificial intelligence and machine learning, introduced an additional modeling paradigm, one that learns patterns from observed data rather than solving physical equations. These methods often offer computational efficiency in data-rich environments. However, this rapid diversification has also introduced a challenge. Rather than these different approaches growing together and complementing each other, they increasingly develop in isolation, each evolving within its own methodological boundaries. This divergence and what it means for the future of the field is a defining concern in flood modeling today.

What are the flood inundation modeling methods described in your review article?

Our review groups flood inundation modeling into four broad methods. First are computational models, which are physics-based models that numerically solve equations representing conservation of mass and momentum and are often very robust for representing flood dynamics. Second, with the rise of big data, artificial intelligence and machine learning algorithms proliferated. These methods can be fast and efficient, but they often rely heavily on data, lack physical constraints and offer limited generalizability beyond their training conditions, which is particularly concerning since those “unseen” conditions could be the very extreme events that matter far more than data-rich frequent and milder scenarios. Third are observational and experimental methods, which use field measurements, satellite data, and laboratory studies to describe or analyze flooding; these can help with calibration and validation but usually have limited predictive skill on their own. Fourth are conceptual models, which simplify flood behavior into transparent and efficient rules. These can be useful for planning and broad analyses, but they overlook important hydraulic details.

What is “siloing” in flood inundation modeling and why does it occur?

In our review, “siloing” refers to the tendency of different modeling approaches to evolve independently within their own methodological boundaries, with limited exchange or integration across paradigms. A concern is substantial investment in methods with a limited scope, under the assumption that those methods can ultimately overcome their own simplifications and replace other methods. This has particularly been observed in the push to use data-driven and remote sensing paradigms to replace physics-based models, rather than integrating their strengths. This can happen for several reasons. Different applications demand different levels of accuracy, efficiency, predictive skill, and computing power. Some methods are easier to use or better matched with available data. In other cases, modelers may be more familiar with one method than with alternatives, so they continue refining that method even when another approach could solve part of the problem better. Siloing also grows when simplified methods are adopted for convenience or justified by data limitations and computing power constraints and are gradually treated as full replacements for more physically grounded models.

What are some of the challenges that siloing presents?

Siloing slows progress by underusing the strengths of complementary methods.

Siloing creates both scientific and practical problems. One major challenge is that models may be pushed beyond the scope they were designed for. For example, some simplified or data-driven methods can miss key flood dynamics, such as backwater effects, transient flow behavior, meaning how floods change rapidly over time, or infrastructure controls, yet still be used in consequential decisions. Another problem is that siloing slows progress by underusing the strengths of complementary methods. Siloing also makes it difficult to objectively evaluate model assumptions because each modeling community tends to focus on improving its own methods rather than testing where those methods perform best and where they fall short.

What are the pathways for future research in flood inundation modeling?

The main pathway we propose is synergistic integration of various modeling methods: moving away from developing modeling methods in isolation and toward integrating them so that each method contributes what it does best. This means, for example, using simple or data-driven models to identify where detailed hydrodynamic modeling is most needed, leveraging satellite and field observations to improve other models’ inputs and calibration, and incorporating machine learning in ways that are guided by physical constraints rather than data alone. It also means investing more in physics-based models, experiments, and data collection, such as detailed surveys of ground elevation and physical infrastructure, rather than defaulting to simplification as a substitute for that investment. Advances in high-performance computing make this level of integration increasingly feasible.
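As a toy illustration of the first of those ideas (using a cheap screening model to decide where detailed hydrodynamic modeling is worth running), here is a minimal sketch. It is not from the paper; the function names, threshold, and synthetic data are hypothetical.

```python
# Hypothetical sketch: a crude "bathtub" screening pass flags grid cells that a
# detailed, physics-based hydrodynamic model should then examine in full.
import numpy as np

def flag_cells_for_detailed_model(ground_elev: np.ndarray,
                                  coarse_water_surface: np.ndarray,
                                  margin: float = 0.5) -> np.ndarray:
    """Return a boolean mask of cells within `margin` meters of being inundated."""
    return ground_elev <= coarse_water_surface + margin

# Synthetic 100 x 100 terrain and a coarse (screening-level) water-surface estimate
rng = np.random.default_rng(seed=0)
ground = rng.uniform(0.0, 10.0, size=(100, 100))
coarse_wse = np.full_like(ground, 4.0)

mask = flag_cells_for_detailed_model(ground, coarse_wse)
print(f"Detailed hydrodynamic modeling needed on {mask.mean():.0%} of the domain")
```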

The goal is not to sacrifice physics just to arrive at faster or more convenient approaches.

The goal is not to sacrifice physics just to arrive at faster or more convenient approaches, but to develop actionable models that are physically grounded, reliable across a range of conditions, and informative for the decisions that depend on them. Advances across all these fronts can help close the gap between physical realism and computational efficiency, making integrated modeling not just an aspiration but an achievable practice.

—Behzad Nazari (behzadnazari@gmail.com, 0009-0000-5568-4735), The University of Texas at Arlington, United States; Ebrahim Ahmadisharaf (eahmadisharaf@eng.famu.fsu.edu, 0000-0002-9452-7975), Florida State University: Tallahassee, United States

Editor’s Note: It is the policy of AGU Publications to invite the authors of articles published in Reviews of Geophysics to write a summary for Eos Editors’ Vox.

Citation: Nazari, B., and E. Ahmadisharaf (2026), Synergistic integration of flood inundation modeling methods, Eos, 107, https://doi.org/10.1029/2026EO265015. Published on 10 April 2026. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Lessons from Linking Great Salt Lake Desiccation and Depression

Fri, 04/10/2026 - 14:01

The Great Salt Lake is disappearing. Driven by decades of water diversions for agriculture, development, and mining, as well as by the warming climate, Utah’s famed lake has lost roughly 73% of its volume since 1850, exposing more than 54% of the lake bed.

The ecological and economic consequences of this decline are well documented, with the latter estimated at more than $2 billion in annual losses.

But a more insidious crisis is also rising as the lake vanishes: Dust from the exposed lake bed, picked up and blown by the wind, appears to be having a measurable mental health impact on the state’s residents.

Our recent research established a desiccated lake–to–mental health pathway, linking declining Great Salt Lake water levels to increased concentrations of hazardous, fine-grained particulate matter (PM2.5) in the air and, ultimately, to a higher prevalence of major depressive episodes (MDEs). In this context, lake desiccation acts as a potent threat multiplier. It does not merely create a new environmental hazard; it compounds existing social vulnerabilities, transforming a hydrological crisis into a chronic public health burden.

The water level of the Great Salt Lake dropped substantially over the past several decades, as shown by these composite images taken by Landsat satellites in June 1985 and July 2022. Credit: NASA Earth Observatory, Public Domain

Previous studies documented important parts of this pathway separately, including links between drying lakes and dust or degraded air quality, and broader associations between PM2.5 exposure and mental health outcomes. Our study brought those links together by analyzing and combining information from various open-access, long-standing datasets collected by different agencies to study changing mental health conditions in Utah between 2006 and 2018.

This integration required more than data assembly. It also required a fundamental shift in how scientists from different fields framed the problem and spoke to one another.

The Friction of Interdisciplinary Collaboration

We had to assemble a research team representing a variety of specializations. Once the team formed, we faced immediate barriers regarding language and standards of evidence.

Our study began with a bold hypothesis: Air pollution from the Great Salt Lake might be affecting both physical and mental health. To investigate this idea, we had to assemble a research team representing a variety of specializations across hydrology, atmospheric science, and mental health—a challenging task considering some potential collaborators indicated they thought the research was too speculative or too far outside conventional disciplinary boundaries to pursue.

Once the team formed, we faced immediate barriers regarding language and standards of evidence. An early challenge involved weighing how different disciplines frame the concept of “ground truth.” In the geosciences, ground truth often refers to calibrated physical measurements from, say, a lake gauge, a monitoring station, or a satellite-validated observation. In mental health research, the evidence base often relies on self-reported symptoms, survey-derived prevalence estimates, and clinical interpretations. Bridging those traditions required trust and a shared understanding that no single dataset could capture the full picture.

We also had to reconcile the ways different disciplines consider a phenomenon’s time frame and impact. Physical scientists are trained to notice anomalies, such as sharp spikes in PM2.5 levels and abrupt departures from recognized patterns in climatology. But depression and other mental health disorders are rarely explained by a single environmental event. More often, depression emerges in the context of multiple events and experiences in someone’s life, as well as of genetic vulnerabilities and epigenetic influences. That understanding led us away from focusing only on short-lived pollution extremes and toward metrics that better captured sustained exposures from multiple environmental factors.

A third challenge involved scale. We had to harmonize high-resolution environmental observations with mental health estimates available only at broad geographic and temporal scales (because public health data are necessarily aggregated and deidentified to protect privacy). This integration forced us to consider what kinds of comparisons we could make responsibly and what kinds of claims the data could genuinely support.

Overcoming these research challenges shaped our study in fundamental ways. Geoscientists are accustomed to looking at environmental variables as direct drivers of change, hence the framing of our initial hypothesis. In public health, however, causality is notoriously difficult to prove when multiple confounding variables from socioeconomic status to personal medical history are at play.

We thus reframed our entire approach to address the question of whether an ecological relationship plausibly exists between pollution and depression based on ecosocial models and data on mental illnesses.

This reframing wasn’t just semantic; it changed our analytical methodology. For example, instead of using simple tests of direct cause-and-effect relationships, we needed statistical approaches that could evaluate grouped differences, main effects, and interaction effects across multiple datasets. For this, we used analysis of variance models to test whether social vulnerability modified the relationship between PM2.5 exposure and major depressive episodes—in other words, whether the same pollution burden translated into different mental health outcomes in counties with different levels of vulnerability.
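For readers who want a concrete picture of that kind of interaction test, a minimal sketch follows. It is not the authors' code; the merged county-year table and its column names are hypothetical, and the model is a generic two-way interaction fit summarized with a Type II ANOVA table.

```python
# Minimal sketch of an interaction model: does county social vulnerability (SVI)
# modify the relationship between PM2.5 exceedance days and MDE prevalence?
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical merged county-year panel built from the datasets listed below
df = pd.read_csv("county_year_panel.csv")

# Main effects for exceedance days and SVI category, plus their interaction
model = smf.ols("mde_prevalence ~ exceedance_days * C(svi_category)", data=df).fit()

# Type II ANOVA table: tests the main effects and the interaction term
print(anova_lm(model, typ=2))
```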

Reconciling Incompatible Data

The technical backbone of our study involved merging massive public datasets representing several fields of study:

  • Hydrology: daily lake level and volume measurements at Great Salt Lake collected by the U.S. Geological Survey (USGS)
  • Atmospheric science: daily EPA measurements of PM2.5 concentrations collected by ground stations across each county in Utah, as well as monthly PM2.5 data from NASA’s MERRA-2 (Modern-Era Retrospective Analysis for Research and Applications, version 2) reanalysis product to isolate the contribution to overall PM2.5 levels of Great Salt Lake–derived dust
  • Sociology: the Centers for Disease Control and Prevention (CDC) Social Vulnerability Index (SVI), a county-level measure released biennially that summarizes community vulnerability to external stressors on the basis of factors such as poverty, disability, minority status, housing, and transportation access
  • Mental health: annual, deidentified records of MDE prevalence from the Substance Abuse and Mental Health Services Administration (SAMHSA) harmonized in our analysis to the county level

Figuring out how to use these datasets together presented a significant hurdle because they were never designed to be interoperable.

Figuring out how to use these datasets together presented a significant hurdle because they were never designed to be interoperable and because of temporal and spatial measurement gaps in the datasets. Raw, daily data on fluctuating PM2.5 levels do not easily map onto representations of mental health trends in annual surveys, especially the slow-burning, cumulative experiences of depressive episodes.

We used multiple approaches to solve this incompatibility problem.

We screened the EPA station records of PM2.5 and the MERRA-2 time series for statistical outliers using Z scores. This screening filters out extreme contributions to PM2.5 pollution, such as wildfire-driven spikes, to ensure that any correlations between pollution and MDEs reflected chronic exposure to lake desiccation–derived dust rather than to temporary anomalies.
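A minimal sketch of that screening step, assuming a pandas Series of daily county-level PM2.5 values, might look like the following (the 3-sigma cutoff here is illustrative, not necessarily the threshold used in the study):

```python
# Hypothetical sketch: flag and drop extreme daily PM2.5 values using Z scores,
# so that short-lived spikes (e.g., wildfire smoke) do not dominate the analysis.
import numpy as np
import pandas as pd

def screen_outliers(pm25_daily: pd.Series, z_cutoff: float = 3.0) -> pd.Series:
    """Return the series with days whose |Z score| exceeds the cutoff removed."""
    z_scores = (pm25_daily - pm25_daily.mean()) / pm25_daily.std()
    return pm25_daily[np.abs(z_scores) <= z_cutoff]
```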

We also moved beyond raw particulate concentration data and identified a pollution metric that reflects harm to humans. We looked to two key regulatory benchmark thresholds that are based on extensive scientific evidence linking PM2.5 exposure to serious respiratory and cardiovascular health risks: the EPA’s National Ambient Air Quality Standards 24-hour PM2.5 standard of 35 micrograms per cubic meter and the World Health Organization’s more stringent 24-hour guideline of 15 micrograms per cubic meter. (These thresholds are not specific to mental health outcomes, a gap that points to the need for future work evaluating mental health–relevant PM2.5 thresholds more directly.)

By applying these thresholds to the daily PM2.5 data, we determined the number of exceedance days—days during which the 24-hour average exceeded these safety limits—on a county-by-county basis. This metric allowed us to quantify annual county-level doses of exceedance days. It also created a common denominator with the health surveys, making it possible to statistically compare the occurrence of high dust levels resulting from environmental degradation of the Great Salt Lake to population-level mental health outcomes.
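As a rough sketch of how such exceedance days can be tallied from daily data (not the authors' code; the file and column names are hypothetical, while the two thresholds are the benchmarks cited above):

```python
# Hypothetical sketch: count annual, county-level exceedance days against the
# EPA and WHO 24-hour PM2.5 benchmarks described in the text.
import pandas as pd

EPA_24H_UG_M3 = 35.0  # EPA NAAQS 24-hour PM2.5 standard, micrograms per cubic meter
WHO_24H_UG_M3 = 15.0  # WHO 24-hour PM2.5 guideline, micrograms per cubic meter

daily = pd.read_csv("daily_pm25_by_county.csv", parse_dates=["date"])
daily["year"] = daily["date"].dt.year

exceedance_days = (
    daily.assign(
        exceeds_epa=daily["pm25_24h"] > EPA_24H_UG_M3,
        exceeds_who=daily["pm25_24h"] > WHO_24H_UG_M3,
    )
    .groupby(["county", "year"])[["exceeds_epa", "exceeds_who"]]
    .sum()  # summing booleans counts exceedance days per county and year
    .rename(columns={"exceeds_epa": "epa_exceedance_days",
                     "exceeds_who": "who_exceedance_days"})
)
```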

Detailing a Dose-Response Relationship

The results of our study revealed a concerning dose-response relationship. Mental health outcomes in our analysis came from grouped county-level SAMHSA estimates of MDE prevalence, which we analyzed and classified into five categories of severity ranging from “very low” to “very high.” We found that higher MDE categories were associated with exposure to more PM2.5 exceedance days. Annual average exceedance days rose from about 9.7 days for the very low MDE group to about 21.7 days for the very high group. Seasonal effects were also apparent, with average exceedance days for those in the high MDE group in winter exceeding 35 days.

Salt Lake City sits just southeast of Great Salt Lake. Credit: Ken Lund/Flickr, CC BY-SA 2.0

The frequency of high-pollution exceedance days was highest in Salt Lake County, which is home to Salt Lake City and more than 1.2 million people and lies directly downwind of Great Salt Lake. Duchesne County, farther east but also notably downwind, also had a high frequency of exceedance days.

In many cities, socioeconomic vulnerability is a strong predictor of an area’s pollution exposure. In Utah, looking at a natural rather than human-made source of pollution, we found the opposite.

Another important finding challenged a traditional environmental justice assumption. In many cities, socioeconomic vulnerability—as gauged by the SVI, for example—is a strong predictor of an area’s pollution exposure because lower-income neighborhoods are often located near industrial centers, transportation corridors, and other emissions sources. In Utah, looking at a natural rather than human-made source of pollution, we found the opposite: The most socially vulnerable counties, such as rural San Juan County in the state’s southeast, saw the lowest PM2.5 exposures because they are far from the lake bed.

Yet social vulnerability still mattered. Our interaction model revealed that social vulnerability significantly modified how exposure to PM2.5 lake dust related to mental health outcomes. In plain terms, the model tested whether the relationship between PM2.5 exceedance days and county-level prevalence of MDEs was the same across counties with different levels of social vulnerability.

Although social vulnerability by itself did not directly affect MDE prevalence to a significant extent, it significantly modified the PM2.5-MDE relationship, indicating that for a given level of pollution exposure, more socially vulnerable counties experienced a disproportionately higher prevalence of MDEs. This trend may arise because these populations have less access to protective buffers that shield against dust exposure and its effects, such as high-efficiency air filtration, stable housing, health care, and coping resources to limit outdoor exposure during peak pollution events, than affluent populations do.

Protecting Public Health

Our findings revealed that the desiccation of the Great Salt Lake is not merely an ecological crisis. It is also a compounding public health challenge that demands responses across sectors and scales. Depression is expected to become the world’s largest disease burden by 2030. And it is already more common among the most vulnerable in society, the very populations that will have the hardest time finding protections against climate change.

A few visitors stand along the shoreline of the Great Salt Lake in 2021. Credit: Farragutful/Wikimedia Commons, CC BY-SA 4.0

At the community level, one approach to the challenge is to deploy interventions to shield vulnerable communities. Current air quality alerts are framed mainly around respiratory and cardiovascular health risks. Expanding these systems to include mental health considerations would better reflect the full range of potential harms associated with repeated dust exposure. Beyond alerts, local governments and health departments can also consider targeted interventions to help those least able to avoid exposure. These interventions could include opening indoor clean-air shelters during severe pollution events—much like cooling centers used during heat waves—and subsidizing air filtration systems and home weatherization.

Regionally, public health cannot be separated from hydrological stability. Shielding people from, and treating the symptoms of, dust exposure without addressing the shrinking lake bed of the Great Salt Lake (or other changes in blue spaces) is an incomplete strategy. Reversing the lake’s decline will require difficult conversations among stakeholders about watershed management, including the possibility of reducing consumptive water use and rethinking the balance between immediate gains from continued diversions and longer-term benefits of ecological preservation. Accounting for the compounding costs of public health crises, infrastructure degradation, and lost ecological services suggests that preserving the Great Salt Lake is not simply an environmental priority but also a long-term investment in regional resilience.

This research demonstrates the critical value of long-term, open-access public data infrastructure while also highlighting a major practical barrier: Environmental and health datasets remain difficult to integrate.

On a broader scale, physical scientists, public health researchers, clinicians, policymakers, and others—who each still largely work in silos—must work across disciplines if we are to anticipate, measure, and reduce the cascading risks posed by climate-driven environmental change.

Our capabilities for tracking environmental cascades—from drought to lake bed desiccation or from wildfire to smoke exposure, for example—have grown increasingly precise. What remains far less developed is our ability to translate physical signals into a fuller understanding of the public health burden presented by these cascades. That disconnect limits both understanding and response and points to the need for integrative approaches that treat environmental change and health as connected parts of a system of exposure, vulnerability, and human consequences.

Further, this research demonstrates the critical value of long-term, open-access public data infrastructure while also highlighting a major practical barrier: Environmental and health datasets remain difficult to integrate across temporal and spatial scales. The challenge we faced in aligning daily atmospheric data with annual health surveys underscores the need to improve interoperability across data systems maintained by agencies such as NASA, NOAA, USGS, EPA, CDC, SAMHSA, and others.
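To make the alignment problem concrete, the sketch below (a generic illustration, not the pipeline used in this work; file and column names are hypothetical) collapses daily PM2.5 records to annual county-level exceedance counts so they can be joined to an annual survey table.

# Generic sketch: aggregate daily air quality records to the annual resolution
# of a health survey before merging. File and column names are hypothetical.
import pandas as pd

daily = pd.read_csv("daily_pm25.csv", parse_dates=["date"])  # county, date, pm25
survey = pd.read_csv("annual_mde.csv")                       # county, year, mde_prevalence

# Count the days per county-year on which PM2.5 exceeded the EPA 24-hour
# standard of 35 micrograms per cubic meter.
daily["year"] = daily["date"].dt.year
daily["exceeded"] = daily["pm25"] > 35.0
exceedance = (
    daily.groupby(["county", "year"])["exceeded"]
    .sum()
    .rename("pm25_exceedance_days")
    .reset_index()
)

# Join the annualized exposure metric to the annual survey estimates.
panel = survey.merge(exceedance, on=["county", "year"], how="left")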

Greater alignment across these datasets—for example, through satellite imaging of blue spaces and air quality alongside exposure sampling in regions of concern—would make it easier to connect environmental change with health outcomes. It would also help to translate knowledge of emerging risks into actionable public health strategies to protect the mental and physical health of the residents of Utah and beyond.

Author Information

Maheshwari Neelam (maheshwari.neelam@nasa.gov), Universities Space Research Association and NASA Marshall Space Flight Center, Huntsville, Ala.; and Kamaldeep Bhui, Department of Psychiatry and Wadham College, University of Oxford, Oxford, U.K.

Citation: Neelam, M., and K. Bhui (2026), Lessons from linking Great Salt Lake desiccation and depression, Eos, 107, https://doi.org/10.1029/2026EO260113. Published on 10 April 2026. Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Machine Learning Could Enhance Earth System Modeling

Fri, 04/10/2026 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: AGU Advances

Machine learning (ML)-based models hold great potential to enhance and perhaps transform simulations of Earth’s weather and climate across time scales ranging from synoptic to seasonal, annual, and multidecadal. However, ML-based models should also produce results consistent with the physical laws of the Earth system. While ML-based models have been tested for weather forecasting, it remains uncertain whether they can produce reasonable responses in long-term simulations under forcings relevant across weather to climate time scales. It is therefore essential to evaluate these models broadly across different time scales and to understand how well emerging ML techniques can complement conventional physics-based models.

Chen et al. [2026] perform a series of tests that cover systems at the synoptic scale, at the interannual scale, and under long-term out-of-distribution forcings. This study uses a hybrid model called NeuralGCM, which combines traditional Earth system modeling with ML approaches. In a set of idealized experiments, NeuralGCM performs similarly to conventional physics-based Earth system models. However, some limitations were found in simulating extratropical cyclone strength, atmospheric wave responses, and stratospheric warming and circulation responses. In general, combining ML with established physics-based modeling represents a promising path toward weather and climate analyses that require less computing time.

Schematic diagram summarizing NeuralGCM and Earth system models. The panels illustrate the core structure of the NeuralGCM model and a simplified representation of processes included in an ensemble of analyses using an Earth system model. Credit: Chen et al. [2026], Figure 1 (top panels)

Citation: Chen, Z., Leung, L. R., Zhou, W., Lu, J., Lubis, S. W., Liu, Y., et al. (2026). Hierarchical testing of a hybrid machine learning-physics global atmosphere model. AGU Advances, 7, e2025AV002075. https://doi.org/10.1029/2025AV002075

—Don Wuebbles, Editor, AGU Advances

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Fatal landslides in March 2026

Fri, 04/10/2026 - 10:29

In March 2026 I recorded 61 fatal landslides causing 520 fatalities, the highest March total on record.

This is my regular update on the number of fatal landslides globally, focusing on March 2026. As usual, this data has been collected in line with the methodology described in Froude and Petley (2018) and in Petley (2012). References are listed below – please cite these articles if you use this analysis. Data presented in these updates should be treated as provisional at this stage.

The headline figures are as follows:

March 2026: 61 fatal landslides causing 520 fatalities;

This is a very surprising total once again – 61 fatal landslides is the highest March total in my long-term dataset; the previous record was 49 events, in 2024. The baseline mean (2004-2016) is c. 23 fatal landslides.

Loyal readers will know that my preferred way to present the annual data is as the cumulative total number of fatal landslides calculated in pentads (five-day blocks). To make this easier to interpret, I have converted the pentads into day numbers through the year (so 1 January is day number 1 and 31 December is day number 365).
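As an illustration only (this is not my processing code, and the file and column names are made up), the pentad binning and day-number conversion can be done along the following lines in Python.

# Illustrative sketch: bin fatal-landslide events into pentads (five-day blocks)
# and convert each pentad to the day of year on which it ends, so that
# cumulative totals can be plotted against day number (1 January = day 1).
import pandas as pd

events = pd.read_csv("fatal_landslides_2026.csv", parse_dates=["date"])  # hypothetical file

day_of_year = events["date"].dt.dayofyear
events["pentad"] = (day_of_year - 1) // 5 + 1    # pentad 1 = days 1-5, ..., pentad 73 = days 361-365
events["pentad_end_day"] = events["pentad"] * 5  # pentad 1 -> day 5, pentad 73 -> day 365

cumulative = (
    events.groupby("pentad_end_day")
    .size()
    .cumsum()
    .rename("cumulative_fatal_landslides")
)
print(cumulative.tail())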

This is the data for 2026 to the end of March:-

The cumulative total number of fatal landslides through to March 2026, plotted with the long term mean number and the exceptional year of 2024 for comparison.

The factors driving this very high level of recorded fatal landslides are not clear to me at this point. Perhaps it is a change in the quality of the information I’m collating, although this seems unlikely to be the sole cause. Perhaps it is associated with the rapid degradation that is occurring in mountain areas (more on this to come). Perhaps it is the result of climate change. Interestingly, March 2026 was exceptionally warm globally compared with the long-term record, but it was “only” the fourth-warmest March on record. March 2024 was the warmest on record.

This all requires more detailed analysis, which I have yet to do. But, at the moment, 2026 is proving to be a bad year for fatal landslides. A major caveat, though, is that the early months of the year are not a good predictor of what might happen through the Northern Hemisphere summer months, when landsliding is driven mainly by the southwest monsoon in South Asia, the summer monsoon in East Asia, and patterns of tropical cyclones.

References

Froude, M. and Petley, D. N. 2018. Global fatal landslide occurrence from 2004 to 2016. Natural Hazards and Earth System Sciences 18, 2161-2181.

Petley, D. N. 2012. Global patterns of loss of life from landslides. Geology 40(10), 927-930.

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Alaska’s Wildfires Heat the Planet, but Canada’s Cool It

Thu, 04/09/2026 - 12:37

When it comes to wildfires, the story may seem straightforward: As forests burn, they release greenhouse gases like carbon dioxide, carbon monoxide, and methane that warm the planet. But in the far northern parts of North America, wildfires don’t always follow the same script.

In a new study published in Nature Geoscience, researchers found that forest fires in Alaska tend to have a warming effect on Earth’s atmosphere, but those in western Canada can contribute to net cooling.

“The most surprising aspect is that if you take away this permafrost component, fires in general in Alaska would switch” from a net warming to cooling effect.

Geography and permafrost help explain the discrepancy. When forest fires burn in Alaska, they not only burn the forest but also thaw permafrost. Both of these phenomena release carbon into the atmosphere. Northern Canada also has permafrost, and blazes there also burn trees and the soil layer that anchors them. However, as reported in an influential 2006 study, these fires are more likely to leave behind open spaces that can be blanketed by bright snow in winter. This brighter surface reflects more sunlight, triggering a net cooling effect.

“The most surprising aspect is that if you take away this permafrost component, fires in general in Alaska would switch” from a net warming to cooling effect, said Max J. van Gerrevink, a climate scientist at Vrije Universiteit Amsterdam in the Netherlands and lead author of the study.

Missing Permafrost

Van Gerrevink’s research builds on the landmark 2006 study, which provided an innovative approach to assessing the climate-warming potential of boreal wildfires but didn’t address a key contributing factor: carbon emissions from permafrost. This exclusion meant that while the 2006 finding held true for some boreal regions, it couldn’t be generalized across the board.

“We know that there’s more carbon released than was actually implemented in that study,” van Gerrevink said.

Van Gerrevink and his team analyzed satellite data for all wildfires in Alaska and western Canada from 2001 to 2019. They accounted for possible warming processes, such as greenhouse gases released during a fire and permafrost thawing after a fire. They also considered possible cooling processes, including snow-covered landscapes and atmospheric aerosols reflecting sunlight, as well as forest regrowth absorbing carbon dioxide.

“We also trained models, first on historical climate data making the models quite robust and then substituting climate data with future projections,” van Gerrevink added.

They found that even a small number of fires that burn intensely and thaw the carbon-rich permafrost can have a large warming effect. Importantly, as the climate warms and snow cover declines, even fires that currently have a cooling effect may increasingly shift toward warming in the future.

A 360° View of Wildfires

“Every fire is really ecosystem dependent. When a fire burns, it’s going to burn differently depending on what the surrounding ecosystem structure is,” said Kimberley Miner, an Earth scientist at the NASA Jet Propulsion Laboratory who was not involved in the study. “What this study is pointing out is that’s true in the Arctic too.”

In the new paper, van Gerrevink and his coauthors found that “climate-warming fires occur preferentially in dry, high-elevation, steep permafrost landscapes,” while “climate-cooling fires are driven by longer spring snow exposure and occur more frequently in continental regions near the tree line.”

“I think the study motivates us to think of fires as being more complex than [just] good or bad.”

Dense permafrost layers in some areas of the Northern Hemisphere, Miner explained, mean “we have to think about fires in a really different way, in a much more complete, almost 360° way—not just what’s happening aboveground,” but below the surface too.

Christopher Williams, an Earth system scientist at Clark University in Worcester, Mass., who also was not involved with the study, said its consideration of the relationship between permafrost and wildfire-related emissions could reshape the way scientists think about the ecological effects of fires.

“I think the study motivates us to think of fires as being more complex than [just] good or bad,” he said.

—Saugat Bolakhe (@scigat.bsky.social), Science Writer

Citation: Bolakhe, S. (2026), Alaska’s wildfires heat the planet, but Canada’s cool it, Eos, 107, https://doi.org/10.1029/2026EO260112. Published on 9 April 2026. Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Resolved Storm-Environment Interactions: Linking Local to Global Scales

Thu, 04/09/2026 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Journal of Advances in Modeling Earth Systems

Thunderstorms play a central role in tropical weather: They not only produce local, extreme rainfall but also interact with their environment. These interactions, from local to large scales, can strongly influence both the mean climate and its variability. A new generation of kilometer-scale Global Storm-Resolving Models (GSRMs) is expected to represent these multiscale processes more realistically by explicitly resolving deep convection. Understanding how storms interact with environmental moisture and temperature, and how these interactions shape the climate system’s internal variability, remains a central challenge for GSRMs.

In a new study, Takasuka et al. [2026] analyze multiyear simulations from three GSRMs (ICON, IFS, and NICAM) to examine how these next-generation models represent convective storms and how these representations relate to their different approaches to modeling convection. Although the models capture the timing of peak rainfall over the ocean well, they tend to simulate storms that are too numerous and too small. Moreover, the models differ in the lifecycle of convection, particularly in the transition from shallow to deep convection and in the storage of atmospheric moisture, resulting in different large-scale mean states (e.g., precipitation) and variability (e.g., the Madden-Julian Oscillation).

The study highlights how mesoscale coupling between convection and the thermodynamic environment shapes larger-scale tropical weather and climate characteristics. It also reveals persistent challenges in representing complex storm processes in GSRMs and identifies key areas where a more realistic representation of convection–environment interactions could lead to more reliable simulations.

Time-height evolution of moisture (color shading) and temperature (blue contours) from 48 hours before to 48 hours after the peak of deep convective storm events over the tropical ocean, shown for reanalysis data (a; observational reference) and three kilometer-scale global storm-resolving models: (b) ICON, (c) IFS, and (d) NICAM. Both moisture and temperature are expressed as deviations from the ±48-hour mean. Credit: Takasuka et al. [2026], Figure 6 (a-d)

Citation: Takasuka, D., Becker, T., & Bao, J. (2026). Precipitation characteristics and thermodynamic-convection coupling in global kilometer-scale simulations. Journal of Advances in Modeling Earth Systems, 18, e2025MS005343. https://doi.org/10.1029/2025MS005343

—Jiwen Fan, Editor, JAMES

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Distant Cousins? How Field Work on Earth Could Help Us to Better Understand Titan

Thu, 04/09/2026 - 12:00

While Saturn has 274 confirmed moons in its orbit, its largest moon, Titan, is of particular interest to researchers due to its similarities to Earth. A new article in Reviews of Geophysics explores the geophysical parallels between Earth and Titan, and how scientists could use field work on Earth to learn more about both worlds. Here, we asked the lead author to give an overview of Titan…

Source

Curiosity Stumbles Upon Evidence of Ancient Martian Winds

Wed, 04/08/2026 - 14:50

Researchers have found evidence of a sandstorm on Mars that occurred about 3.6 billion years ago, marking the first time a sandstorm has been recognized in the Martian stratigraphic record. They published their findings in Geology. It’s not that scientists didn’t know that wind once blew on Mars. It does so now, and features on the planet’s surface, like dry riverbeds…

Source

Asteroid Hosts All Ingredients for DNA and RNA

Wed, 04/08/2026 - 12:44

The basic ingredients for life as we know it are common in the cosmos. Scientists are still learning which of those ingredients were present on primordial Earth, and how they combined to make life remains an unsolved mystery. However, many researchers now think many of the molecules necessary for life were already present in the nebula that grew into our solar system, which would mean the…

Source

An Ancient Landscape Beneath the East Antarctic Ice Sheet

Wed, 04/08/2026 - 12:00

Earth’s ice sheets are changing rapidly in response to anthropogenic climate change, and these changes are modulated by their basal topography. Visualizing the landscape that lies beneath the East Antarctic Ice Sheet not only allows glaciologists to improve model projections of future ice sheet change, but also provides a glimpse of a landscape hidden beneath ice. Paxman et al. [2026] used…

Source

Raknehaugen in Norway: an Iron Age memorial to a landslide

Wed, 04/08/2026 - 10:25

An Iron Age burial mound in Norway has been reinterpreted as being a memorial for a catastrophic landslide during a period of climatic instability. There is a very interesting article (Gustavsen 2026) in the European Journal of Archaeology that re-examines an Iron Age mound known as Raknehaugen (Rakni’s Mound) in Norway. This mound has, until now, been interpreted as being the burial…

Source
