Feed aggregator

Synergistic Integration of Flood Inundation Modeling Methods

EOS - Fri, 04/10/2026 - 17:16
Editors’ Vox is a blog from AGU’s Publications Department.

Flood inundation models are tools that predict where water flows, how deep it gets, how fast it moves, and how long it remains during a flood event. Yet despite recent advances, some flood modeling paradigms are being applied beyond their range of applicability rather than in combination with methods whose strengths complement their own.

A new article in Reviews of Geophysics explores the strengths and limitations of different flood modeling methods and calls for an integrated approach to flood modeling. Here, we asked the authors to give an overview of flood inundation models, the challenges of “siloing,” and future directions for research.

In simple terms, how do flood inundation models work and why are they important?

Flood inundation models take inputs, such as rainfall, ground elevation, river flow, and infrastructure data, and simulate how flooding develops across a given area. In some ways, they function as a replica of the physical world, allowing modelers to approximate how a flood scenario may evolve.

The models matter because they support decisions across a wide range of sectors. Emergency managers use them to plan evacuations and allocate resources. Engineers rely on them to design flood control infrastructure such as levees, bridges, and drainage systems. Regulatory agencies, like FEMA in the United States, use the models to delineate flood zones, which determine where properties are subject to flood risk. Flood inundation models also inform decisions related to public health, agriculture, insurance markets, transportation infrastructure, and environmental management, among many others.

Applications of flood inundation models in different sectors. Credit: Nazari et al. [2026], Figure 1

How have flood inundation models evolved since they first started being developed?

Flood inundation models have evolved significantly over the past century, driven primarily by advances in mathematics, computational power, and data availability. Early models were relatively simple and could only track water moving in one direction along a channel. As the field advanced, models expanded to simulate how floodwater propagates across the broader landscape, including areas far beyond waterbodies.

The availability of high-resolution terrain data, remote sensing, and satellite imagery further transformed the field. Modelers could work with detailed representations of the landscape at regional, continental, and even global scales that were computationally out of reach just decades earlier. High-performance computing made it possible to run complex simulations faster and over much larger areas.

Rather than these different approaches growing together and complementing each other, they increasingly develop in isolation.

More recently, the rise of data-driven approaches based on artificial intelligence and machine learning introduced an additional modeling paradigm, one that learns patterns from observed data rather than solving physical equations. These methods often offer computational efficiency in data-rich environments. However, this rapid diversification has also introduced a challenge. Rather than these different approaches growing together and complementing each other, they increasingly develop in isolation, each evolving within its own methodological boundaries. This divergence, and what it means for the future of the field, is a defining concern in flood modeling today.

What are the flood inundation modeling methods described in your review article?

Our review groups flood inundation modeling into four broad methods. First are computational models: physics-based models that numerically solve equations representing conservation of mass and momentum and that are often very robust for representing flood dynamics. Second are data-driven models: artificial intelligence and machine learning algorithms that have proliferated with the rise of big data. These methods can be fast and efficient, but they often rely heavily on data, lack physical constraints, and offer limited generalizability beyond their training conditions, which is particularly concerning because those “unseen” conditions may be the very extreme events that matter far more than the frequent, milder scenarios for which data are abundant. Third are observational and experimental methods, which use field measurements, satellite data, and laboratory studies to describe or analyze flooding; these can help with calibration and validation but usually have limited predictive skill on their own. Fourth are conceptual models, which simplify flood behavior into transparent and efficient rules. These can be useful for planning and broad analyses, but they overlook important hydraulic details.
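To make the physics-based category more concrete: many computational flood models solve forms of the shallow water (Saint-Venant) equations. The one-dimensional channel form below is a standard textbook statement shown for illustration, not an equation reproduced from the review article; A is flow cross-sectional area, Q is discharge, z is water-surface elevation, g is gravitational acceleration, and S_f is the friction slope.

    % 1D Saint-Venant equations (standard form; notation is illustrative)
    \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0
    \qquad
    \frac{\partial Q}{\partial t}
      + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
      + gA\,\frac{\partial z}{\partial x}
      + gA\,S_f = 0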

What is “siloing” in flood inundation modeling and why does it occur?

In our review, “siloing” refers to the tendency of different modeling approaches to evolve independently within their own methodological boundaries, with limited exchange or integration across paradigms. A particular concern is substantial investment in methods with a limited scope, on the assumption that those methods can ultimately overcome their own simplifications and replace other methods. This has been especially visible in the push to use data-driven and remote sensing paradigms to replace physics-based models, rather than integrating their strengths. Siloing can arise for several reasons. Different applications demand different levels of accuracy, efficiency, predictive skill, and computing power. Some methods are easier to use or better matched with available data. In other cases, modelers may be more familiar with one method than with alternatives, so they continue refining that method even when another approach could solve part of the problem better. Siloing also grows when simplified methods are adopted for convenience or justified by data limitations and computing power constraints, and are gradually treated as full replacements for more physically grounded models.

What are some of the challenges that siloing presents?

Siloing slows progress by underusing the strengths of complementary methods.

Siloing creates both scientific and practical problems. One major challenge is that models may be pushed beyond the scope they were designed for. For example, some simplified or data-driven methods can miss key flood dynamics, such as backwater effects, transient flow behavior (how floods change rapidly over time), or infrastructure controls, yet still be used in consequential decisions. Another problem is that siloing slows progress by underusing the strengths of complementary methods. Siloing also makes it difficult to objectively evaluate model assumptions, because each modeling community tends to focus on improving its own methods rather than testing where those methods perform best and where they fall short.

What are the pathways for future research in flood inundation modeling?

The main pathway we propose is synergistic integration of the various modeling methods: moving away from developing methods in isolation and toward integrating them so that each method contributes what it does best. This means, for example, using simple or data-driven models to identify where detailed hydrodynamic modeling is most needed, leveraging satellite and field observations to improve other models’ inputs and calibration, and incorporating machine learning in ways that are guided by physical constraints rather than data alone. It also means investing more in physics-based models, experiments, and data collection, such as detailed surveys of ground elevation and physical infrastructure, rather than defaulting to simplification as a substitute for that investment. Advances in high-performance computing make this level of integration increasingly feasible.

The goal is not to sacrifice physics just to arrive at faster or more convenient approaches.

The goal is not to sacrifice physics just to arrive at faster or more convenient approaches, but to develop actionable models that are physically grounded, reliable across a range of conditions, and informative for the decisions that depend on them. Advances across all these fronts can help close the gap between physical realism and computational efficiency, making integrated modeling not just an aspiration but an achievable practice.

—Behzad Nazari (behzadnazari@gmail.com, 0009-0000-5568-4735), The University of Texas at Arlington, United States; Ebrahim Ahmadisharaf (eahmadisharaf@eng.famu.fsu.edu, 0000-0002-9452-7975), Florida State University, Tallahassee, United States

Editor’s Note: It is the policy of AGU Publications to invite the authors of articles published in Reviews of Geophysics to write a summary for Eos Editors’ Vox.

Citation: Nazari, B., and E. Ahmadisharaf (2026), Synergistic integration of flood inundation modeling methods, Eos, 107, https://doi.org/10.1029/2026EO265015. Published on 10 April 2026. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Yellowstone's magma source may be closer than thought, reshaping hazard models

Phys.org: Earth science - Fri, 04/10/2026 - 16:40
Supereruptions are extremely large volcanic eruptions that eject more than 1,000 cubic kilometers of magma, rock and ash. They are among the most hazardous geological events on Earth and have profound impacts on the environment, climate, and human society. For this reason, understanding the subsurface processes behind supereruptions is essential for improving volcanic hazard assessments and mitigating risks.

Back-to-back Amazon droughts trigger record forest stress

Phys.org: Earth science - Fri, 04/10/2026 - 15:20
Two back-to-back droughts in 2023 and 2024 caused the most severe decline in forest moisture and biomass (the total mass of living vegetation such as leaves, trunks and branches) in the Amazon since 1992, according to a study published in the journal PNAS. And many of the hardest hit areas are unlikely to recover before the next major drought arrives.

The Cascadia Subduction Zone isn't shutting down—but it's more complicated than previously thought

Phys.org: Earth science - Fri, 04/10/2026 - 15:20
Recent seismic imaging off Vancouver Island has revealed something extraordinary: a tear in the subducting oceanic plate beneath the Cascadia Subduction Zone. The finding briefly raised the public's hopes that Cascadia might be "shutting down," potentially lowering earthquake risk in North America's Pacific Northwest.

Unlocking Earth's 4.5-billion-year secret: The case of the missing lead

Phys.org: Earth science - Fri, 04/10/2026 - 14:40
Geoscientists have long relied on different forms of lead to understand Earth's geological history and how it was created over billions of years. However, there is a mystery that has been puzzling scientists for decades: Earth is missing a massive amount of lead that ought to be in the planet's crust, and no one knows where it has gone to.

Glaciers rapidly declining, with extreme losses in 2025

Phys.org: Earth science - Fri, 04/10/2026 - 14:20
Earth's glaciers are continuing to shrink at alarming rates, with new international research revealing that 2025 was among the worst years on record for global ice loss. Published in the Climate Chronicles collection of Nature Reviews Earth & Environment, the study provides the latest global assessment of glacier mass change, showing an accelerating trend driven by rising temperatures.

Lessons from Linking Great Salt Lake Desiccation and Depression

EOS - Fri, 04/10/2026 - 14:01

The Great Salt Lake is disappearing. Driven by decades of water diversions for agriculture, development, and mining, as well as by the warming climate, Utah’s famed lake has lost roughly 73% of its volume since 1850, exposing more than 54% of the lake bed.

The ecological and economic consequences of this decline are well documented, with the latter estimated at more than $2 billion in annual losses.

But a more insidious crisis is also rising as the lake vanishes: Dust from the exposed lake bed, picked up and blown by the wind, appears to be having a measurable mental health impact on the state’s residents.

Our recent research established a desiccated lake–to–mental health pathway, linking declining Great Salt Lake water levels to increased concentrations of hazardous, fine-grained particulate matter (PM2.5) in the air and, ultimately, to a higher prevalence of major depressive episodes (MDEs). In this context, lake desiccation acts as a potent threat multiplier. It does not merely create a new environmental hazard; it compounds existing social vulnerabilities, transforming a hydrological crisis into a chronic public health burden.

The water level of the Great Salt Lake dropped substantially over the past several decades, as shown by these composite images taken by Landsat satellites in June 1985 and July 2022. Credit: NASA Earth Observatory, Public Domain

Previous studies documented important parts of this pathway separately, including links between drying lakes and dust or degraded air quality, and broader associations between PM2.5 exposure and mental health outcomes. Our study brought those links together by analyzing and combining information from various open-access, long-standing datasets collected by different agencies to study changing mental health conditions in Utah between 2006 and 2018.

This integration required more than data assembly. It also required a fundamental shift in how scientists from different fields framed the problem and spoke to one another.

The Friction of Interdisciplinary Collaboration

We had to assemble a research team representing a variety of specializations. Once the team formed, we faced immediate barriers regarding language and standards of evidence.

Our study began with a bold hypothesis: Air pollution from the Great Salt Lake might be affecting both physical and mental health. To investigate this idea, we had to assemble a research team representing a variety of specializations across hydrology, atmospheric science, and mental health—a challenging task, considering that some potential collaborators thought the research was too speculative or too far outside conventional disciplinary boundaries to pursue.

Once the team formed, we faced immediate barriers regarding language and standards of evidence. An early challenge involved weighing how different disciplines frame the concept of “ground truth.” In the geosciences, ground truth often refers to calibrated physical measurements from, say, a lake gauge, a monitoring station, or a satellite-validated observation. In mental health research, the evidence base often relies on self-reported symptoms, survey-derived prevalence estimates, and clinical interpretations. Bridging those traditions required trust and a shared understanding that no single dataset could capture the full picture.

We also had to reconcile the ways different disciplines consider a phenomenon’s time frame and impact. Physical scientists are trained to notice anomalies, such as sharp spikes in PM2.5 levels and abrupt departures from recognized patterns in climatology. But depression and other mental health disorders are rarely explained by a single environmental event. More often, depression emerges in the context of multiple events and experiences in someone’s life, as well as of genetic vulnerabilities and epigenetic influences. That understanding led us away from focusing only on short-lived pollution extremes and toward metrics that better captured sustained exposures from multiple environmental factors.

A third challenge involved scale. We had to harmonize high-resolution environmental observations with mental health estimates available only at broad geographic and temporal scales (because public health data are necessarily aggregated and deidentified to protect privacy). This integration forced us to consider what kinds of comparisons we could make responsibly and what kinds of claims the data could genuinely support.

Overcoming these research challenges shaped our study in fundamental ways. Geoscientists are accustomed to looking at environmental variables as direct drivers of change, hence the framing of our initial hypothesis. In public health, however, causality is notoriously difficult to prove when multiple confounding variables, from socioeconomic status to personal medical history, are at play.

We thus reframed our entire approach to address the question of whether an ecological relationship plausibly exists between pollution and depression, drawing on ecosocial models and data on mental illnesses.

This reframing wasn’t just semantic; it changed our analytical methodology. For example, instead of using simple tests of direct cause-and-effect relationships, we needed statistical approaches that could evaluate grouped differences, main effects, and interaction effects across multiple datasets. For this, we used analysis of variance models to test whether social vulnerability modified the relationship between PM2.5 exposure and major depressive episodes—in other words, whether the same pollution burden translated into different mental health outcomes in counties with different levels of vulnerability.
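For readers who want a concrete picture of what such an interaction test can look like, here is a minimal sketch using Python’s statsmodels library. The file name and column names (pm25_exceedance_days, svi_category, mde_prevalence) are hypothetical stand-ins, not the study’s actual data schema.

    # Hedged sketch: an ANOVA-style model with an interaction term, in the spirit
    # of the analysis described above. Input file and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical county-year table with exposure, vulnerability, and outcome fields.
    df = pd.read_csv("county_year_panel.csv")

    # Model MDE prevalence as a function of PM2.5 exceedance burden, social
    # vulnerability category, and their interaction (does vulnerability modify
    # the exposure-outcome relationship?).
    model = smf.ols(
        "mde_prevalence ~ pm25_exceedance_days * C(svi_category)",
        data=df,
    ).fit()

    # Type II ANOVA table showing main effects and the interaction term.
    print(sm.stats.anova_lm(model, typ=2))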

Reconciling Incompatible Data

The technical backbone of our study involved merging massive public datasets representing several fields of study:

  • Hydrology: daily lake level and volume measurements at Great Salt Lake collected by the U.S. Geological Survey (USGS)
  • Atmospheric science: daily EPA measurements of PM2.5 concentrations collected by ground stations across each county in Utah, as well as monthly PM2.5 data from NASA’s MERRA-2 (Modern-Era Retrospective Analysis for Research and Applications, version 2) reanalysis product to isolate the contribution to overall PM2.5 levels of Great Salt Lake–derived dust
  • Sociology: the Centers for Disease Control and Prevention (CDC) Social Vulnerability Index (SVI), a county-level measure released biennially that summarizes community vulnerability to external stressors on the basis of factors such as poverty, disability, minority status, housing, and transportation access
  • Mental health: annual, deidentified records of MDE prevalence from the Substance Abuse and Mental Health Services Administration (SAMHSA) harmonized in our analysis to the county level

Figuring out how to use these datasets together presented a significant hurdle because they were never designed to be interoperable.

Figuring out how to use these datasets together presented a significant hurdle because they were never designed to be interoperable and because of temporal and spatial measurement gaps in the datasets. Raw, daily data on fluctuating PM2.5 levels do not easily map onto representations of mental health trends in annual surveys, especially the slow-burning, cumulative experiences of depressive episodes.

We used multiple approaches to solve this incompatibility problem.

We screened the EPA station records of PM2.5 and the MERRA-2 time series for statistical outliers using Z scores. This screening filters out extreme contributions to PM2.5 pollution, such as wildfire-driven spikes, to ensure that any correlations between pollution and MDEs reflected chronic exposure to lake desiccation–derived dust rather than temporary anomalies.
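A minimal sketch of that kind of screening, assuming a simple absolute Z-score cutoff of 3 and hypothetical file and column names (the study’s exact cutoff and preprocessing may differ):

    # Hedged sketch: flag extreme daily PM2.5 values with a Z-score test so that
    # short-lived spikes (e.g., wildfire smoke) can be set aside before analysis.
    # The cutoff of 3, the file name, and the column names are assumptions.
    import pandas as pd
    from scipy import stats

    daily = pd.read_csv("daily_pm25_by_county.csv")

    z = stats.zscore(daily["pm25"], nan_policy="omit")
    daily["is_outlier"] = abs(z) > 3

    screened = daily[~daily["is_outlier"]]
    print(f"Removed {daily['is_outlier'].sum()} outlier days out of {len(daily)}")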

We also moved beyond raw particulate concentration data and identified a pollution metric that reflects harm to humans. We looked to two key regulatory benchmark thresholds that are based on extensive scientific evidence linking PM2.5 exposure to serious respiratory and cardiovascular health risks: the EPA’s National Ambient Air Quality Standards 24-hour PM2.5 standard of 35 micrograms per cubic meter and the World Health Organization’s more stringent 24-hour guideline of 15 micrograms per cubic meter. (These thresholds are not specific to mental health outcomes, a gap that points to the need for future work evaluating mental health–relevant PM2.5 thresholds more directly.)

By applying these thresholds to the daily PM2.5 data, we determined the number of exceedance days—days during which the 24-hour average exceeded these safety limits—on a county-by-county basis. This metric allowed us to quantify annual county-level doses of exceedance days. It also created a common denominator with the health surveys, making it possible to statistically compare the occurrence of high dust levels resulting from environmental degradation of the Great Salt Lake to population-level mental health outcomes.
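As a rough sketch of how those county-level exceedance-day counts could be derived from daily data against the EPA (35 micrograms per cubic meter) and WHO (15 micrograms per cubic meter) 24-hour benchmarks, again with hypothetical file and column names:

    # Hedged sketch: count days per county and year on which the 24-hour mean PM2.5
    # exceeds the benchmarks named in the text. The data schema is an assumption.
    import pandas as pd

    EPA_24H = 35.0  # micrograms per cubic meter
    WHO_24H = 15.0  # micrograms per cubic meter

    daily = pd.read_csv("daily_pm25_by_county.csv", parse_dates=["date"])
    daily["year"] = daily["date"].dt.year

    exceedance = (
        daily.assign(
            exceeds_epa=daily["pm25"] > EPA_24H,
            exceeds_who=daily["pm25"] > WHO_24H,
        )
        .groupby(["county", "year"])[["exceeds_epa", "exceeds_who"]]
        .sum()
        .rename(columns={"exceeds_epa": "epa_exceedance_days",
                         "exceeds_who": "who_exceedance_days"})
    )
    print(exceedance.head())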

Detailing a Dose-Response Relationship

The results of our study revealed a concerning dose-response relationship. Mental health outcomes in our analysis came from grouped county-level SAMHSA estimates of MDE prevalence, which we analyzed and classified into five categories of severity ranging from “very low” to “very high.” We found that higher MDE categories were associated with exposure to more PM2.5 exceedance days. Annual average exceedance days rose from about 9.7 days for the very low MDE group to about 21.7 days for the very high group. Seasonal effects were also apparent, with average exceedance days for those in the high MDE group in winter exceeding 35 days.

Salt Lake City sits just southeast of Great Salt Lake. Credit: Ken Lund/Flickr, CC BY-SA 2.0

The frequency of high-pollution exceedance days was highest in Salt Lake County, which is home to Salt Lake City and more than 1.2 million people and lies directly downwind of Great Salt Lake. Duchesne County, farther east but also notably downwind, also had a high frequency of exceedance days.

In many cities, socioeconomic vulnerability is a strong predictor of an area’s pollution exposure. In Utah, looking at a natural rather than human-made source of pollution, we found the opposite.

Another important finding challenged a traditional environmental justice assumption. In many cities, socioeconomic vulnerability—as gauged by the SVI, for example—is a strong predictor of an area’s pollution exposure because lower-income neighborhoods are often located near industrial centers, transportation corridors, and other emissions sources. In Utah, looking at a natural rather than human-made source of pollution, we found the opposite: The most socially vulnerable counties, such as rural San Juan County in the state’s southeast, saw the lowest PM2.5 exposures because they are far from the lake bed.

Yet social vulnerability still mattered. Our interaction model revealed that social vulnerability significantly modified how exposure to PM2.5 lake dust related to mental health outcomes. In plain terms, the model tested whether the relationship between PM2.5 exceedance days and county-level prevalence of MDEs was the same across counties with different levels of social vulnerability.

Although social vulnerability by itself did not directly affect MDE prevalence to a significant extent, it significantly modified the PM2.5-MDE relationship, indicating that for a given level of pollution exposure, more socially vulnerable counties experienced a disproportionately higher prevalence of MDEs. This trend may arise because these populations have less access than affluent populations do to protective buffers that shield against dust exposure and its effects, such as high-efficiency air filtration, stable housing, health care, and coping resources to limit outdoor exposure during peak pollution events.

Protecting Public Health

Our findings revealed that the desiccation of the Great Salt Lake is not merely an ecological crisis. It is also a compounding public health challenge that demands responses across sectors and scales. Depression is expected to become the world’s largest disease burden by 2030. And it is already more common among the most vulnerable in society, the very populations that will have the hardest time finding protections against climate change.

A few visitors stand along the shoreline of the Great Salt Lake in 2021. Credit: Farragutful/Wikimedia Commons, CC BY-SA 4.0

At the community level, one approach to the challenge is to deploy interventions to shield vulnerable communities. Current air quality alerts are framed mainly around respiratory and cardiovascular health risks. Expanding these systems to include mental health considerations would better reflect the full range of potential harms associated with repeated dust exposure. Beyond alerts, local governments and health departments can also consider targeted interventions to help those least able to avoid exposure. These interventions could include opening indoor clean-air shelters during severe pollution events—much like cooling centers used during heat waves—and subsidizing air filtration systems and home weatherization.

Regionally, public health cannot be separated from hydrological stability. Shielding people from, and treating the symptoms of, dust exposure without addressing the shrinking lake bed of the Great Salt Lake (or other changes in blue spaces) is an incomplete strategy. Reversing the lake’s decline will require difficult conversations among stakeholders about watershed management, including the possibility of reducing consumptive water use and rethinking the balance between immediate gains from continued diversions and longer-term benefits of ecological preservation. Accounting for the compounding costs of public health crises, infrastructure degradation, and lost ecological services suggests that preserving the Great Salt Lake is not simply an environmental priority but also a long-term investment in regional resilience.

This research demonstrates the critical value of long-term, open-access public data infrastructure while also highlighting a major practical barrier: Environmental and health datasets remain difficult to integrate.

On a broader scale, physical scientists, public health researchers, clinicians, policymakers, and others—who each still largely work in silos—must work across disciplines if we are to anticipate, measure, and reduce the cascading risks posed by climate-driven environmental change.

Our capabilities for tracking environmental cascades—from drought to lake bed desiccation or from wildfire to smoke exposure, for example—have grown increasingly precise. What remains far less developed is our ability to translate physical signals into a fuller understanding of the public health burden presented by these cascades. That disconnect limits both understanding and response and points to the need for integrative approaches that treat environmental change and health as connected parts of a system of exposure, vulnerability, and human consequences.

Further, this research demonstrates the critical value of long-term, open-access public data infrastructure while also highlighting a major practical barrier: Environmental and health datasets remain difficult to integrate across temporal and spatial scales. The challenge we faced in aligning daily atmospheric data with annual health surveys underscores the need to improve interoperability across data systems maintained by agencies such as NASA, NOAA, USGS, EPA, CDC, SAMHSA, and others.

Greater alignment across these datasets—for example, through satellite imaging of blue spaces and air quality alongside exposure sampling in regions of concern—would make it easier to connect environmental change with health outcomes. It would also help to translate knowledge of emerging risks into actionable public health strategies to protect the mental and physical health of the residents of Utah and beyond.

Author Information

Maheshwari Neelam (maheshwari.neelam@nasa.gov), Universities Space Research Association and NASA Marshall Space Flight Center, Huntsville, Ala.; and Kamaldeep Bhui, Department of Psychiatry and Wadham College, University of Oxford, Oxford, U.K.

Citation: Neelam, M., and K. Bhui (2026), Lessons from linking Great Salt Lake desiccation and depression, Eos, 107, https://doi.org/10.1029/2026EO260113. Published on 10 April 2026. Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Machine Learning Could Enhance Earth System Modeling

EOS - Fri, 04/10/2026 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: AGU Advances

Machine learning (ML)-based models hold great potential to enhance and perhaps transform simulations of the Earth’s weather and climate across the range from synoptic to seasonal to annual to multi-decadal time scales. However, ML-based models should also produce results consistent with the physical laws of the Earth system. While ML-based models have been tested for weather forecasting, it remains uncertain whether they can produce reasonable responses in long-term simulations under forcings relevant across weather to climate time scales. Therefore, it is essential to perform a broad evaluation across different timescales. In addition, it is important to understand how well the emergent ML techniques can complement conventional physics-based models.

Chen et al. [2026] perform a series of tests that cover systems at the synoptic scale, interannual scale, and under long-term out-of-distribution forcings. This study uses a hybrid model called NeuralGCM, which combines traditional Earth system modeling with ML approaches. For a set of idealized experiments, NeuralGCM performs similarly to conventional physics-based Earth system models. However, some limitations were found in simulating extratropical cyclone strength, atmospheric wave responses, and stratospheric warming and circulation responses. In general, the combination of ML with established physics-based modeling represents a promising path forward in achieving weather and climate analyses that require less computing time.

Schematic diagram summarizing the NeuralGCM and Earth System Models. The panels illustrate the core structure of the NeuralGCM model and a simplistic representation of processes included in an ensemble of analyses using an Earth System Model. Credit: Chen et al. [2026], Figure 1 (top panels)

Citation: Chen, Z., Leung, L. R., Zhou, W., Lu, J., Lubis, S. W., Liu, Y., et al. (2026). Hierarchical testing of a hybrid machine learning-physics global atmosphere model. AGU Advances, 7, e2025AV002075. https://doi.org/10.1029/2025AV002075

—Don Wuebbles, Editor, AGU Advances

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Fatal landslides in March 2026

EOS - Fri, 04/10/2026 - 10:29

In March 2026 I recorded 61 fatal landslides causing 520 fatalities, the highest March total on record.

This is my regular update for the number of fatal global landslides, focusing on March 2026. As usual, this data has been collected in line with the methodology described in Froude and Petley (2018) and in Petley (2012). References are listed below – please cite these articles if you use this analysis. Data presented in these updates should be treated as being provisional at this stage.

The headline figures are as follows:

March 2026: 61 fatal landslides causing 520 fatalities;

This is a very surprising total once again – 61 fatal landslides is the highest March total in my long-term dataset – the previous record was 49 events in 2024. The baseline mean (2004-2016) is c.23 fatal landslides.

Loyal readers will know that my preferred way to present the annual data is using the cumulative total number of fatal landslides calculated in pentads (five-day blocks). To make this easier to interpret, I have converted the pentads into day numbers through the year (so 1 January is day number 1, 31 December is day number 365).
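For anyone curious about the bookkeeping, below is a tiny sketch of one plausible pentad-to-day conversion, assuming the standard 73 pentads per year and plotting each pentad at its final day; the exact convention behind my figures may differ.

    # Hedged sketch: map a pentad index (1-73, each a five-day block) to a
    # day-of-year number, assuming each pentad is plotted at its final day.
    def pentad_to_day_of_year(pentad: int) -> int:
        if not 1 <= pentad <= 73:
            raise ValueError("pentad index must be between 1 and 73")
        return pentad * 5  # pentad 1 -> day 5, ..., pentad 73 -> day 365

    # Example: pentad 18 (days 86-90) ends on day 90, i.e. 31 March in a non-leap year.
    print(pentad_to_day_of_year(18))  # 90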

This is the data for 2026 to the end of March:

The cumulative total number of fatal landslides through to March 2026, plotted with the long term mean number and the exceptional year of 2024 for comparison.

The factors that are driving this very high level of recorded fatal landslides are not clear to me at this point. Perhaps it is a change in the quality of information I’m collating, although this seems unlikely to be the sole cause. Perhaps it is associated with the rapid degradation that is occurring in mountain areas (more on this to come). Perhaps it is the result of climate change. Interestingly, March 2026 was exceptionally warm compared to the long term record, globally, but it was “only” the fourth warmest March on record. March 2024 was the warmest on record.

This all requires more detailed analysis, which I have yet to do. But, at the moment, 2026 is proving to be a bad year for fatal landslides. A major caveat though is that the early months of the year are not a good predictor of what might happen through the Northern Hemisphere summer months, driven mainly by the SW monsoon in South Asia, the summer monsoon in East Asia and patterns of tropical cyclones.

References

Froude, M. and Petley, D.N. 2018. Global fatal landslide occurrence from 2004 to 2016. Natural Hazards and Earth System Sciences 18, 2161-2181.

Petley, D.N. 2012. Global patterns of loss of life from landslides. Geology 40(10), 927-930.

Return to The Landslide Blog homepage

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Universal growth of magnetic energy during the nonlinear phase of subsonic and supersonic small-scale dynamos

Physical Review E (Plasma physics) - Fri, 04/10/2026 - 10:00

Author(s): Neco Kriel, Mark R. Krumholz, Patrick J. Armstrong, James R. Beattie, and Jennifer Schober

Small-scale dynamos (SSDs) amplify magnetic fields in turbulent plasmas. Theory predicts nonlinear magnetic energy growth Emag∝tpnl, but this scaling has not been tested across flow regimes. Using a large ensemble of SSD simulations spanning subsonic to supersonic turbulence, we measure linear growt…


[Phys. Rev. E 113, 045208] Published Fri Apr 10, 2026

Theory of beam-driven nonlinear plasma wake and interior waves

Physical Review E (Plasma physics) - Fri, 04/10/2026 - 10:00

Author(s): M. Lamač, P. Valenta, U. Chaulagain, J. Nejdl, D. Čáp, O. Morvai, and S. V. Bulanov

A beam of relativistic charged particles propagating in a plasma can drive plasma electrons to oscillate and together form a wave whose phase velocity matches the velocity of the driving beam. These plasma waves realize state-of-the-art compact accelerators through plasma wakefield acceleration. Her…


[Phys. Rev. E 113, 045209] Published Fri Apr 10, 2026

Investigating solid-fluid phase coexistence in dc plasma bilayer crystals: The role of particle pairing and mode coupling

Physical Review E (Plasma physics) - Fri, 04/10/2026 - 10:00

Author(s): Siddhartha Mangamuri, L. Couëdel, and S. Jaiswal

This article presents a detailed investigation of solid-fluid phase coexistence in a bilayer dusty plasma crystal subjected to varying confinement ring bias voltages in a dc glow discharge argon plasma. Melamine formaldehyde particles were employed to form a stable, hexagonally ordered bilayer cryst…


[Phys. Rev. E 113, 045210] Published Fri Apr 10, 2026

Nonmodal growth and optimal perturbations in magnetohydrodynamic shear flows

Physical Review E (Plasma physics) - Fri, 04/10/2026 - 10:00

Author(s): Adrian E. Fraser, Alexis K. Kaminski, and Jeffrey S. Oishi

In astrophysical shear flows, the Kelvin-Helmholtz (KH) instability is generally suppressed by magnetic tension, provided a sufficiently strong streamwise magnetic field. This is often used to infer upper (or lower) bounds on field strengths in systems where shear-driven fluctuations are (or are not…


[Phys. Rev. E 113, L043201] Published Fri Apr 10, 2026

GeoPVI: a Python Package for Geoscientific Inversion using Parametric Variational Inference

Geophysical Journal International - Fri, 04/10/2026 - 00:00
Summary: In many fields of geoscience, researchers study the Earth’s properties by solving inverse or inference problems. Probabilistic approaches have gained increased attention over the past decade because they address the non-linearity and non-uniqueness properties of many naturally-inspired inverse problems and allow uncertainties in the solutions to be estimated. However, implementing such methods is computationally expensive and requires expertise in inverse and inference theory, high performance computing, and the geoscientific theory to be inverted. This makes the methods inaccessible to many geoscientists. In this paper, we first review the theoretical background of a particular suite of probabilistic algorithms referred to as parametric variational inference (PVI), and introduce GeoPVI, an open-source Python package designed to facilitate the implementation of these methods. With GeoPVI, users can model uncertainties in their geophysical parameter estimates efficiently given their expertise in inverse theory. It differs from sampling-based, non-parametric variational methods in that the probabilistic solution – the posterior or post-inversion probability distribution function that describes uncertainty in the model parameters of interest – is parametrised by explicit mathematical expressions. These expressions allow for the efficient storage and transfer, and for the evaluation of the posterior probability density for any set of parameter values. We demonstrate how to use the package to solve a set of problems, including tomographic imaging using travel time data, full waveform inversion, surface wave dispersion inversion, and vertical electrical sounding. We provide built-in forward functions to simulate first arrival travel times and full acoustic waveform data (in two spatial dimensions), and external forward functions can be incorporated into the package easily. We also demonstrate how to change prior information efficiently post-inversion, using the method of variational prior replacement. Contributions from the community are welcome, to make the package more broadly applicable.
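GeoPVI’s own interface is not reproduced here. Purely as a generic illustration of the parametric variational inference idea the summary describes (fitting an explicitly parametrised distribution to an unnormalised posterior by maximising an evidence lower bound, or ELBO), a minimal, self-contained sketch might look like this:

    # Hedged, generic sketch of parametric variational inference: fit a Gaussian
    # q(theta | mu, sigma) to an unnormalised 1D posterior by maximising a Monte
    # Carlo estimate of the ELBO. This is NOT the GeoPVI API; it only illustrates
    # a posterior approximated by an explicit mathematical expression.
    import numpy as np
    from scipy.optimize import minimize

    def log_unnormalised_posterior(theta):
        # Toy target: a two-component mixture standing in for a geophysical posterior.
        return np.logaddexp(-0.5 * (theta - 1.0) ** 2,
                            -0.5 * ((theta + 2.0) / 0.5) ** 2 + np.log(0.3))

    rng = np.random.default_rng(0)
    eps = rng.standard_normal(2000)  # fixed base samples (common random numbers)

    def negative_elbo(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        theta = mu + sigma * eps                                # reparameterised samples
        expected_log_p = np.mean(log_unnormalised_posterior(theta))
        entropy = 0.5 * np.log(2.0 * np.pi * np.e) + log_sigma  # Gaussian entropy
        return -(expected_log_p + entropy)

    result = minimize(negative_elbo, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
    mu_opt, sigma_opt = result.x[0], np.exp(result.x[1])
    print(f"Approximate posterior: N(mu={mu_opt:.3f}, sigma={sigma_opt:.3f})")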

Uncertainty in Hydrogeophysics: Electrical Resistivity Tomography with Variational Inference

Geophysical Journal International - Fri, 04/10/2026 - 00:00
Summary: Electrical resistivity tomography (ERT) is a widely used and effective tool for hydrogeological investigations. Conventional ERT inversion approaches are based on gradient-based algorithms, which typically provide deterministic optimal solutions that are nevertheless subject to uncertainty. Such uncertainty could have significant impact on hydrogeological interpretation using ERT. Model appraisal is a critical step after inversion; however, conventional appraisal methods are qualitative and thus subjective. To address these limitations, this study introduces a probabilistic variational inference (VI) method, referred to as Stein variational gradient descent (SVGD), to quantify both resistivity distributions and associated uncertainties in ERT inversions. Synthetic examples are conducted to investigate the effects of configurations and noise, and to compare the performance of SVGD with conventional inversion and model appraisal techniques. A field case study and its model validation are also presented to demonstrate the practical advantages of uncertainty quantification in the field. The results indicate that SVGD can effectively reduce artifacts introduced by regularization and provide more comprehensive quantitative insights into subsurface structures compared to conventional approaches. The study also reveals limitations in the interpretation of basic statistics of uncertainty estimates, highlighting the need to examine the entire posterior distributions of parameter values. Additionally, this study demonstrates that the final uncertainty arises from a trade-off among multiple factors, such as geometry of subsurface structures, measurement techniques and data noise levels. Finally, we also discuss some comparisons with other probabilistic frameworks in hydrogeophysics, highlighting its potential to improve uncertainty and probability quantification in ERT, and possible future developments in hydrogeophysical coupled inversion.
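The authors’ ERT implementation is not shown here. As a generic, hedged illustration of the SVGD particle update named in the summary, applied to a toy one-dimensional target:

    # Hedged sketch: the basic Stein variational gradient descent (SVGD) update on
    # a toy 1D standard-normal target. This is not the authors' ERT code; it only
    # illustrates how SVGD moves particles toward a posterior while a kernel
    # repulsion term keeps them spread out, which is what provides the uncertainty estimate.
    import numpy as np

    def grad_log_p(x):
        # Toy target: standard normal, so grad log p(x) = -x.
        return -x

    def rbf_kernel(x):
        # RBF kernel with a median-distance bandwidth heuristic.
        diffs = x[:, None] - x[None, :]                   # diffs[j, i] = x_j - x_i
        sq = diffs ** 2
        h = np.median(sq) / np.log(len(x) + 1.0) + 1e-8
        k = np.exp(-sq / h)
        grad_k = -2.0 * diffs * k / h                     # d k(x_j, x_i) / d x_j
        return k, grad_k

    rng = np.random.default_rng(1)
    particles = rng.normal(loc=5.0, scale=1.0, size=100)  # start far from the target

    step = 0.1
    for _ in range(500):
        k, grad_k = rbf_kernel(particles)
        # Kernel-weighted gradient (attraction) plus kernel gradient (repulsion).
        phi = (k @ grad_log_p(particles) + grad_k.sum(axis=0)) / len(particles)
        particles = particles + step * phi

    print(f"particle mean {particles.mean():.2f}, std {particles.std():.2f}")  # near 0 and 1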

Spatial uncertainty constraints reduce overfitting for potential field geophysical inversion

Geophysical Journal International - Fri, 04/10/2026 - 00:00
Summary: Understanding the internal structure of the Earth is achieved using geophysical data, and inversion is a powerful mathematical technique used by resource explorers to do so. Inherent ambiguity means that an infinite number of petrophysical models exist that can explain the geophysical data, so constraints such as geological models and petrophysical data have been employed to reduce the solution space. The constraints, like the data, are subject to noise and error, resulting in uncertainty propagating to the final model because inversion is designed to use the algorithm and constraints to find the single ‘best’ solution. Current practice assumes the best solution is found by optimising for the lowest misfit between the data and model; however, if the data is uncertain, the model fit to that data is likewise uncertain and potentially misrepresentative. Optimising misfit also means that inversion is subject to overfitting. Overfitting occurs when a model achieves the lowest misfit values by inadvertently fitting to data noise. Overfitting in inversion occurs when the model has too many free parameters with no constraints, resulting in near-surface anomalies that can be mistakenly identified as legitimate targets for exploration rather than model artefacts. This contribution describes the use of spatial uncertainty calculated from geophysical data, providing free parameter constraints to reduce overfitting for geophysical inversion. The spatial uncertainty estimate is taken from a geostatistical model calculated using Integrated Nested Laplace Approximation (INLA). A region in the East Kimberley, northern Western Australia, is subject to gravity inversion using Tomofast-x, an open-source inversion platform. Inversion is conducted using different configurations. Inversion is run without spatial uncertainty constraints, as is current practice, and then with spatial uncertainty constraints to test their effect on the resulting petrophysical model. The geostatistical model offers different percentiles from the geophysical model representing the extrema of estimated gravimetry values in the 10th and 90th percentiles. Inversions are run using these ‘extrema’ alongside the current practice of using the 50th percentile (or ‘mean’) gravity models as the observed field. Examination of inversion using and not using spatial uncertainty constraints shows that overfitting can be reduced. Using the extrema percentiles as the observed field has lesser benefits to reduce overfitting.

Characterization of Full vector Magnetic recording potential of ɛ-Fe2O3 in clinkers

Geophysical Journal International - Fri, 04/10/2026 - 00:00
Summary: The Earth’s ancient magnetic field is challenging to constrain from the rock record in large part due to the presence of non-ideal magnetic recorders in addition to processes, like alteration, that affect the ability of a material to reliably record field strength. Of the magnetic minerals present on Earth’s surface, magnetite is one of the most commonly used to simultaneously recover palaeomagnetic direction and intensity. Recent work on archaeological artifacts and clinker deposits (sedimentary rocks baked by coal seam fires) has identified a potential new mineral capable of recording the full-vector magnetic field: ɛ-Fe2O3, a high-T metastable phase of hematite. The palaeomagnetic potential of ɛ-Fe2O3, specifically regarding palaeointensity, has not been studied in depth. Further, recent work on synthetic ɛ-Fe2O3 has raised questions about the reliability of this phase for palaeointensity recording. To understand whether ɛ-Fe2O3 is a trustworthy full-vector magnetic recorder, more work is needed to assess this phase in its natural form. Here, we present results from Thellier-style palaeointensity experiments using a lab-induced thermoremanent magnetization (TRM) on natural ɛ-Fe2O3 present in Quaternary age clinker samples from the Custer National Forest, Montana, USA. The experimental setup was designed in an attempt to isolate the ɛ-Fe2O3 phase from other magnetic carriers. The results of our study suggest that natural ɛ-Fe2O3 can reliably record palaeointensity and palaeodirections, yielding palaeointensity estimates within 5% and directions consistent with the applied laboratory TRM field. These new results suggest that ɛ-Fe2O3 bearing artifacts and clinkers can be robust full-vector magnetic recorders. Overall, this study adds confidence to previously obtained archaeomagnetic data and to a novel palaeomagnetic recorder, clinkers, opening the door to a more detailed characterization of the recent field.

Rheological heterogeneities of the upper plate control the deformation diversity in different continental collision systems along the Tethyan tectonic belt

Geophysical Journal International - Fri, 04/10/2026 - 00:00
Summary: Continental collision is prevalent along the Tethyan tectonic belt, characterized by diverse deformation patterns across regions, including concentrated deformation in the Alps, integral deformation throughout the Tibetan Plateau, and separate deformation within the Iranian Plateau. However, the mechanisms governing the diversity of deformation in different collisional orogens along the Tethyan tectonic belt remain poorly understood. Accretion of continental terranes during the closure of the Paleo- and Neo-Tethys oceans generated a highly heterogeneous lithosphere along the southern margin of Eurasia, a crucial factor in interpreting continental deformation. This study employs 2D thermo-mechanical numerical modeling to assess how tectonic inheritance-induced rheological heterogeneities govern deformation patterns in continental collision orogens. Our simulation results reveal three end-member deformation patterns resulting from variations in the rheology of the upper plate within the collision system. When the upper plate is uniformly strong, it prevents deformation from propagating into the interior of the continent, resulting in concentrated deformation in the collision front. If the upper plate is uniformly weak, deformation occurs throughout the entire upper plate, resulting in an integral deformation pattern. When a rheologically weak block is embedded in the strong upper plate, deformation concentrates in the collision zone and the weak block, resulting in separate deformation within the upper plate. Changes in the rheology of the bounding plates, the convergence rate, and the total convergence amount would not alter the basic deformation pattern of the continental collision system, if the rheology of the upper plate remains unchanged. Based on our simulation results, we suggest that the rheological characteristics of the upper plate govern the deformation patterns in continental collision systems. Our simulation results provide first-order explanations for the observed diversity of deformation in different continental collision systems along the Tethyan tectonic belt.

Dispersion of Scholte Waves in Horizontally Layered VTI Media

Geophysical Journal International - Fri, 04/10/2026 - 00:00
Summary: The dispersion of Scholte waves provides a fundamental basis for inverting shallow seafloor elastic parameters. With the expansion of marine exploration, an isotropic seabed approximation has become increasingly inadequate. Therefore, in this study, Scholte-wave dispersion was analyzed in vertically transversely isotropic (VTI) media and the sensitivities of key parameters were quantified. Using a reduced delta-matrix formulation, a numerically stable dispersion equation for fluid-solid-coupled VTI media was derived and validated with elastic wavefield modelling and frequency-velocity spectra. Sensitivity tests on three representative seabed models [velocity increasing with depth (VID), a low-velocity layer (LVL), and a high-velocity layer (HVL)] show that anisotropy amplifies phase-velocity sensitivity to P-wave velocity (VP), especially for higher modes. In contrast, sensitivities to Thomsen parameters ε and δ are secondary but non-negligible. As mode order increases, the sensitive frequency band broadens and penetrates to greater depths. For the HVL model, dispersion is particularly sensitive to the overburden above the high-velocity layer. By contrast, for the LVL model, sensitivity concentrates within the low-velocity layer itself and above it. These sensitivity patterns reflect the influences of different parameters on inversion results and support the development of dispersion curve inversion for anisotropic shallow seafloor.

Spatiotemporal correlation-based AI developed for bias correction of atmospheric and oceanic variables

Phys.org: Earth science - Thu, 04/09/2026 - 23:40
Daily travel plans and early warnings for extreme weather all rely on traditional numerical weather prediction. However, both traditional numerical weather prediction and AI forecasting large models have long suffered from systematic biases, which compromise forecast accuracy.
