Feed aggregator

Temperature corrections boost accuracy of coastal ocean color satellites

Phys.org: Earth science - Fri, 10/17/2025 - 16:45
Ocean color satellites provide essential insights into water quality and ecosystem dynamics by estimating chlorophyll, suspended matter, and dissolved organic material. Atmospheric correction, the process of removing scattering and absorption from satellite signals, is central to these analyses.

AI-driven mapping captures daily global land changes

Phys.org: Earth science - Fri, 10/17/2025 - 16:44
Accurate land cover mapping underpins biodiversity protection, climate adaptation, and sustainable land use. Despite advances in remote sensing, satellite-only approaches remain limited by cloud cover, revisit intervals, and the lack of ground-truth data. Dynamic products such as Dynamic World have improved timeliness but still struggle to capture sudden transitions or validate their results.

Universities Reject Trump Funding Deal

EOS - Fri, 10/17/2025 - 16:09
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

The “Compact for Academic Excellence in Higher Education,” developed by the Trump administration and sent to nine universities on 1 October, proposes that the institutions agree to a series of criteria in exchange for preferential treatment in funding decisions.

The compact’s provisions ask universities to: 

  • Ban the consideration of any demographic factors, including sex, ethnicity, race, sexual orientation, and religion, in any admissions decisions, financial aid decisions, or hiring decisions.
  • Commit to “institutional neutrality,” create an “intellectually open campus environment,” and abolish “institutional units that purposefully punish, belittle, and even spark violence against conservative ideas.”
  • Require all employees to abstain from actions or speech related to social and political events unless such events have a direct impact on their university or they are acting in their individual capacity rather than as university representatives. 
  • Interpret the words “woman,” and “man” according to “reproductive function and biological processes.”
  • Stop charging tuition for any admitted student pursuing “hard science” programs. (This only applies to universities with endowments over $2 million per undergraduate student.)
  • Disclose foreign funding and gifts.

The proposed deal was sent to the University of Pennsylvania, the University of Virginia, the University of Arizona, the University of Texas at Austin, the University of Southern California, Vanderbilt University, Dartmouth College, Brown University, and the Massachusetts Institute of Technology. 

 
“Any university that refuses this once-in-a-lifetime opportunity to transform higher education isn’t serving its students or their parents—they’re bowing to radical, left-wing bureaucrats,” Liz Huston, a White House spokesperson, told Bloomberg.

Simon Marginson, a professor of higher education at Oxford University, told Time that if successful, the compact would “establish a level of federal control of the national mind that has never been seen before.” 

On 12 October, President Trump extended the offer to all institutions of higher education in a post on the social media platform Truth Social.

As of 20 October, the following schools have responded to Trump’s offer:

  • Massachusetts Institute of Technology: MIT was the first to reject Trump’s offer. In a 10 October letter to the administration, MIT President Sally Kornbluth wrote that MIT’s practices “meet or exceed many standards outlined in the document,” but that the compact “also includes principles with which we disagree, including those that would restrict freedom of expression and our independence as an institution.”
  • Brown University: In a 15 October letter to the administration, Brown University President Christina H. Paxson declined the deal. She wrote that Brown “would work with the government to find solutions if there were concerns about the way the University fulfills its academic mission,” but that, like Kornbluth, she was “concerned that the Compact by its nature and by various provisions would restrict academic freedom and undermine the autonomy of Brown’s governance.”
  • University of Southern California: In a 16 October statement, USC Interim President Beong-Soo Kim informed the university community that he had declined the deal, and wrote that the university takes legal obligations seriously and is diligently working to streamline administrative functions, control tuition rates, maintain academic rigor, and ensure that students develop critical thinking skills. “Even though the Compact would be voluntary, tying research benefits to it would, over time, undermine the same values of free inquiry and academic excellence that the Compact seeks to promote,” he wrote.
  • University of Pennsylvania: In a 16 October statement, UPenn President J. Larry Jameson informed the university community that he had declined to sign the compact. “At Penn, we are committed to merit-based achievement and accountability. The long-standing partnership between American higher education and the federal government has greatly benefited society and our nation. Shared goals and investment in talent and ideas will turn possibility into progress,” he wrote.
  • University of Virginia: In a 17 October letter to the administration, UVA Interim President Paul Mahoney declined to sign the compact. “We seek no special treatment in exchange for our pursuit of those foundational goals,” the letter said. “The integrity of science and other academic work requires merit-based assessment of research and scholarship. A contractual arrangement predicating assessment on anything other than merit will undermine the integrity of vital, sometimes lifesaving, research and further erode confidence in American higher education.”
  • Dartmouth College: In an 18 October letter to the administration, Dartmouth President Sian Leah Beilock declined the deal. “I do not believe that the involvement of the government through a compact—whether it is a Republican- or Democratic-led White House—is the right way to focus America’s leading colleges and universities on their teaching and research mission,” Beilock wrote.
  • University of Arizona: In a 20 October announcement, President Suresh Garimella said he had declined to agree to the proposal and had instead submitted a Statement of Principles to the U.S. Department of Education informed by “hundreds of U of A stakeholders and partner organizations.” “This response is our contribution toward a national conversation about the future relationship between universities and the federal government. It is critical for the University of Arizona to take an active role in this discussion and to work toward maintaining a strong relationship with the federal government while staying true to our principles,” Garimella wrote.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

20 October: This article was updated to include the University of Virginia and Dartmouth College.

21 October: This article was updated to include the University of Arizona.

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

When the Earth Moves: 25 Years of Probabilistic Fault Displacement Hazards

EOS - Fri, 10/17/2025 - 16:08
Editors’ Vox is a blog from AGU’s Publications Department.

Earthquake surface ruptures can cause severe damage to infrastructure. While preventative measures can be taken to help structures accommodate fault movement during an earthquake, one of the best strategies is to avoid unnecessary risks in the first place.

A new article in Reviews of Geophysics explores the history of Probabilistic Fault Displacement Hazard Assessments (PFDHA) and recent efforts to improve them with modern methods. Here, we asked the authors to give an overview of PFDHAs, how scientists’ methods have evolved over time, and future research directions.

What is fault displacement and what kinds of risks are associated with it?

Fault displacement occurs when an earthquake breaks the ground surface along a fault. This displacement along the fault can shift the ground horizontally and/or vertically, by several meters for the largest earthquakes. Such ruptures pose serious risks to infrastructures located across faults—such as pipelines, transportation systems, dams and power generation facilities—because they may be torn apart or severely damaged. While some facilities can be engineered to tolerate limited movements, many critical systems are highly vulnerable, making it essential to evaluate this hazard.

This figure shows the Trans-Alaska Pipeline crossing the Denali Fault, which ruptured during the 2002 earthquake. Photos and diagrams illustrate how the pipeline was designed to bend and slide, allowing it to survive several meters of fault movement without breaking. Credit: Valentini et al. [2025], Figure 5

In simple terms, what are Probabilistic Fault Displacement Hazard Assessments (PFDHA)?

A Probabilistic Fault Displacement Hazard Assessment (PFDHA) is a quantitative method that estimates the likelihood that an earthquake will rupture the surface at a specific site and anticipates the magnitude of the displacement. Instead of giving a single answer, PFDHA provides probabilities of different displacement levels for different reference periods of interest. This allows engineers and planners to evaluate risks in a structured way and make informed decisions about building designs or land use near faults.

This diagram explains how scientists estimate the expected amount of displacement due to an earthquake at a specific site. It shows the main steps and data used in a Probabilistic Fault Displacement Hazard Assessment (PFDHA). Credit: Valentini et al. [2025], Figure 8
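As a rough illustration of the probabilistic logic described above (not the authors' implementation), the annual rate of exceeding a displacement d at a site can be written as the product of the rate of surface-rupturing earthquakes, the probability that rupture reaches the site, and the probability that displacement exceeds d given rupture. The minimal Python sketch below uses hypothetical values for every parameter.

# Minimal sketch of a PFDHA-style hazard curve; all parameter values are
# hypothetical and chosen only to illustrate the calculation.
import math

annual_eq_rate = 0.01      # assumed rate of surface-rupturing earthquakes on the fault (1/yr)
p_rupture_at_site = 0.3    # assumed probability that rupture reaches the site, given such an event
median_disp_m = 1.0        # assumed median on-fault displacement at the site (m)
sigma_ln = 0.8             # assumed lognormal standard deviation of displacement

def annual_exceedance_rate(d_m):
    """Annual rate at which fault displacement at the site exceeds d_m meters."""
    z = (math.log(d_m) - math.log(median_disp_m)) / sigma_ln
    p_exceed = 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))   # P(D > d | rupture at site)
    return annual_eq_rate * p_rupture_at_site * p_exceed

for d in (0.1, 0.5, 1.0, 2.0):
    rate = annual_exceedance_rate(d)
    print(f"D > {d:3.1f} m: {rate:.2e} per year (return period ~{1.0 / rate:,.0f} yr)")

Engineers can then compare such a curve against the displacement a structure is designed to tolerate over a chosen reference period.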

How have Fault Displacement Hazard Assessments evolved over time?

The first systematic PFDHA was developed in the early 2000s for the Yucca Mountain nuclear waste repository in the USA. Since then, the methodology has expanded from normal faults to include strike-slip and reverse faults worldwide. Over the past 25 years, new global databases of surface ruptures supporting statistical analysis, advances in statistical modeling, and international benchmark exercises have significantly improved the reliability and comparability of PFDHA approaches. In the future, the field should integrate remote sensing data, artificial intelligence, and physics-based modeling to better capture the complexity of earthquake ruptures.

What are the societal benefits of developing PFDHAs?

By quantifying the hazard of surface fault rupture, PFDHAs provide critical input for the safe design of infrastructures. This helps to avoid catastrophic failures such as pipeline leaks, dam collapses and resulting flooding, or road and railway disruption. Beyond engineering, PFDHAs also support land-use planning by identifying areas where construction should be avoided. Ultimately, these assessments reduce economic losses, improve resilience, and protect human lives in earthquake-prone regions.

What are some real-life examples of PFDHAs being developed and implemented?

One of the earliest and most influential applications was at Yucca Mountain, Nevada, where PFDHA helped assess the safety of a proposed nuclear waste repository. More recently, PFDHA approaches have been adopted internationally, including in Japan and Italy, for assessing risks to dams, tunnels, and other critical infrastructure.

What are some of the most exciting recent developments in this field?

These photos show how earthquakes can damage critical infrastructure such as bridges, dams, railways, and pipelines. The images highlight both principal and distributed fault ruptures, underscoring why engineers and planners must consider both when assessing earthquake hazards. Credit: Valentini et al. [2025], Figure 4

Recent years have seen major advances thanks to new global databases such as the worldwide and unified database of surface ruptures (SURE) and the Fault Displacement Hazard Initiative (FDHI), which collect tens of thousands of observations of past surface ruptures. Remote sensing techniques now allow scientists to map fault ruptures with unprecedented detail. Importantly, these techniques have also awakened the geological and seismological community to the relevance of moderate earthquakes. Since the 2000s and 2010s, it has become clear that earthquakes smaller than magnitude 6.5 can also produce significant surface ruptures, a threat that was often overlooked before these technological advances. Additionally, international collaborations, such as the International Atomic Energy Agency benchmark project, are helping to unify approaches and ensure that PFDHAs are robust and reproducible across different regions.

What are the major unsolved or unresolved questions and where are additional research, data, or modeling efforts needed?

Several challenges remain. A key issue is the limited number of well-documented earthquakes outside North America and Japan, leaving other regions underrepresented in global databases. Another challenge is how to model complex, multi-fault ruptures, which are increasingly observed in large earthquakes. Understanding the controls on off-fault deformation, as revealed by modern geodetic techniques during large to moderate events, is another critical open question. This knowledge could improve our ability to predict rupture patterns and displacement amounts.

Similarly, the role of near-surface geology in controlling the location, size, and distribution of surface ruptures for a given earthquake magnitude remains poorly constrained and deserves further study. Standardizing terminology and methods is also essential for consistent hazard assessments. Looking forward, more high-quality data, integration of physics-based models, and improved computational frameworks will be crucial to advance the field.


—A. Valentini (alessandro.valentini@univie.ac.at, 0000-0001-5149-2090), University of Vienna, Austria; Francesco Visini (0000-0001-9582-6443), Istituto Nazionale di Geofisica e Vulcanologia, Italy; Paolo Boncio (0000-0002-4129-5779), Università degli Studi “G. d’Annunzio,” Italy; Oona Scotti (0000-0002-6640-9090), Autorité de Sûreté Nucléaire et de Radioprotection, France; and Stéphane Baize (0000-0002-7656-1790), Autorité de Sûreté Nucléaire et de Radioprotection, France

Editor’s Note: It is the policy of AGU Publications to invite the authors of articles published in Reviews of Geophysics to write a summary for Eos Editors’ Vox.

Citation: Valentini, A., F. Visini, P. Boncio, O. Scotti, and S. Baize (2025), When the earth moves: 25 years of probabilistic fault displacement hazards, Eos, 106, https://doi.org/10.1029/2025EO255033. Published on 17 October 2025. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Sedimentary rocks reveal ancient ocean floor cooling

Phys.org: Earth science - Fri, 10/17/2025 - 15:56
Rocks store information from long ago. For instance, their composition can reveal the environmental conditions during their formation. This makes them extremely important in climate research. This led a research team at the University of Göttingen and the GFZ Helmholtz Center for Geosciences to investigate the following: do "cherts"—sedimentary rocks that form when silica-rich sediment mud is buried hundreds of meters deep—reveal anything about the climate of the past?

Scientists Must Join Forces to Solve Forecasting’s Predictability Desert

EOS - Fri, 10/17/2025 - 11:55

Should I wear a jacket to work today, or will I be too warm? Will that hurricane miss my town, or should I prepare to evacuate? We rely on accurate short-term weather forecasts both to make mundane daily decisions and to warn us of extreme events on the horizon. At the same time, Earth system scientists focus on understanding what drives variations in temperature, precipitation, and extreme conditions over periods spanning months, decades, and longer.

Between those two ends of the forecasting spectrum are subseasonal-to-seasonal (S2S) predictions on timescales of 2 weeks to 2 months. S2S forecasts bridge the gap between short-term weather forecasts and long-range outlooks and hold enormous potential for supporting effective advance decisionmaking across sectors ranging from water and agriculture to energy, disaster preparedness, and more. Yet these timescales represent an underdeveloped scientific frontier where our predictive capabilities are weakest. Indeed, the S2S range is often referred to as the predictability desert.

Forecasts at 3- to 4-week lead times, for example, remain inconsistent. Sometimes, so-called windows of opportunity arise when models provide strikingly accurate, or skillful, guidance at this timescale. But these windows of skillful S2S forecasting are themselves unpredictable. Why do they occur when they do? Do they have recognizable precursors? And how does predictability depend on the quantity (e.g., temperature versus precipitation) being predicted?

Three interlocking puzzle pieces represent the integration of weather prediction (left) and long-term outlooks (right) with the “missing middle” of S2S predictability (center). The center piece highlights key applications—agriculture, water availability, and disaster preparedness—and the tools needed to advance S2S skill, including modeling, data assimilation (DA), artificial intelligence (AI), and multiscale process understanding. Credit: Simmi Readle/NSF NCAR

These questions are more than academic curiosities. Answering them would transform our ability to gauge the value of S2S forecasts in real time and to anticipate and respond to high-impact events such as heat waves, flooding rains, drought onset, and wildfires.

Tackling this challenge requires traditionally siloed communities—scientists focused on predicting near-term weather and those focused on projecting long-term changes in the Earth system—to coordinate efforts. Together, these communities can advance scientific understanding and predictive capabilities across scales.

Discovering Windows of Opportunity

The challenges of S2S prediction reflect the complex and interconnected dynamics of the Earth system. At these lead times, forecast skill relies not only on the accuracy of initial input atmospheric conditions—always a vital element for weather forecasts—but also on model treatments of slowly evolving components of the Earth system. These components—including the ocean state, land surface conditions, snow cover, atmospheric composition, and large-scale patterns of variability such as the Madden-Julian Oscillation (MJO), El Niño–Southern Oscillation, stratospheric quasi-biennial oscillation, and sudden stratospheric warmings—interact in ways that enhance or degrade forecast performance. Volcanic eruptions can further influence these interactions, altering circulation patterns and modulating surface climate on S2S timescales.

Researchers have made substantial progress in understanding these individual Earth system components. But we still cannot reliably anticipate when models will yield skillful forecasts because their accuracy at S2S timescales is episodic and state dependent, meaning it comes and goes and depends on various interacting conditions at any given time. A model might perform well for a given region in one season—yielding a window of opportunity—but struggle in another region or season.

So how might we get better at anticipating such windows? For starters, rather than viewing the predictive capability of models as fixed, we can treat it as a dynamic property that changes depending on evolving system conditions. This paradigm shift could help scientists focus on developing tools and metrics that help them anticipate when forecasts will be most reliable. It could also suggest a need to rethink strategies for collecting environmental observations.

Just as predictability is episodic, so too might be the value of strategically enhanced observations. For example, targeted observations of sea surface temperatures, soil moisture, or atmospheric circulation during periods when these conditions strongly influence forecast skill could be far more valuable than the same measurements made at other times. Such adaptive, or state-aware, observing strategies, say, intensifying atmospheric sampling ahead of a developing MJO event, would mean concentrating resources where and when they will matter most. By feeding these strategically enhanced observations into forecast models, scientists could improve both the forecasts themselves and the ability to evaluate their reliability.

Aligning Goals Across Disciplines

To drive needed technical advances supporting improved S2S predictability, we also need a cultural shift to remove barriers between scientific disciplines. S2S timescales fall at the intersection of weather forecasts and seasonal to decadal outlooks, and the communities working on those different types of predictions have different focuses and research strengths. Weather prediction emphasizes initial condition accuracy, data assimilation, and high-resolution modeling of fast atmospheric processes. Studying Earth system behavior and variability over longer timescales focuses on modeling slowly evolving boundary conditions (e.g., the ocean) and coupled component interactions (e.g., between the land and the atmosphere).

Historically, these communities have operated along parallel tracks, each with its own institutions, funding structures, and research priorities. The challenge of identifying windows of opportunity at S2S timescales offers a unifying scientific problem.

Earth system features that offer potentially promising signals of S2S predictability, such as the MJO, are already shared terrain, studied through the lenses of both weather and longer-term change. Extreme events are another area of convergence: Weather models focus on forecasting specific short-lived, high-impact events, whereas Earth system models explore the conditions and teleconnections that influence the likelihood and persistence of extremes. Together, these complementary perspectives can illuminate not only what might happen but why and when skillful forecasts are possible.

The path to unlocking S2S predictability involves more than simply blending models, though. It requires aligning the communities’ scientific goals, model performance evaluation strategies, and approaches for dealing with uncertainty. These approaches include the design of model ensembles, data assimilation strategies that quantify uncertainty in initial conditions, probabilistic evaluation methods, and ways of communicating forecast confidence to users.
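One of the simplest of the probabilistic evaluation methods mentioned above is the Brier score, the mean squared difference between forecast probabilities and binary outcomes, which can be compared against a climatological reference to yield a skill score. The sketch below uses entirely synthetic numbers; the event ("above-normal week-3 temperature") and all values are hypothetical.

# Minimal sketch of one common probabilistic verification metric, the Brier score,
# applied to synthetic week-3 "above-normal temperature" forecasts.
import numpy as np

rng = np.random.default_rng(0)
n_forecasts = 200
p_forecast = rng.uniform(0.0, 1.0, n_forecasts)              # ensemble-derived event probabilities
outcome = rng.uniform(0.0, 1.0, n_forecasts) < p_forecast    # synthetic "observed" events

brier = np.mean((p_forecast - outcome.astype(float)) ** 2)

# Reference: a climatological forecast that always issues the observed base rate.
base_rate = outcome.mean()
brier_clim = np.mean((base_rate - outcome.astype(float)) ** 2)
bss = 1.0 - brier / brier_clim   # Brier skill score; > 0 means better than climatology

print(f"Brier score: {brier:.3f}, climatology: {brier_clim:.3f}, BSS: {bss:.2f}")

Tracking such scores as a function of the evolving system state is one way to quantify when a window of opportunity is open.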

The path forward also entails building modeling systems that capitalize on the weather community’s expertise in initialization and the Earth system modeling community’s insights into boundary forcing and component coupling. Accurate initialization must capture all Earth system components—from soil moisture, ocean heat content, and snow cover, for example, to the state of the atmosphere, including the stratosphere. However, observations and data assimilation for several key variables, especially in the ocean, stratosphere, and other data-sparse regions, remain limited, constraining our ability to represent their influences in prediction systems.

A near-term opportunity for aligning goals and developing models lies in improving prediction of MJO-related extreme rainfall events, which arise from tropical ocean–atmosphere interactions and influence regional circulation and precipitation. This improvement will require that atmospheric convection be better represented in models, a long-standing challenge in both communities.

Emerging kilometer-scale models and machine learning offer shared innovation and collaboration spaces. Kilometer-scale models can explicitly resolve convection, validate and refine model parameterizations, and elucidate interactions between large-scale circulation and small-scale processes. Machine learning provides new avenues to emulate convection-permitting simulations, represent unresolved processes, and reduce systematic model errors.

Success with this challenge could yield immediate value for science and decisionmaking by, for example, enabling earlier warnings for flood-prone areas and supporting more informed planting and irrigation decisions in agriculture.

From Forecast Skill to Societal Resilience

The societal need for more skillful S2S prediction is urgent and growing. Communities worldwide are increasingly vulnerable to extreme conditions whose impacts unfold on weekly to monthly timescales. In scenarios such as a prolonged dry spell that turns into drought, a sudden warming trend that amplifies wildfire risk, or a stalled precipitation pattern that leads to flooding, insights from S2S forecasting could provide foresight and opportunities to prepare in affected areas.

Officials overseeing water management, energy planning, public health, agriculture, and emergency response are all seeking more reliable guidance for S2S time frames. In many cases, forecasts providing a few additional weeks of lead time could enable more efficient resource allocation, preparedness actions, and adaptation strategies. Imagine if forecasts could reliably indicate prolonged heat waves 3–4 weeks in advance. Energy providers could prepare for surges in cooling demand, public health officials could implement heat safety campaigns, and farmers could adjust planting or irrigation schedules to reduce losses.

The resilience of infrastructure, ecosystems, and economies hinges on knowing not only what might happen but also when we can trust our forecasts. By focusing on understanding when and where we have windows of opportunity with S2S modeling, we open the door to developing new, intermediate-term forecasting systems that are both skillful and useful—forecast systems that communicate confidence dynamically and inform real-world decisions with nuance.

Realizing this vision will require alignment of research priorities and investments. S2S forecasting and modeling efforts have often fallen between the traditional mandates of agencies concerned with either weather or longer-term outlooks. As a result, the research and operational efforts of these communities have not always been coordinated or sustained at the scale required to drive progress.

Coordination and Collaboration

With growing public attention on maintaining economic competitiveness internationally and building disaster resilience, S2S prediction represents an untapped opportunity space. And as machine learning and artificial intelligence offer new ways to explore predictability with models and to extract meaningful patterns from model outputs, now is the time to advance the needed coordination.

The many use cases for S2S prediction underscore that it isn’t just a scientific challenge, it’s a strategy for achieving resilience and prosperity. We call on a variety of communities and enterprises to collaborate and rally around the challenge of illuminating windows of opportunity in S2S modeling.

Scientists from traditionally distinct disciplines should codesign research strategies to jointly investigate when, where, and why S2S skill emerges. For example, they could examine weather regimes (e.g., the Pacific or Alaska ridges) and their links to modes of variability (e.g., the North Atlantic Oscillation) and leverage data assimilation to better understand how these phenomena evolve across timescales.

The scientific community could also identify and evaluate critical observational gaps that limit progress in modeling and data assimilation. And they could develop strategies to implement adaptive observing approaches that, for example, target soil moisture, surface energy fluxes, and boundary layer profiles to better capture land-atmosphere interactions at S2S timescales. Such approaches would help to fill gaps and advance understanding of key Earth system processes.

Modeling centers could build flexible prediction systems that allow for advanced data assimilation and incorporate robust coupling of Earth system components—drawing from the weather and Earth system modeling communities, respectively—to explore how initial conditions and boundary forcing jointly influence S2S skill. Using modular components—self-contained pieces of code that represent individual Earth system processes, such as atmospheric aerosols and dynamic vegetation—within these systems could help isolate sources of predictability and improve process-level understanding.

To sustain progress initiated by scientists and modeling centers, agencies and funders must recognize S2S prediction as a distinct priority and commit to investing in the needed modeling, observations, and institutional coordination.

Furthermore, it’s essential that scientists, decisionmakers, and end users codevelop forecast tools and information. Close integration among these groups would focus scientific innovation on what users define as useful and actionable, allowing scientists to build tools that meet those needs.

S2S forecasting may never deliver consistent skill across all timescales and regions, but knowing when and where it is skillful could make it profoundly powerful for anticipating high-impact hazards. Can we reliably predict windows of opportunity to help solve the predictability desert? Let’s do the work together to find out.

Author Information

Jadwiga H. Richter (jrichter@ucar.edu) and Everette Joseph, National Science Foundation National Center for Atmospheric Research, Boulder, Colo.

Citation: Richter, J. H., and E. Joseph (2025), Scientists must join forces to solve forecasting’s predictability desert, Eos, 106, https://doi.org/10.1029/2025EO250389. Published on 17 October 2025. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

A Flash, a Boom, a New Microbe Habitat

EOS - Fri, 10/17/2025 - 11:54

A sizable asteroid impact generally obliterates anything alive nearby. But the aftermath of such a cataclysm can actually function like an incubator for life. Researchers studying a Finnish impact structure found minerals whose chemistry implies that microbes were present roughly 4 million years after the impact. These findings, which were published in Nature Communications last month, shed light on how rapidly microscopic life colonizes a site after an asteroid impact.

A Special Lake

Finland is known for its myriad lakes used by boaters, fishers, swimmers, and other outdoor aficionados. Lake Lappajärvi is a particularly special Finnish lake with a storied past: Its basin was created roughly 78 million years ago when an asteroid slammed into the planet. In 2024, the United Nations Educational, Scientific and Cultural Organization (UNESCO) established a geopark in South Ostrobothnia, Finland, dedicated to preserving and sharing the history of the 23-kilometer-diameter lake and the surrounding region.

Jacob Gustafsson, a geoscientist at Linnaeus University in Kalmar, Sweden, and his colleagues recently analyzed a collection of rocks unearthed from deep beneath Lake Lappajärvi. The team’s goal was to better understand how rapidly microbial life colonized the site after the sterilizing impact, which heated the surrounding rock to around 2,000°C (3,632°F).

There’s an analogue between this type of work and studies of the origin of life, said Henrik Drake, a geochemist at Linnaeus University and a member of the team. That’s because a fresh impact site contains a slew of temperature and chemical gradients and no shortage of shattered rocks with nooks and crannies for tiny life-forms. A similar environment beyond Earth would be a logical place for life to arise, Drake said. “It’s one of the places where you think that life could have started.”

Microbe-Sculpted Minerals

In 2022, Gustafsson and his collaborators traveled to Finland to visit the National Drill Core Archive of the Geological Survey of Finland.

There, in the rural municipality of Loppi, the team pored over sections of cores drilled from beneath Lake Lappajärvi in the 1980s and 1990s. The researchers selected 33 intervals of core that were fractured or shot through with holes. The goal was to find calcite or pyrite crystals that had formed in those interstices as they were washed with mineral-rich fluids.

The team used tweezers to pick out individual calcite and pyrite crystals from the cores. Gustafsson and his collaborators then estimated the ages of those crystals using uranium-lead dating and a technique known as secondary ion mass spectrometry to calculate the ratios of various carbon, oxygen, and sulfur isotopes within them. Because microbes preferentially take up certain isotopes, measuring the isotopic ratios preserved in minerals can reveal the presence of long-ago microbial activity and even identify types of microbes. “We see the products of the microbial process,” Drake said.

“It’s amazing what we can find out in tiny crystals,” Gustafsson added.

The researchers also used isotopic ratios of carbon, oxygen, and sulfur to estimate local groundwater temperatures in the distant past. By combining their age and temperature estimates, the team could trace how the Lake Lappajärvi impact site cooled over time.
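The isotope ratios described above are conventionally reported in delta notation: the per mil deviation of a sample's ratio from a reference standard. Strongly negative δ13C values in calcite, for example, point to carbon that passed through methane-cycling microbes. The sketch below uses hypothetical ratios, not measurements from the Lappajärvi cores.

# Minimal sketch of delta notation for stable carbon isotopes; the sample
# ratios below are hypothetical, not values from the Lappajärvi study.
VPDB_13C_12C = 0.0112372   # 13C/12C ratio of the Vienna Pee Dee Belemnite standard

def delta13C_permil(ratio_sample):
    """delta-13C in per mil relative to VPDB."""
    return (ratio_sample / VPDB_13C_12C - 1.0) * 1000.0

# Calcite built from methane-derived carbon is strongly depleted in 13C,
# whereas typical marine carbonate sits near 0 per mil.
for label, r in [("typical marine carbonate", 0.0112370),
                 ("hypothetical methane-influenced calcite", 0.0107000)]:
    print(f"{label}: delta-13C = {delta13C_permil(r):+.1f} per mil")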

A Slow Cool

Groundwater temperatures at Lake Lappajärvi had cooled to around 50°C (122°F) roughly 4 million years after the impact, the team found. That’s a far slower cooling rate than has been inferred for other similarly sized impact craters, such as Ries Crater in Germany, in which hydrothermal activity ceased after about 250,000 years, and Haughton Crater in Canada, where such activity lasted only about 50,000 years.

“Four million years is a very long time,” said Teemu Öhman, an impact geologist at the Impact Crater Lake–Lappajärvi UNESCO Global Geopark in South Ostrobothnia, Finland, not involved in the research. “If you compare Lappajärvi with Ries or Haughton, which are the same size, they cooled way, way, way faster.”

That difference is likely due to the type of rocks that predominate at the Lappajärvi impact site, Gustafsson and his collaborators proposed. For starters, there’s only a relatively thin layer of sedimentary rock at the surface. “Sedimentary rocks often don’t fully melt during impact because of their inherent water and carbon dioxide content,” Drake explained. And Lappajärvi has a thick layer of bedrock (including granites and gneisses), which would have melted in the impact, sending temperatures surging to around 2,000°C, earlier research estimated.

About 4 million years after the impact is also when microbial activity in the crater began, according to Gustafsson and his collaborators. Those ancient microbes were likely converting sulfate into sulfide, the team proposed. And roughly 10 million years later, when temperatures had fallen to around 30°C (86°F), methane-producing microbes appeared, the researchers surmised on the basis of their isotopic analysis of calcite.

In the future, Gustafsson and his colleagues plan to study other Finnish impact craters and look for similar microbial features in smaller and older impact structures. In the meantime, the team is carefully packaging up their material from the Lappajärvi site. It’s time to return the core samples to the Geological Survey of Finland, Drake said. “Now we need to ship them back.”

—Katherine Kornei (@KatherineKornei), Science Writer

Citation: Kornei, K. (2025), A flash, a boom, a new microbe habitat, Eos, 106, https://doi.org/10.1029/2025EO250388. Published on 17 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Turbulence-generated stepped safety factor profiles in tokamaks with low magnetic shear

Physical Review E (Plasma physics) - Fri, 10/17/2025 - 10:00

Author(s): Arnas Volčokas, Justin Ball, Giovanni Di Giannatale, and Stephan Brunner

Nonlinear local and global gyrokinetic simulations of tokamak plasmas demonstrate that turbulence-generated currents flatten the safety factor profile near low-order rational surfaces when magnetic shear is low, even when the plasma β is small. A large set of flux tube simulations with different saf…


[Phys. Rev. E 112, L043201] Published Fri Oct 17, 2025

Study finds humans outweigh climate in depleting Arizona's water supply

Phys.org: Earth science - Fri, 10/17/2025 - 08:56
A study led by University of Arizona researchers shows that decades of groundwater pumping by humans have depleted Tucson-area aquifers far more than natural climate variation. Published in the journal Water Resources Research, the study provides the first multi-millennial reconstruction for the region that places human impacts on groundwater into long-term context.

Using remote-dynamic earthquake triggering as a stress meter: identifying potentially susceptible faults in the Lower Rhine Embayment near Weisweiler, Germany

Geophysical Journal International - Fri, 10/17/2025 - 00:00
Abstract: Transient stress changes from seismic waves of distant earthquakes can promote local fault slip, a phenomenon referred to as remote dynamic triggering. This study examines the remote triggering susceptibility of faults in the Lower Rhine Embayment (LRE) in the Weisweiler region, Germany, a proposed site for geothermal energy production. Assessment of remote triggering can guide industrial operations to assess seismic hazard and mitigate risks associated with fault reactivation caused by small stress perturbations. We select a set of 23 candidate mainshocks from global earthquake catalogs that produce peak ground velocities (PGVs) exceeding 0.02 cm/s in the LRE. The magnitude of these mainshocks ranges from 5.4 to 9.1, epicentral distances range from 50 to 12,300 km, and back azimuths range from 16° to 350° with a maximum azimuthal gap of 91°. The candidate mainshocks generated PGVs locally from 0.02 to 0.28 cm/s (compared to typical threshold values ranging from 0.02 to 6 cm/s), corresponding to dynamic stress (σpd) values of 1.4 to 26 kPa. We use P-statistics and waveform data from local seismic stations to identify seismicity rate changes and uncatalogued earthquakes that were potentially triggered by the passing mainshock waves. The analysis reveals a statistically significant increase in seismicity rates following four mainshocks: the 1992 Mw5.4 Roermond, Netherlands; 2021 Mw8.2 Chignik, Alaska, USA; 2023 Mw7.6 Kahramanmaraş, Republic of Türkiye; and 2025 Mw8.8 Kamchatka, Russia earthquakes. The 1992 Roermond mainshock triggered earthquakes within 50 km of its epicenter that were clustered between the Feldbiss and Sandgewand faults. The same area experienced a triggered earthquake sequence following the Chignik mainshock, suggesting that future detailed monitoring in this area may be warranted. The Roermond aftershock distribution can be divided into two groups of events, including 61 that occur on the fault and in the near field, which can be explained by static-stress increase and fluid diffusion. Another 32 remote aftershocks occurred that are consistent with secondary triggering promoted by aseismic slip propagation. The alignment of triggering mainshock back azimuths with the dominant strike direction of regional faults suggests that the orientation of incoming seismic waves is an important factor influencing susceptibility. Despite evidence of triggering, the majority of mainshocks (19 out of 23) were not followed by detectable seismicity-rate changes in the LRE, highlighting the complexity of conditions that lead to remote dynamic triggering. The study area does not respond to a single triggering stress threshold, suggesting that non-linear effects, or a combination of linear and non-linear effects, dominate possible triggering mechanisms. Although the LRE does not respond to a clear triggering threshold, this study demonstrates that peak dynamic stress perturbations of approximately 1.4 kPa or greater can still trigger earthquakes. However, susceptibility is modulated by additional factors such as fault orientation, earthquake fault-zone properties, their state in the seismic cycle, and pre-existing stress state.
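The peak dynamic stresses quoted above scale, to first order, with peak ground velocity through the plane shear-wave relation sigma_d ≈ G × PGV / Vs. A minimal sketch with commonly assumed crustal values (the study's exact constants may differ) reproduces the reported order of magnitude.

# Minimal sketch of the plane-wave estimate of peak dynamic stress from PGV,
# sigma_d ~ G * PGV / Vs, using assumed crustal values (not the paper's exact constants).
G_PA = 30e9        # assumed crustal shear modulus (Pa)
VS_M_S = 3500.0    # assumed shear-wave velocity (m/s)

def dynamic_stress_kpa(pgv_cm_s):
    pgv_m_s = pgv_cm_s / 100.0
    return G_PA * pgv_m_s / VS_M_S / 1000.0

for pgv in (0.02, 0.28):   # the PGV range reported in the abstract
    print(f"PGV = {pgv:.2f} cm/s -> sigma_d ~ {dynamic_stress_kpa(pgv):.1f} kPa")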

Multi-frequency wavefield solutions for variable velocity models using meta-learning enhanced low-rank physics-informed neural network

Geophysical Journal International - Fri, 10/17/2025 - 00:00
Abstract: Physics-informed neural networks (PINNs) face significant challenges in modeling multi-frequency wavefields in complex velocity models due to their slow convergence, difficulty in representing high-frequency details, and lack of generalization to varying frequencies and velocity scenarios. To address these issues, we propose Meta-LRPINN, a novel framework that combines low-rank parameterization using singular value decomposition (SVD) with meta-learning and frequency embedding. Specifically, we decompose the weights of PINN’s hidden layers using SVD and introduce an innovative frequency embedding hypernetwork (FEH) that links input frequencies with the singular values, enabling efficient and frequency-adaptive wavefield representation. Meta-learning is employed to provide robust initialization, improving optimization stability and reducing training time. Additionally, we implement adaptive rank reduction and FEH pruning during the meta-testing phase to further enhance efficiency. Numerical experiments, which are presented on multi-frequency scattered wavefields for different velocity models, demonstrate that Meta-LRPINN achieves much faster convergence speed and much higher accuracy compared to baseline methods such as Meta-PINN and vanilla PINN. Also, the proposed framework shows strong generalization to out-of-distribution frequencies while maintaining computational efficiency. These results highlight the potential of our Meta-LRPINN for scalable and adaptable seismic wavefield modeling.
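The low-rank idea described above (an SVD-style factorization of each hidden layer, with the singular values supplied by a frequency-embedding hypernetwork) can be sketched in a few lines of numpy. This is only an illustration of the parameterization; the embedding function below is a hypothetical stand-in, not the authors' Meta-LRPINN architecture.

# Minimal numpy sketch of a low-rank, frequency-modulated layer in the spirit of
# the SVD parameterization described above (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, rank = 64, 64, 8

U = rng.standard_normal((n_out, rank)) / np.sqrt(n_in)   # fixed low-rank factors
V = rng.standard_normal((rank, n_in)) / np.sqrt(n_in)

def singular_values_from_frequency(freq_hz):
    """Hypothetical stand-in for the frequency-embedding hypernetwork:
    maps an input frequency to the `rank` singular values of the layer."""
    k = np.arange(1, rank + 1)
    return np.exp(-k / 4.0) * (1.0 + 0.1 * np.log1p(freq_hz))

def layer_forward(x, freq_hz):
    s = singular_values_from_frequency(freq_hz)   # frequency-adaptive singular values
    W = U @ np.diag(s) @ V                        # W = U diag(s) V^T
    return np.tanh(W @ x)

x = rng.standard_normal(n_in)
print(layer_forward(x, freq_hz=5.0).shape)        # (64,)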

Reflection seismic profiling of mantle structure under the contiguous United States from ambient noise cross-correlation

Geophysical Journal International - Fri, 10/17/2025 - 00:00
Abstract: P-wave reflections from the 410- and 660-km mantle discontinuities are visible in stacks of ambient noise cross-correlation functions of USArray stations spanning the contiguous United States. The reflections are most visible on the vertical components at frequencies between 0.1 and 0.3 Hz during low-noise periods, which generally occur during the summer months in the northern hemisphere. Common reflection point stacking can be used to resolve apparent lateral differences in discontinuity structure across the continent and suggests the possible existence of sporadic reflectors at other depths. Visibility of the 660-km reflector is correlated with faster P-wave velocities at similar depth in a tomographic model for North America. However, the lack of clear agreement between these P-wave ambient noise features and prior mantle-transition-zone imaging studies using other methods suggests caution should be applied in their interpretation. Ambient noise sources from the southern oceans may not be distributed uniformly enough for cross-correlation stacks to provide unbiased estimates of the true station-to-station P-wave Green’s functions. However, the clear presence of 410- and 660-km reflections in the ambient noise data suggests that it should be possible to unravel the complexities associated with varying noise source locations to produce reliable P-wave reflection profiles, providing new insights into mantle structure under the contiguous United States.

Inhomogeneous plane waves in attenuative anisotropic porous media

Geophysical Journal International - Fri, 10/17/2025 - 00:00
Abstract: Seismic wave propagation in poro-viscoelastic anisotropic media is of practical importance for exploration geophysics and global seismology. Existing theories generally utilize homogeneous plane wave theory, which considers only velocity anisotropy but neglects attenuation anisotropy and wave inhomogeneity arising from attenuation. As a result, it poses significant challenges to accurately analyze seismic wave dispersion and attenuation in poro-viscoelastic anisotropic media. In this paper, we investigate the propagation of inhomogeneous plane waves in poro-viscoelastic media, explicitly incorporating both velocity and attenuation anisotropy. Starting from classical Biot theory, we present a fractional differential equation describing wave propagation in attenuative anisotropic porous media that accommodates arbitrary anisotropy in both velocity and attenuation. Then, instead of relying on the traditional complex wave vector approach, we derive new Christoffel and energy balance equations for general inhomogeneous waves by employing an alternative formulation based on the complex slowness vector. The phase velocities and complex slownesses of inhomogeneous fast and slow quasi-compressional (qP1 and qP2) and quasi-shear (qS1 and qS2) waves are determined by solving an eighth-degree algebraic equation. By invoking the derived energy balance equation along with the computed complex slowness, we present explicit and concise expressions for energy velocities. Additionally, we analyze dissipation factors defined by two alternative measures: the ratio of average dissipated energy density to either average strain energy density or average stored energy density. We clarify and discuss the implications of these definitional differences in the context of general poro-viscoelastic anisotropic media. Finally, our expressions are reduced to give their counterparts of the homogeneous waves as a special case, and the reduced forms are identical to those presented by the existing poro-viscoelastic theory. Several examples are provided to illustrate the propagation characteristics of inhomogeneous plane waves in unbounded attenuative vertical transversely isotropic porous media.

Two-stage approach for earthquake detection using multiple clustering-based classification

Geophysical Journal International - Fri, 10/17/2025 - 00:00
Abstract: The deep learning (DL) approach has gained attention for earthquake (EQ) detection. To alleviate the problem of training data shortage, transfer learning (TL) provides a useful framework to adapt pre-trained models, typically through tuning of model parameters. Nonetheless, the current practice still requires considerable data, which hinders its application where only a small amount of data is available. Instead of TL, we propose a novel two-stage model correction as a solution to this important and ubiquitous problem in EQ detection. In the proposed approach, a pre-trained DL model is directly applied to waveform data in the target domain (first stage), and the cases that are classified as an earthquake signal (i.e., positive cases) are further classified as positives and negatives using a non-DL classification method (second stage). Our classification method for the second stage is based on multiple clustering, which characterizes local waveform patterns in terms of amplitude scale in specific time segments that are inferred in a data-driven manner. This characterization captures complex high-dimensional waveform patterns in a low-dimensional space, which leads to the effective classification of true and false positives. Furthermore, the proposed method is useful when only true positive waveforms are labeled (PU classification). Both synthetic and real data analyses clearly demonstrated the effectiveness of the proposed method’s unsupervised waveform characterization.

Theoretical background for full-waveform inversion with distributed acoustic sensing and integrated strain sensing

Geophysical Journal International - Fri, 10/17/2025 - 00:00
Abstract: Full-waveform inversion (FWI) is a powerful imaging technique that produces high-resolution subsurface models. In seismology, FWI workflows are traditionally based on seismometer recordings. The development of fibre-optic sensing presents opportunities for harnessing information from new types of measurements. With dense spatial and temporal sampling, fibre-optic sensing captures the seismic wavefield at metre-scale resolution along the cable. Applying FWI to fibre-optic measurements requires the reformulation of the forward and adjoint problems due to two fundamental differences to seismometer data: i) fibre-optic measurements are sensitive to strain rather than translational motion, and ii) they do not represent the motion at a single spatial point, but instead capture the average deformation over a pre-defined cable segment, known as the gauge length. Within this study, we derive the adjoint sources to perform FWI for data from distributed acoustic sensing (DAS) and integrated fibre-optic sensing (IFOS) that are based on moment tensors. Our formulation incorporates gauge-length effects, direction-dependent sensitivity and complex cable layouts. For the numerical simulations, we use a spectral-element solver that allows us to incorporate surface topography, and coupled viscoacoustic and viscoelastic rheologies. In illustrative examples, we present how our theoretical developments can be used in inversions of synthetic fibre-optic data generated for a realistically curved cable placed on irregular topography. As examples, we invert for source parameters, including moment tensor, location, and origin time for noise-free DAS data, noise-contaminated DAS data, and IFOS data. Further, we present the 3-D imaging results for the three data groups and further analyse the effect of scatterers on the FWI based on DAS data. In all example inversions, we compare how close the found model is to the known ground truth. The codes to produce these results are accessible and ready to be applied to real data inversions.
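The gauge-length averaging that distinguishes these measurements from point sensors can be made concrete: the fibre reports the displacement difference across a gauge length L divided by L, rather than the local strain du/dx, which attenuates wavelengths comparable to the gauge. A minimal sketch for a harmonic wave, with assumed wavelength, gauge length, and amplitude:

# Minimal sketch of gauge-length averaging for a fibre-optic (DAS-style) measurement:
# the instrument reports (u(x + L/2) - u(x - L/2)) / L, the mean axial strain over the
# gauge length L, rather than the point strain du/dx. All values below are assumed.
import numpy as np

WAVELENGTH_M = 40.0     # assumed seismic wavelength along the cable
GAUGE_M = 10.0          # assumed gauge length
AMPLITUDE_M = 1e-6      # assumed displacement amplitude

k = 2.0 * np.pi / WAVELENGTH_M

def point_strain(x):
    return AMPLITUDE_M * k * np.cos(k * x)           # du/dx for u(x) = A sin(kx)

def gauge_strain(x, L=GAUGE_M):
    u = lambda s: AMPLITUDE_M * np.sin(k * s)
    return (u(x + L / 2.0) - u(x - L / 2.0)) / L     # what the fibre actually reports

x = 3.0
ratio = gauge_strain(x) / point_strain(x)
print(f"point strain: {point_strain(x):.3e}, gauge strain: {gauge_strain(x):.3e}")
print(f"ratio = {ratio:.3f}  (equals sin(kL/2)/(kL/2) = "
      f"{np.sin(k * GAUGE_M / 2) / (k * GAUGE_M / 2):.3f})")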

Insights into the structural properties of frozen rock from fitting a two-component model to broadband SIP laboratory data

Geophysical Journal International - Fri, 10/17/2025 - 00:00
Abstract: High-frequency induced polarisation, which measures the complex electrical conductivity in a frequency range up to several hundred kHz, is potentially suitable to detect and quantify ice in the frozen subsurface. In order to estimate ice content from the electrical spectra, a two-component weighted power mean (WPM) model has been suggested and applied to field-scale data. In that model, ice is one of the components, whereas the solid phase, residual liquid water and potentially air form the second component, called “matrix”. Here, we apply the model to laboratory data previously discussed in the literature, with the aim to assess the applicability of the model and to understand the behaviour of the frequency-dependent electrical conductivity. The data were measured on an unconsolidated sediment sample with 20.8% water content from the European Alps, and a consolidated sandstone with 16.6% porosity. Electrical spectra have been measured over a temperature range from approx. −41 °C to +20 °C and a frequency range from 0.01 Hz to 45 kHz. We extend the original WPM model to account for low-frequency polarisation in form of a constant phase angle model. The measured data were fitted with the model by a least-squares inversion algorithm. In order to reduce the ambiguity, we constrained several of the nine underlying parameters by literature values, in particular for the electrical properties of water ice, and the expected ice content according to porosity or water content of the unfrozen sample. Both data sets can be well matched, corroborating the hypothesis that the model is in principle suitable to explain measured data of frozen samples in that frequency range. One important observation is that the mixing parameter, i.e. the power in the WPM model, which is controlled by the geometric arrangement of the two components, depends on temperature. For the unconsolidated sample it even becomes negative at the coldest temperature, which is important because negative shape factors relate to specific geometries. A second observation is that relatively large permittivities of the matrix are required to fit the data, suggesting that processes at the interface between solid/liquid phase and ice, which are not included in the volumetric mixing model, might be relevant and should be considered in future extensions of the model.
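The two-component weighted power mean referred to above combines the complex conductivities of ice and matrix as sigma_eff = (f_ice * sigma_ice^p + f_matrix * sigma_matrix^p)^(1/p), where the exponent p encodes the geometric arrangement of the components. A minimal sketch with hypothetical conductivities (not the fitted values from the study):

# Minimal sketch of a two-component weighted power mean (WPM) mixing model for
# complex electrical conductivity. All component values are hypothetical.
import numpy as np

f_ice = 0.21                       # assumed ice volume fraction
sigma_ice = 1e-5 + 1e-6j           # assumed complex conductivity of ice (S/m)
sigma_matrix = 5e-3 + 2e-4j        # assumed complex conductivity of the "matrix" (S/m)

def wpm(sigma1, sigma2, f1, p):
    """Weighted power mean of two complex conductivities; p is the mixing exponent."""
    return (f1 * sigma1 ** p + (1.0 - f1) * sigma2 ** p) ** (1.0 / p)

for p in (0.5, 0.25, -0.5):        # the exponent reflects the component geometry
    s = wpm(sigma_ice, sigma_matrix, f_ice, p)
    print(f"p = {p:+.2f}: |sigma_eff| = {abs(s):.2e} S/m, phase = {np.angle(s) * 1e3:.1f} mrad")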

Decoding dangers of Arctic sea ice with radar, seismic methods and fiber-optic sensing

Phys.org: Earth science - Thu, 10/16/2025 - 19:55
Sea ice coverage in the Arctic Ocean is at one of its lowest levels on record, yet there's no unanimity on when that ice will disappear completely during summer months.

Coral skeletons left by a medieval tsunami whisper a warning for Caribbean region

Phys.org: Earth science - Thu, 10/16/2025 - 19:25
Sometime between 1381 and 1391, an earthquake exceeding magnitude 8.0 rocked the northeastern Caribbean and sent a tsunami barreling toward the island of Anegada.

Baltic Sea emerges as model for understanding consequences of climate change on coasts

Phys.org: Earth science - Thu, 10/16/2025 - 18:10
A review article led by the Leibniz Institute for Baltic Sea Research Warnemünde (IOW) outlines the state of the Baltic Sea coast and its expected development as a result of climate change. The article shows that the Baltic Sea can serve as a model for the consequences of climate change on coasts and that interdisciplinary research is needed to investigate changes in its shallow coastal zones. The focus is on the interactions between the coastal area and the open ocean, with the aim of developing a basis for marine conservation measures. The feature article was recently published in the journal Estuarine, Coastal and Shelf Science.

Large fluctuations in sea level throughout the last ice age challenge understanding of past climate

Phys.org: Earth science - Thu, 10/16/2025 - 18:00
Large changes in global sea level, fueled by fluctuations in ice sheet growth and decay, occurred throughout the last ice age, rather than just toward the end of that period, a study published in the journal Science has found.
