Author(s): Ian M. DesJardin, Christine M. Hartzell, and Jonathan Wrieden
Ion acoustic solitons can be generated by a charged object immersed in an electrostatic quasineutral two-temperature plasma flow. These are often described by the forced Korteweg–de Vries equation. Two-fluid simulations of this scenario are conducted and compared to numerical solutions of the forced…
[Phys. Rev. E 111, 025204] Published Wed Feb 19, 2025
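For reference, a commonly quoted form of the forced Korteweg–de Vries equation (the precise coefficients and forcing used in the paper above are not given in this truncated abstract and may differ) is
\[
\frac{\partial u}{\partial t} + \alpha\, u\,\frac{\partial u}{\partial x} + \beta\,\frac{\partial^{3} u}{\partial x^{3}} = \frac{\partial f}{\partial x},
\]
where \(u\) is the perturbed field (e.g., normalized ion density or electrostatic potential), \(\alpha\) and \(\beta\) are nonlinearity and dispersion coefficients set by the plasma parameters, and \(f(x,t)\) represents the forcing associated with the charged object; flow-past-an-obstacle formulations often also include a linear term proportional to the detuning of the flow speed from the ion-acoustic speed.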
Author(s): Hong Qin, Elijah J. Kolmes, Michael Updike, Nicholas Bohlsen, and Nathaniel J. Fisch
Phase space engineering by rf waves plays important roles in both thermal D-T fusion and nonthermal advanced fuel fusion, but not all phase space manipulation is allowed; certain fundamental limits exist. In addition to Liouville's theorem, which requires the manipulation to be volume preserving, Gr…
[Phys. Rev. E 111, 025205] Published Wed Feb 19, 2025
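For context (this is the standard statement of Liouville's theorem, not text from the truncated abstract above): the phase-space density \(\rho(\mathbf{q},\mathbf{p},t)\) of a Hamiltonian system is conserved along trajectories, so any phase-space manipulation realizable by Hamiltonian dynamics, including that driven by rf waves, must preserve phase-space volume:
\[
\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \{\rho, H\} = 0 .
\]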
Summary
In this paper, we present a catalogue of relocated seismic events in the North Sea spanning 1961 to 2022. Data from all relevant agencies were combined, incorporating all available seismic phase readings and thereby enhancing station coverage. As a result, our updated locations reveal a more clustered and aligned seismicity pattern compared with the original catalogue. Even with our combined dataset, only 157 of the 7,089 relocated events have azimuthal gaps of less than 90 degrees. Additionally, the distances between onshore stations and offshore events are considerable. Both of these factors lead to relatively poorly constrained hypocentres for most events. We therefore evaluate the performance of 1D velocity models routinely used by different North Sea adjacent monitoring agencies for earthquake location in the North Sea. The variation in locations due to the choice of seismic velocity model is significantly larger than the uncertainty ellipses calculated in the relocation, demonstrating that formally computed uncertainties systematically underestimate the true location uncertainty in this setting. Obtaining a realistic estimate of location uncertainty is, however, crucial, particularly for distinguishing between natural and induced seismicity. This is fundamental to safe monitoring of the North Sea offshore industries, including geological CO2 storage. To overcome the discrepancy between the uncertainty ellipses and our multiple relocations, we introduce an alternative method that accounts for variability in the 1D velocity models. This approach enhances the reliability of the earthquake catalogue and provides a more robust assessment of seismic activity in the North Sea.
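As a rough, hypothetical illustration of the kind of comparison described above (not the authors' actual workflow or data), the sketch below contrasts a formal error ellipse from a single relocation with the scatter of epicentres obtained by relocating the same event under several 1D velocity models; all numbers are made up.

```python
import numpy as np

# Hypothetical epicentres (km east, km north of a reference point) obtained by
# relocating one event with four different 1D velocity models.
epicentres = np.array([
    [0.0, 0.0],
    [3.5, -1.2],
    [-2.8, 4.1],
    [1.9, 5.6],
])

# Hypothetical formal covariance (km^2) reported by a single relocation.
formal_cov = np.array([[1.0, 0.2],
                       [0.2, 1.5]])

# Scatter between velocity models, expressed as a sample covariance.
model_cov = np.cov(epicentres.T)

def ellipse_semi_axes(cov):
    """Semi-axes (km) of the 1-sigma error ellipse for a 2x2 covariance."""
    return np.sqrt(np.linalg.eigvalsh(cov))

print("formal 1-sigma semi-axes (km):       ", ellipse_semi_axes(formal_cov))
print("between-model 1-sigma semi-axes (km):", ellipse_semi_axes(model_cov))
# If the between-model axes are much larger, the formal ellipse
# underestimates the true location uncertainty.
```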
Summary
Elastic full-waveform inversion has recently been utilized to estimate the physical properties of the upper tens of meters of the subsurface, leveraging its capability to exploit the complete information contained in recorded seismograms. However, due to the non-linear and ill-posed nature of the problem, standard approaches typically require an optimal starting model to avoid producing non-physical solutions. Additionally, conventional optimization methods lack robust uncertainty quantification, which is essential for subsequent informed decision-making. Bayesian inference offers a framework for estimating the posterior probability density function through the application of Bayes' theorem. Methods based on Markov Chain Monte Carlo use multiple sample chains to quantify and characterize the uncertainty of the solution. However, despite their ability to theoretically handle any form of distribution, these methods are computationally expensive, which limits their use in large-scale problems with costly forward modelling, as in the case of full-waveform inversion. Variational inference provides an alternative approach to estimating the posterior distribution through a parametric or non-parametric proposal distribution. Among this class of methods, Stein Variational Gradient Descent stands out for its ability to iteratively refine a set of samples, usually referred to as particles, to approximate the target distribution through an optimization process. However, mode- and variance-collapse issues affect this approach when applied to high-dimensional inverse problems. To address these challenges, we propose an annealed variant of the Stein Variational Gradient Descent algorithm and apply it to the elastic full-waveform inversion of surface waves. We validate the proposed approach on a synthetic test, where the velocity model is characterized by significant lateral and vertical velocity variations. We then invert a field dataset from the InterPACIFIC project, showing that our method is robust against cycle-skipping issues and can provide reasonable uncertainty estimates at a limited computational cost.
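The following is a minimal, self-contained sketch of annealed Stein Variational Gradient Descent on a toy 2D Gaussian-mixture target. It is meant only to illustrate the particle update and an annealing schedule; the RBF kernel with median-heuristic bandwidth, the linear annealing ramp, and all parameter values are assumptions, not the authors' full-waveform inversion setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal target: equal-weight mixture of two isotropic 2D Gaussians.
MU = np.array([[-3.0, 0.0], [3.0, 0.0]])
SIGMA2 = 1.0

def grad_log_p(x):
    """Gradient of the log target density at each particle, shape (n, 2)."""
    diffs = MU[None, :, :] - x[:, None, :]            # (n, 2 modes, 2 dims)
    logw = -0.5 * np.sum(diffs**2, axis=-1) / SIGMA2
    logw -= logw.max(axis=1, keepdims=True)
    r = np.exp(logw)
    r /= r.sum(axis=1, keepdims=True)                 # mode responsibilities
    return np.sum(r[:, :, None] * diffs, axis=1) / SIGMA2

def svgd_direction(x, grad_logp, gamma):
    """Annealed SVGD update direction with an RBF kernel (median heuristic)."""
    sq = np.sum((x[:, None, :] - x[None, :, :])**2, axis=-1)
    h = np.median(sq) / np.log(len(x) + 1) + 1e-12
    K = np.exp(-sq / h)
    # Attractive (gradient) term, tempered by gamma, plus repulsive kernel term.
    attract = K @ (gamma * grad_logp)
    repulse = (2.0 / h) * (x * K.sum(axis=1, keepdims=True) - K @ x)
    return (attract + repulse) / len(x)

particles = rng.normal(scale=0.5, size=(100, 2))  # all start near the origin
n_iter, step = 500, 0.05
for it in range(n_iter):
    gamma = min(1.0, (it + 1) / (0.5 * n_iter))   # anneal 0 -> 1, then hold
    particles += step * svgd_direction(particles, grad_log_p(particles), gamma)

print("x-range of final particles (both modes near +/-3 should be populated):",
      particles[:, 0].min().round(2), particles[:, 0].max().round(2))
```

The early, strongly tempered iterations let the repulsive term spread the particles before the full target gradient is switched on, which is the basic idea used to mitigate mode collapse.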
Summary
Probabilistic forecasts of earthquakes caused by anthropogenic changes in subsurface stresses require seismicity models that link rupture nucleation to stress states in geological faults. The recently introduced time-dependent stress response (TDSR) model is based on an exponential dependence of the time-to-failure on stress and is a generalization of the well-known rate-and-state (RS) seismicity model. Unlike RS, TDSR can directly incorporate estimates of the initial stress distribution on affected faults in the seismogenic zone. For the case of the Groningen gas field in the Netherlands, we utilize detailed field and borehole studies to estimate the initial stress distribution and rock properties of the reservoir faults. Using these initial conditions, we show that TDSR outperforms the Coulomb failure model, which assumes instantaneous failure, as well as the RS model, which relies on simplified pre-stress assumptions. Furthermore, an instantaneous Coulomb failure model cannot explain the effect of seasonal gas production in Groningen on the timing of induced earthquakes, in contrast to the TDSR model, which shows a good agreement between prediction and observation. Pseudo-prospective tests show that the seismic response to the reduced production since 2014 could have been predicted as early as 2010 if the production scenario had been known.
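As a purely illustrative toy (not the TDSR implementation used in the paper, and with arbitrary parameter values and loading history), the exponential stress dependence of failure times can be mimicked by giving each fault patch a failure rate that grows exponentially with the Coulomb stress it has accumulated beyond its assumed initial stress state:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population of fault patches with an assumed initial stress deficit
# (distance to failure, in MPa) drawn from a broad distribution.
n_patches = 20000
stress_deficit = rng.uniform(0.0, 5.0, n_patches)   # MPa below failure

# Assumed loading history: Coulomb stress increases linearly with time.
t = np.linspace(0.0, 50.0, 501)                      # years
stress = 0.05 * t                                    # MPa, 0.05 MPa/yr
dt = t[1] - t[0]

# Exponential stress dependence of the expected time-to-failure:
# failure rate per patch lambda = (1/t0) * exp((stress - deficit) / d_sigma).
t0, d_sigma = 100.0, 0.3                             # years, MPa

failed = np.zeros(n_patches, dtype=bool)
events_per_step = []
for s in stress:
    lam = np.exp((s - stress_deficit) / d_sigma) / t0
    p_fail = 1.0 - np.exp(-lam * dt)                 # failure prob. this step
    new = (~failed) & (rng.random(n_patches) < p_fail)
    events_per_step.append(new.sum())
    failed |= new

rate = np.array(events_per_step) / dt                # events per year
print("peak seismicity rate (events/yr):", rate.max(),
      "at t =", t[np.argmax(rate)], "yr")
```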
Accurate measurement of land surface damage (such as small-scale fracturing and inelastic deformation) from two major earthquakes in 2023 can help scientists assess future earthquake hazards and therefore minimize risk to people and infrastructure. However, attaining precise, extensive measurements in earthquake zones remains challenging.
Traces of organisms detected in sediments from 7.5 kilometers below the ocean surface reveal how organisms living in the deep sea are engineering their own environments. Analyses of sediment cores from the Pacific Ocean's Japan Trench, presented in Nature Communications, uncover evidence of burrowing and feeding activity of these deep-sea dwellers.
Literally groundbreaking research by Dr. Giorgio Arriga enhances our understanding of the long-term evolution of seismogenic (earthquake-related) faults in the Apennines of Central Italy. Arriga's study examines the development of fault systems over millions of years and their impact on present-day seismic hazards. His research included an investigation of faults in the L'Aquila Basin, a region severely affected by a major earthquake in 2009 that claimed over 300 lives, and led to a significant discovery.
Las Vegas locals began a project in the 1990s to protect a geological marvel at the edge of town. They made educational signs and were joined by politicians including the late Sen. Harry Reid and then-Interior Secretary Bruce Babbitt, but the area was vandalized soon after.
Extreme weather events are becoming more frequent as a result of climate change. River floods such as those along the Ahr and Meuse valleys in 2021, the Central European floods of last September, and the recent floods in Valencia, Spain, are caused by so-called cut-off lows. The Wegener Center at the University of Graz has now investigated, for the first time, how these storms could change as the climate continues to warm.
Imagine a world filled only with daisies. Light-colored daisies reflect sunlight, cooling down the planet, while darker daisies absorb sunlight, warming it up. Together, these two types of daisies work to regulate the planet's temperature, making the world more habitable for all of them.
Author(s): M. Galletti, L. Crincoli, R. Pompili, L. Verra, F. Villa, R. Demitra, A. Biagioni, A. Zigler, and M. Ferrario
We describe the generation of plasma filaments for application in plasma-based particle accelerators. The complete characterization of a plasma filament generated by a low-energy self-guided femtosecond laser pulse is studied experimentally and theoretically in a low-pressure nitrogen gas environmen…
[Phys. Rev. E 111, 025202] Published Tue Feb 18, 2025
Author(s): Jihoon Kim, Roopendra Rajawat, Tianhong Wang, and Gennady Shvets
We demonstrate an ion acceleration scheme capable of simultaneously focusing and accelerating a multispecies ion beam with monoenergetic spectra to a few micron radius. The focal length and ion mean energy can be independently controlled: the former by using a different front-surface shape and the l…
[Phys. Rev. E 111, 025203] Published Tue Feb 18, 2025
Abstract
The gravimetry measurements from the Gravity Recovery and Climate Experiment (GRACE) and its follow-on (GRACE-FO) mission provide an essential way to monitor changes in ocean bottom pressure (\(p_b\)), which is a critical variable in understanding ocean circulation. However, the coarse spatial resolution of the GRACE(-FO) fields blurs important spatial details, such as \(p_b\) gradients. In this study, we employ a self-supervised deep learning algorithm to downscale global monthly \(p_b\) anomalies derived from GRACE(-FO) observations to an equal-angle 0.25\(^{\circ}\) grid in the absence of high-resolution ground truth. The optimization process is realized by constraining the outputs to follow the large-scale mass conservation contained in the gravity field estimates while learning the spatial details from two ocean reanalysis products. The downscaled product agrees with GRACE(-FO) solutions over large ocean basins at the millimeter level in terms of equivalent water height and shows signs of outperforming them when evaluating short spatial scale variability. In particular, the downscaled \(p_b\) product has more realistic signal content near the coast and exhibits better agreement with tide gauge measurements at around 80% of 465 globally distributed stations. Our method presents a novel way of combining the advantages of satellite measurements and ocean models at the product level, with potential downstream applications for studies of the large-scale ocean circulation, coastal sea level variability, and changes in global geodetic parameters.
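A minimal sketch of the kind of self-supervised objective described above, assuming a simple convolutional downscaling network, block averaging as the coarse-graining operator, and an unweighted sum of the two loss terms; the actual network, coarse-graining operator, and weighting used in the study are not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Downscaler(nn.Module):
    """Tiny CNN mapping a coarse p_b field (upsampled to the fine grid)
    to a fine-resolution p_b field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def self_supervised_loss(fine_pred, coarse_obs, reanalysis_fine, scale=4):
    # 1) Large-scale constraint: block-averaging the prediction should
    #    reproduce the coarse GRACE(-FO)-like field (mass conservation proxy).
    conservation = F.mse_loss(F.avg_pool2d(fine_pred, scale), coarse_obs)
    # 2) Spatial-detail term: the sub-grid anomalies of the prediction
    #    should resemble those of an ocean reanalysis product.
    def detail(x):
        return x - F.interpolate(F.avg_pool2d(x, scale),
                                 scale_factor=scale, mode="nearest")
    detail_term = F.mse_loss(detail(fine_pred), detail(reanalysis_fine))
    return conservation + detail_term

# Synthetic example: 4x downscaling on a small grid (analogous to coarse
# gravity fields being refined toward a 0.25-degree grid).
model = Downscaler()
coarse_obs = torch.randn(1, 1, 45, 90)                       # coarse field
coarse_up = F.interpolate(coarse_obs, scale_factor=4,
                          mode="bilinear", align_corners=False)
reanalysis = coarse_up + 0.1 * torch.randn(1, 1, 180, 360)   # fine "reanalysis"
loss = self_supervised_loss(model(coarse_up), coarse_obs, reanalysis)
loss.backward()
print("loss:", float(loss))
```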
Abstract
Data-driven technologies have shown promising potential for improving GNSS positioning, since they can analyze observation data to learn the complex hidden characteristics of system models without rigorous prior assumptions. However, in complex urban areas, the input observation data contain task-irrelevant noisy GNSS measurements arising from stochastic effects such as signal reflections from tall buildings. Moreover, the data distribution can shift between the training and testing phases in dynamically changing environments. These problems limit the robustness and generalizability of data-driven GNSS positioning methods in urban areas. In this paper, a novel deep reinforcement learning (DRL) method is proposed to improve the robustness and generalizability of data-driven GNSS positioning. Specifically, to address the data distribution shift in dynamically changing environments, the robust Bellman operator (RBO) is incorporated into the DRL optimization to model deviations in the data distribution and to enhance generalizability. To improve robustness against task-irrelevant noisy GNSS measurements, long-term reward sequence prediction (LRSP) is adopted to learn robust representations by extracting task-relevant information from GNSS observations. We thereby develop a DRL method with robust augmented reward sequence prediction to correct the rough position solved by model-based methods. Moreover, a novel real-world GNSS positioning dataset is built, containing different scenes in urban areas. Our experiments were conducted on the public dataset Google smartphone decimeter challenge 2022 (GSDC2022) and the newly built dataset Guangzhou GNSS version 2 (GZGNSS-V2), and demonstrate that the proposed method outperforms model-based and state-of-the-art data-driven methods in terms of generalizability across different environments.
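The abstract does not detail its specific robust Bellman operator or reward-sequence prediction modules. As a generic, hypothetical illustration of a robust Bellman backup, the tabular sketch below uses an R-contamination-style uncertainty set, in which the bootstrap value mixes the observed next state with the worst-case state value; the environment, mixing level, and learning parameters are all made up.

```python
import numpy as np

def robust_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95, rho=0.1):
    """One tabular Q-learning step with a robust (pessimistic) Bellman target.

    rho is the assumed contamination level: with probability rho the transition
    is treated as adversarial, so the target mixes the observed next-state
    value with the worst state value in the table.
    """
    nominal = Q[s_next].max()
    worst = Q.max(axis=1).min()
    target = r + gamma * ((1.0 - rho) * nominal + rho * worst)
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Tiny chain example: 5 states, 2 actions (left/right), reward at the last state.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
for episode in range(2000):
    s = 0
    for _ in range(20):
        a = rng.integers(n_actions) if rng.random() < 0.2 else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        Q = robust_q_update(Q, s, a, r, s_next)
        s = s_next
        if s == n_states - 1:
            break
print("greedy action per state (1 = move right):", Q.argmax(axis=1))
```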
Summary
In the past decade, six Mw ≥5.5 earthquakes struck the mountainous Golden Triangle region (Laos, Thailand, Myanmar) of the southeast India-Eurasia collision zone. The largest of them, the 2019 Mw 6.2 Sainyabuli earthquake in western Laos, shook river communities, dams, and a UNESCO World Heritage Site, prompting a need to understand regional earthquake potential. We used Interferometric Synthetic Aperture Radar (InSAR) data and modelling to solve for the 2019 mainshock source parameters, revealing right-lateral strike-slip along a 24 km-long NNW-trending fault which has limited topographic expression and was previously unmapped. InSAR modelling of its largest (Mw 5.5) aftershock in 2021 revealed a 7 km-long splay fault, also previously unrecognized. The 2022 Mw 5.9 Keng Tung earthquake in the northern Golden Triangle also ruptured a previously unknown, NW-trending right-lateral fault conjugate to longer, NE-trending faults nearby. Collectively, this shows that the region contains faults that are barely evident in global digital topography and/or obscured by vegetation, yet long enough to generate sizeable earthquakes that should be accounted for in seismic hazard assessments. We relocated well-recorded aftershocks and other background seismicity (1978–2023) from across the Golden Triangle using the mloc software. Calibrated hypocenters span focal depths of 5–24 km and are distributed away from the main InSAR-modelled fault traces, another indication of fault structural immaturity. For the three 2019–2022 InSAR-constrained events, we also obtained moment tensor solutions from regional seismic waveform inversion. InSAR-derived peak slip depths and seismological centroid depths are mostly shallow (3–5 km), while focal depths are generally located in areas of low coseismic slip near the bottom of the InSAR model faults. More broadly, we estimate a regional seismogenic thickness of ∼17 km (the 90% seismicity cut-off depth), a crucial parameter for seismic hazard calculations and building codes. Our integration of remote-sensing and seismological analyses could serve as a blueprint for assessing the earthquake potential of other regions with sparse instrumentation and limited topographic fault expression.
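For illustration only (with made-up focal depths, and assuming the seismogenic thickness is taken as the depth above which 90% of well-located hypocentres occur), the 90% seismicity cut-off depth reduces to a simple percentile calculation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical calibrated focal depths (km), for illustration only.
focal_depths_km = rng.uniform(5.0, 24.0, size=300)

# Seismogenic thickness defined as the 90% seismicity cut-off depth,
# i.e. the depth above which 90% of hypocentres are located.
d90 = np.percentile(focal_depths_km, 90)
print(f"90% seismicity cut-off depth: {d90:.1f} km")
```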
Nature Geoscience, Published online: 18 February 2025; doi:10.1038/s41561-025-01648-w
Global sea-level rise during Meltwater Pulse 1A followed sequential ice loss from the Laurentide, Eurasian and then West Antarctic ice sheets, according to a fingerprinting approach.
Abstract
Many components of the Earth system feature self-reinforcing feedback processes that can potentially scale up a small initial change to a fundamental state change of the underlying system, sometimes in an abrupt or irreversible manner beyond a critical threshold. Such tipping points can be found across a wide range of spatial and temporal scales and are expressed in very different observable variables. For example, early-warning signals of approaching critical transitions may manifest in localised spatial pattern formation of vegetation within years, as observed for the Amazon rainforest. In contrast, the susceptibility of ice sheets to tipping dynamics can unfold at basin to sub-continental scales, over centuries to even millennia. Accordingly, to improve the understanding of the underlying processes, to capture present-day system states and to monitor early-warning signals, tipping point science relies on diverse data products. To that end, Earth observation has proven indispensable as it provides a broad range of data products with varying spatio-temporal scales and resolutions. Here we review the observable characteristics of selected potential climate tipping systems associated with the multiple stages of a tipping process. These include i) gaining system and process understanding, ii) detecting early-warning signals for resilience loss when approaching potential tipping points and iii) monitoring progressing tipping dynamics across scales in space and time. By assessing how well the observational requirements are met by the Essential Climate Variables (ECVs) defined by the Global Climate Observing System (GCOS), we identify gaps in the portfolio and what is needed to better characterise potential candidate tipping elements. Gaps have been identified for the Amazon forest system (vegetation water content), permafrost (ground subsidence), the Atlantic Meridional Overturning Circulation, AMOC (section mass, heat and freshwater transports and freshwater input from ice sheet edges) and ice sheets (e.g. surface melt). For many of the ECVs, issues in specifications have been identified. Of main concern are spatial resolution and missing variables, calling for an update of the ECVs or a separate, dedicated catalogue of tipping variables.
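As a small illustration of the kind of early-warning analysis referred to above (a standard critical-slowing-down diagnostic on a synthetic series, not an analysis from the review itself), rising lag-1 autocorrelation and variance in sliding windows of an observed variable are commonly used indicators of resilience loss:

```python
import numpy as np

def sliding_ews(series, window):
    """Lag-1 autocorrelation and variance in sliding windows,
    two standard early-warning indicators of critical slowing down."""
    ac1, var = [], []
    for i in range(len(series) - window):
        w = series[i:i + window] - series[i:i + window].mean()
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
        var.append(w.var())
    return np.array(ac1), np.array(var)

# Synthetic example: an AR(1) process whose memory slowly increases,
# mimicking a system approaching a tipping point.
rng = np.random.default_rng(0)
n = 2000
phi = np.linspace(0.2, 0.95, n)        # slowly increasing autocorrelation
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal(scale=0.1)

ac1, var = sliding_ews(x, window=200)
print("lag-1 autocorrelation, start vs end:", ac1[0].round(2), ac1[-1].round(2))
print("variance, start vs end:            ", var[0].round(4), var[-1].round(4))
```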
Abstract
The Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission, launched by the European Space Agency, provided high-quality gravitational gradient data with near-global coverage, excluding the polar regions. These data have been instrumental in regional gravity field modelling through various methods. One approach involves a mathematical model based on Fredholm's integral equation of the first kind, which relates surface gravity anomalies to satellite gradient data. Solving this equation requires discretising a surface integral and applying regularisation techniques to stabilise the numerical solution of the resulting system of linear equations. This study examines four methods for modifying the system of linear equations derived by discretising the Fredholm integral equation: direct inversion, remove-compute-restore, truncation reduction of the integral formula, and inversion of a modified integral, all used to estimate surface gravity anomalies from satellite gradient data over a test area in Central Europe. Since the system of linear equations is ill-conditioned, Tikhonov regularisation is applied to stabilise its numerical solution. To assess the precision and reliability of the estimated gravity anomalies, the study introduces mathematical models for estimating biased and de-biased noise variance–covariance matrices of the estimated surface gravity anomalies. The results indicate that the signal-to-noise ratio of the reduced satellite gradient data in the remove-compute-restore method is smaller than in the other methods considered, necessitating stronger stabilisation of the model to recover surface gravity anomalies. This, in turn, leads to more optimistic uncertainty propagation than in the other methods.
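As a generic numerical illustration of the Tikhonov step mentioned above (the discretised kernel, noise level, and regularisation parameters here are placeholders, not those of the study), the ill-conditioned linear system obtained from discretising a Fredholm integral equation of the first kind can be stabilised as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretise a smoothing (first-kind Fredholm) kernel on a 1D grid:
# b = A x + noise, with A strongly smoothing and hence ill-conditioned.
n = 200
s = np.linspace(0.0, 1.0, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_true = np.sin(2 * np.pi * s) + 0.5 * np.sin(7 * np.pi * s)
b = A @ x_true + 1e-3 * rng.normal(size=n)

print("condition number of A:", np.linalg.cond(A))

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)

for lam in (1e-6, 1e-3, 1e-1):
    x_hat = tikhonov(A, b, lam)
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"lambda = {lam:g}: relative error = {err:.3f}")
```

Too little regularisation amplifies the data noise, while too much oversmooths the solution; a stronger stabilisation also feeds through to more optimistic formal error estimates, which is the trade-off the abstract highlights for the remove-compute-restore variant.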