Tue, 05/13/2025 - 00:00
Summary: Teleseismic P-wave coda autocorrelation has been increasingly applied to subsurface structure characterization, given its potential to infer velocities. However, the inversion of coda autocorrelation data has not been extensively investigated with regard to data processing (stacking and move-out correction), inversion approaches (Monte Carlo or metaheuristic), model parameterization, and applicability. Here, we propose an inversion method for teleseismic P-wave coda autocorrelation based on particle swarm optimization, together with a treatment of uncertainty. The method exploits the arrival-time information of reflected (or converted) waves contained in the binned stack waveforms and demonstrates promising model adaptability and robustness. Synthetic data tests show that the method accurately inverts for a variety of geological models without prior information, such as the number of crustal layers, the presence of surface sedimentary layers, or low-velocity zones within the crust. The method was successfully applied to the QSPA station near the South Pole, revealing an ice-sheet thickness of approximately 2900 m, with a 340 m thick low shear-wave velocity ice layer at the base, likely containing up to 15% water. Beneath the ice sheet, we infer a 400 m thick subglacial sediment layer. The uncertainties in the thickness of the low shear-wave velocity ice and of the sedimentary layer are 150 m and 10 m, respectively. These findings, and the potential of the proposed method, open new directions for glacier dynamics research in the region. We also apply the method to the BOSA station near Kimberley, South Africa, where it confirms clear Moho and intracrustal interfaces, consistent with receiver-function and deep seismic reflection results. This study improves the inversion algorithm for teleseismic P-wave coda autocorrelation and expands its range of application.
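Below is a minimal particle swarm optimization sketch in the spirit of the inversion described above. The two-parameter model (layer thickness and shear velocity) and the misfit function are hypothetical placeholders standing in for the authors' binned-stack forward problem.

```python
# Minimal particle swarm optimization (PSO) sketch for a layered-model inversion.
# The misfit function and two-parameter model (thickness, Vs) are placeholders,
# not the authors' coda-autocorrelation forward model.
import numpy as np

rng = np.random.default_rng(0)

def misfit(model):
    # Placeholder: distance to an assumed "true" model (thickness in km, Vs in km/s).
    true = np.array([2.9, 1.9])
    return np.sum((model - true) ** 2)

n_particles, n_iter = 30, 200
lo, hi = np.array([0.5, 1.0]), np.array([5.0, 4.0])   # search bounds
x = rng.uniform(lo, hi, size=(n_particles, 2))        # particle positions
v = np.zeros_like(x)                                   # particle velocities
p_best = x.copy()
p_val = np.array([misfit(m) for m in x])
g_best = p_best[p_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                              # inertia, cognitive, social weights
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = np.clip(x + v, lo, hi)
    val = np.array([misfit(m) for m in x])
    better = val < p_val
    p_best[better], p_val[better] = x[better], val[better]
    g_best = p_best[p_val.argmin()].copy()

print("best model (thickness km, Vs km/s):", g_best)
```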
Tue, 05/13/2025 - 00:00
Summary: Geophysical simulations of complex subsurface structures and material distributions require the solution of partial differential equations by numerical methods. This complexity often makes the simulations computationally very costly, especially for electromagnetic (EM) and seismic methods. When such simulations are used in parameter estimation or inversion studies, the cost severely limits the number of runs that are affordable, yet structured model analysis methods, such as global sensitivity analyses or inversions, often require thousands to millions of forward simulations. To address this challenge, we propose a physics-based machine learning method, the non-intrusive reduced basis method, to construct low-dimensional surrogate models that greatly reduce the computational cost of the numerical forward model while preserving the underlying physics. We demonstrate the effectiveness and benefits of the surrogate models using broadband magnetotelluric (MT) responses of a 2-D model that mimics a conceptual volcano-hosted geothermal system. Besides being the first such application, we also show how the reduced basis method can be adapted to treat complex-valued variables consistently, an aspect that has been overlooked in previous studies. Reducing the computation time by several orders of magnitude through the surrogate further enables a global sensitivity analysis for MT applications; despite the additional insight it provides, such an analysis has normally been deemed infeasible because of the high computational burden. The methods developed here are presented in a generalized form, making the approach feasible for other electromagnetic techniques with a low-dimensional parameter space.
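A sketch of how a non-intrusive, POD-based reduced basis can be built from complex-valued responses by stacking real and imaginary parts, one possible way to treat complex variables consistently; the synthetic snapshot generator below stands in for the expensive EM solver and is not the authors' model.

```python
# Proper-orthogonal-decomposition (POD) reduced basis from complex-valued snapshots.
# Real and imaginary parts are stacked so the basis stays real and both parts share it.
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_snapshots = 64, 40

def expensive_forward(param):
    # Placeholder for a costly EM simulation returning a complex frequency response.
    f = np.linspace(0.01, 10.0, n_freq)
    return param[0] / (1.0 + 1j * f * param[1])

params = rng.uniform(0.5, 2.0, size=(n_snapshots, 2))
snapshots = np.array([expensive_forward(p) for p in params])       # complex, (n_snapshots, n_freq)

S = np.hstack([snapshots.real, snapshots.imag]).T                  # (2*n_freq, n_snapshots)
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999) + 1    # retained modes
basis = U[:, :r]

# Project a new response onto the reduced basis and check the reconstruction error.
new = expensive_forward(np.array([1.3, 0.8]))
vec = np.concatenate([new.real, new.imag])
recon = basis @ (basis.T @ vec)
print("retained modes:", r, " relative error:", np.linalg.norm(recon - vec) / np.linalg.norm(vec))
```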
Tue, 05/13/2025 - 00:00
Summary: The triple junction between the North American, Caribbean, and Cocos plates at the Guatemala-Mexico border is not well understood. It forms a broad region extending from around the active Tacaná volcano to the Guatemala City graben. Tacaná is the westernmost active volcano of the Central American volcanic arc and is located at the intersection of four major active faults: the Polochic, Motagua, Jalpatagua, and Tonalá faults. Using seismicity around the Tacaná volcano, we show that there is moderate to low tectonic seismic activity between the Guatemala City graben and the Tacaná volcano, possibly related to the ancient terminations of the Motagua and Jalpatagua faults. We therefore speculate that the triple junction lies onshore, around the Tacaná volcano. We located earthquakes around the Tacaná volcano between January 2017 and October 2018, a period that includes the large Mw 8.2 Tehuantepec (Chiapas) earthquake of 8 September 2017, located ∼190 km away. We identified four distinct types of seismicity, interpreted as having tectonic, hydrothermal, intermediate-depth magmatic, and deep magmatic origins. The tectonic seismicity occurred at depths between ∼5 km and ∼30 km b.s.l. and may be associated with three faults around the Tacaná volcanic complex: one oriented NE-SW, aligned with the four Tacaná volcanic edifices; one oriented NW-SE, consistent with the Jalpatagua fault; and one oriented approximately E-W, corresponding to the Motagua fault. The hydrothermal seismicity is observed at shallow depths, from the subsurface to about 2 km b.s.l., predominantly in the western sector of the Tacaná summit and partially beneath the San Antonio volcano, an area known for intense hydrothermal activity. This seismicity is spatially related to the shallow portions of the same three faults. The intermediate-depth magmatic seismicity is detected at depths between 5 and 12 km b.s.l. and is interpreted as related to a shallow magma chamber beneath the Tacaná volcanic complex. Finally, the deep magmatic seismicity is located in the eastern part of Tacaná, at depths ranging from 15 km to about 22 km b.s.l., and is interpreted as a vertical dike intrusion connecting a deep magma reservoir, located between 30 km and 40 km depth, to the hypothesized shallower magma chamber associated with the intermediate-depth seismicity.
Tue, 05/13/2025 - 00:00
Summary: Advance detection is a pivotal technology in tunnel construction, designed to invert the velocity distribution of geological formations precisely, thereby ensuring both construction safety and operational efficiency. The unscented hybrid simulated annealing (UHSA) algorithm is a global optimization technique that has been successfully applied to the advance detection of tunnels. However, UHSA exhibits a slow convergence rate under complex geological conditions and is prone to becoming trapped in local optima. To address this issue, we propose an improved method based on the whale optimization algorithm (WOA), referred to as MEWOA. MEWOA incorporates a population-initialization method based on the tent map and reverse learning, and integrates a nonlinear convergence factor, a frequency-fluctuation mechanism, and the low-soaring strategy of the red-tailed hawk algorithm (RTH). These enhancements notably improve the accuracy and convergence speed of the algorithm. We conducted a qualitative analysis of the enhancement mechanisms in MEWOA using two functions and performed quantitative experiments on four tunnel models. The experimental results demonstrate that MEWOA outperforms UHSA, achieving higher accuracy in inversion problems involving multiple velocity models.
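The sketch below illustrates two of the ingredients named above, tent-map population initialization combined with reverse (opposition-based) learning; the bounds, objective, and population size are illustrative, and the full MEWOA update rules are not reproduced.

```python
# Chaotic (tent-map) population initialization with reverse/opposition-based learning,
# two ingredients of the MEWOA initialization described above. Values are illustrative.
import numpy as np

def tent_map_sequence(n, x0=0.37, a=0.7):
    """Generate n values of the tent chaotic map on (0, 1)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = x / a if x < a else (1.0 - x) / (1.0 - a)
        seq[i] = x
    return seq

def init_population(pop_size, lo, hi, fitness):
    dim = lo.size
    chaos = tent_map_sequence(pop_size * dim).reshape(pop_size, dim)
    pop = lo + chaos * (hi - lo)                 # chaotic candidates
    opposite = lo + hi - pop                     # opposition-based (reverse-learning) candidates
    both = np.vstack([pop, opposite])
    scores = np.array([fitness(x) for x in both])
    keep = np.argsort(scores)[:pop_size]         # keep the best half as the initial population
    return both[keep]

# Example: initialize a population for a simple sphere objective in a 3-D search space.
lo, hi = np.array([1.0, 1.0, 1.0]), np.array([6.0, 6.0, 6.0])
pop = init_population(20, lo, hi, fitness=lambda x: np.sum((x - 3.0) ** 2))
print(pop.shape)
```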
Sat, 05/10/2025 - 00:00
Summary: As probabilistic tsunami hazard analysis (PTHA) focuses increasingly on assessments for localized, populous regions, techniques are needed to identify a subsample of representative earthquake ruptures so that the computational requirements for producing high-resolution hazard maps remain tractable. Moreover, the greatest epistemic uncertainty in seismic PTHA relates to source characterization, which is often poorly defined and subjective. We address these two salient issues by feeding streamlined earthquake rupture forecasts (ERFs), based on combinatorial optimization methods, into an unsupervised machine learning workflow that identifies representative ruptures. ERFs determine the optimal distribution of a millennia-scale sample of earthquakes by inverting the observed slip rate on major faults. We use two previously developed combinatorial optimization ERFs, integer programming and greedy sequential, to produce the optimal locations of ruptures with seismic moments sampled from a regional Gutenberg-Richter magnitude-frequency distribution. These ruptures are in turn used to calculate peak nearshore tsunami amplitude using computationally efficient tsunami Green's functions. An unsupervised machine learning workflow is then used to identify a small sub-sample of the earthquakes input to the ERFs for onshore PTHA analysis. We thereby eliminate the epistemic uncertainty related to source distribution in traditional PTHA; in its place, a quantifiable, less subjective, and generally smaller uncertainty related to the input to the ERFs is included. The Nankai subduction zone, where previous ERFs have been conducted, is used as a test case. Results indicate that the locations of representative earthquakes are sensitive to the choice of magnitude-area relation and to whether a minimum cumulative stress objective is imposed on the fault. In general, incorporating ERFs into PTHA provides a physically self-consistent way to use fault slip information in determining representative earthquakes for onshore PTHA, eliminating a major source of epistemic uncertainty.
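One plausible realization of the unsupervised selection step, clustering nearshore amplitude vectors with k-means and keeping the rupture closest to each cluster centroid, is sketched below; the amplitude values are random placeholders, and the authors' actual features and algorithm may differ.

```python
# Select representative ruptures by clustering nearshore tsunami amplitude vectors.
# Amplitudes are random placeholders standing in for Green's-function results.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_ruptures, n_coastal_sites, n_representative = 500, 40, 12

# Rows: candidate ruptures from the earthquake rupture forecast.
# Columns: peak nearshore amplitude at each coastal site (placeholder values).
amplitudes = rng.lognormal(mean=0.0, sigma=0.5, size=(n_ruptures, n_coastal_sites))

km = KMeans(n_clusters=n_representative, n_init=10, random_state=0).fit(amplitudes)

# For each cluster, keep the actual rupture closest to the centroid, so the
# sub-sample consists of real scenarios rather than synthetic averages.
representatives = []
for k in range(n_representative):
    members = np.flatnonzero(km.labels_ == k)
    d = np.linalg.norm(amplitudes[members] - km.cluster_centers_[k], axis=1)
    representatives.append(members[d.argmin()])

print("representative rupture indices:", sorted(representatives))
```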
Sat, 05/10/2025 - 00:00
Summary: Current efforts to correctly separate natural events from suspected explosion sources, using data collected by ground- or space-based sensors, face long-standing challenges that remain unaddressed by the Event Categorization Matrix (ECM) model. Smaller historical events (lower-yield explosions) may have data available from fewer measurement techniques than are available today, so a historical event record can lack a complete set of discriminants. The covariance structures can also differ between observations of different event (source-type) categories. Both obstacles are problematic for the classic Event Categorization Matrix model. Our work addresses this gap and presents a Bayesian update to the previous Event Categorization Matrix model, termed the Bayesian Event Categorization Matrix model, which can be trained on partial observations and does not rely on a pooled covariance structure. We further augment the model with Bayesian decision theory so that the false-negative or false-positive rate of an event categorization can be reduced in an intuitive manner. To demonstrate the improved categorization rates of the Bayesian Event Categorization Matrix model, we compare an array of Bayesian and classic models with multiple performance metrics in Monte Carlo experiments, using both synthetic and real data. Our Bayesian models show consistent gains in overall accuracy and lower false-negative rates relative to the classic Event Categorization Matrix model. We propose future avenues for improving the decision making and predictive capability of Bayesian Event Categorization Matrix models.
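The sketch below illustrates how a Bayesian decision rule with asymmetric misclassification costs shifts the categorization threshold; the posterior probabilities and cost values are illustrative, not taken from the study.

```python
# Bayesian decision rule with asymmetric misclassification costs, illustrating how
# the false-negative rate of an "explosion vs. natural event" categorization can be
# traded against the false-positive rate. Costs and posteriors are illustrative.
import numpy as np

def categorize(p_explosion, cost_false_negative=10.0, cost_false_positive=1.0):
    """Choose the category that minimizes the posterior expected loss.

    p_explosion : posterior probability that the event is an explosion
                  (e.g. from a Bayesian event-categorization model).
    """
    loss_if_call_natural = cost_false_negative * p_explosion
    loss_if_call_explosion = cost_false_positive * (1.0 - p_explosion)
    return "explosion" if loss_if_call_explosion < loss_if_call_natural else "natural"

# With a 10:1 cost ratio the decision threshold drops from 0.5 to about 0.09.
for p in [0.05, 0.10, 0.30, 0.70]:
    print(p, "->", categorize(p))
```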
Fri, 05/09/2025 - 00:00
Summary: The response of porous rocks to fluid flow and external loads is critical to a wide range of geophysical and geotechnical problems and is described by the long-established theory of poroelasticity. Despite the wealth of literature on the subject, little attention has been devoted to the state of stress and pore pressure regimes within porous media under the action of gravity. Here, we present analytical solutions for the effective stress and pore pressure in gravitationally loaded porous rocks, in both drained and undrained conditions, and compare them with results from finite element numerical models. We also apply our models to a ground deformation episode observed at the Campi Flegrei caldera, Italy. We find that the numerical results accurately reproduce our analytical solutions and show that accounting for gravity-induced stress and pore pressure regimes is critical for modelling stress and deformation in poroelastic media accurately. In particular, we highlight how failing to assign realistic initial conditions, or neglecting gravity altogether, may lead to misleading results and interpretations of geophysical observables such as ground deformation and rock failure.
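For context, the textbook relations below give the kind of gravity-controlled initial state the study is concerned with (lithostatic stress, hydrostatic pore pressure in the drained limit, and Biot effective stress); they are not the paper's full analytical solutions.

```latex
% Textbook relations underlying gravity-loaded initial conditions in a poroelastic
% half-space (not the paper's full analytical solutions): lithostatic total stress,
% hydrostatic pore pressure (drained limit), and the Biot effective stress that
% governs deformation and failure.
\begin{align}
  \sigma_{zz}(z) &= \rho_b \, g \, z, &
  p(z) &= \rho_f \, g \, z, &
  \sigma'_{ij} &= \sigma_{ij} - \alpha \, p \, \delta_{ij},
\end{align}
% with z depth (positive downward, compression positive), \rho_b bulk density,
% \rho_f fluid density, g gravitational acceleration, and \alpha the Biot-Willis
% coefficient.
```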
Fri, 05/09/2025 - 00:00
Summary: Precursory spurious arrivals, commonly observed in ambient noise correlations, are generated by near-field noise sources. We have developed an inversion method to estimate the noise source distribution from these precursory waves. Applying the method to the BASIN experiment in Los Angeles reveals that the noise sources form patterns coherent with features such as faults and structural boundaries. Our spectral analysis indicates that the noise-source energy generating the precursory signal is concentrated at higher frequencies, suggesting a shallow source (< 200 m). We conclude that near-field noise is primarily produced by scattering from geological structures with significant velocity contrasts, such as faults, at shallow depths. This method offers a new way to map faults using ambient noise correlations.
Fri, 05/09/2025 - 00:00
Summary: There are currently few studies on the distribution of background seismic noise power and its sources in Italy. In this study, the seismic noise recorded by 233 broadband (BB) stations of the Italian Seismic Network (ISN), operating continuously between 2015 and 2018, was investigated. Starting from the average power spectral density (PSD) calculated for each selected station, the seismic noise power was analysed in four frequency bands between 0.025 and 30 Hz. Using the inverse distance weighted (IDW) interpolation method, the background noise distribution across the entire Italian territory was mapped, producing noise interpolation maps for each of the four frequency bands. Regional seismic noise models and local anomalies for each frequency band were then obtained by applying 2-D moving-average spatial filtering. In addition, a first linear regression analysis was performed to explore possible relationships between seismic noise power and geographical parameters (site elevation and minimum station-coast distance), as well as the role of meteorological parameters such as rainfall. The large dataset obtained allowed the main characteristics and sources of seismic noise at all sites to be assessed. The resulting noise-level distribution across the Italian territory allows, in particular, the study of the main noise sources related to natural and anthropogenic ambient vibrations. To strengthen the analysis, the seismic noise maps were compared with the completeness magnitude map, which confirmed the effectiveness of the national seismic network.
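A minimal inverse distance weighted interpolation sketch is given below; station coordinates and noise powers are random placeholders for the real PSD values.

```python
# Inverse-distance-weighted (IDW) interpolation of station-averaged noise power
# onto a regular grid, the mapping scheme named in the study above.
import numpy as np

def idw(xy_stations, values, xy_grid, power=2.0, eps=1e-12):
    """Interpolate scattered values at xy_stations onto xy_grid points via IDW."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_stations[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
stations = rng.uniform(0, 100, size=(233, 2))        # placeholder projected coordinates (km)
noise_db = rng.uniform(-160, -120, size=233)         # placeholder PSD (dB) for one band

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
noise_map = idw(stations, noise_db, grid).reshape(gx.shape)
print(noise_map.shape, noise_map.min(), noise_map.max())
```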
Thu, 05/08/2025 - 00:00
Summary: The Seoul Mega City (SMC), one of the most densely populated regions in the world, has experienced an increase in seismic activity, raising concerns about concealed faults that could trigger clustered earthquakes in the Seongbuk area (SB), located in the central part of the SMC. This study aims to identify such a fault through interpretation of the terrestrially measured gravity field. By employing dip-curvature analysis and the first vertical derivative of the gravity field, supported by S-wave velocity models, reflection seismic profiles, geological data, and borehole data, we provide compelling evidence for a deeply buried N-S-trending fault (DF0) associated with the Dongducheon Fault system. This fault is likely responsible for the clustered seismic activity observed in the SB. The confirmation of DF0 highlights the need for further geophysical investigations to better understand seismic risk in the SMC, which has a history of significant seismic events.
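The wavenumber-domain first vertical derivative used for edge enhancement can be sketched as below; the input grid is a synthetic placeholder, not the Seoul gravity data.

```python
# First vertical derivative of a gridded gravity anomaly computed in the wavenumber
# domain (multiplication of the spectrum by the radial wavenumber |k|), a standard
# edge-enhancement step like the one named above. The input grid is synthetic.
import numpy as np

def first_vertical_derivative(grid, dx, dy):
    """Return d(grid)/dz for a regularly gridded potential-field anomaly."""
    ny, nx = grid.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX**2 + KY**2)               # radial wavenumber
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * k))

# Synthetic anomaly: a buried-contact-like step tapered by a Gaussian.
x = np.linspace(-5, 5, 256)
X, Y = np.meshgrid(x, x)
anomaly = np.tanh(X) * np.exp(-0.05 * Y**2)
fvd = first_vertical_derivative(anomaly, dx=x[1] - x[0], dy=x[1] - x[0])
print(fvd.shape)
```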
Tue, 05/06/2025 - 00:00
Summary: The Lamb problem is a classic issue in theoretical seismology, aimed at obtaining the Green's functions of point sources in elastic half-spaces. It forms the basis for studying vibrational signals from many sources, such as walking and driving, and therefore carries significant theoretical and practical value. While analytical solutions exist for the Lamb problem when both excitation and reception occur at the ground surface, the presence of singularities makes the numerically stable evaluation of the Green's functions from these analytical solutions a challenge. In this study, we propose a stable algorithm that circumvents the time singularity in the analytical solutions of the Lamb problem by introducing a tiny perturbation of the time parameter and judiciously selecting the starting position of the discrete time sampling. This means that the zero time point (i.e. the excitation time of the source pulse) and the starting time of the discrete sampling may not coincide. The advantages of this method lie in its stability, simplicity, and practical accuracy, with results that are consistent with the theoretical geometric decay of surface waves. Analysis of field data further demonstrates that the stable algorithm effectively captures the amplitude characteristics of measured footstep responses and vehicle signals. Building on these stable discrete solutions, we also describe how the starting point of the discrete sampling can be brought arbitrarily close to the actual zero time point, and show that the tiny time perturbation does not affect the simulation results.
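The sampling idea can be illustrated as below: shift the origin of the discrete time axis by a tiny, non-commensurate offset so that no sample coincides with the singular Rayleigh-wave arrival time; all numerical values are illustrative.

```python
# Illustration of the time-sampling idea: offset the discrete time axis by a tiny
# perturbation so that no sample falls exactly on the Rayleigh-wave arrival
# t_R = r / c_R, where the closed-form Lamb solution is singular. Values are illustrative.
import numpy as np

r = 10.0          # source-receiver offset (m), illustrative
c_R = 90.0        # Rayleigh-wave speed (m/s), illustrative
t_R = r / c_R     # singular time of the surface-wave arrival

dt = 1.0e-3       # sampling interval (s)
eps = dt / 7.0    # tiny, non-commensurate perturbation of the sampling origin

# Start sampling at eps instead of 0 so t_R never coincides with a sample.
t = eps + dt * np.arange(2048)
closest = np.abs(t - t_R).min()
print(f"minimum distance of any sample to the singular time: {closest:.3e} s")
```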
Mon, 05/05/2025 - 00:00
Summary: The cross-correlations of ambient noise recordings, known as noise correlation functions (NCFs), can converge to the Green's functions (GFs) that describe wave propagation between pairs of stations. However, NCFs are often biased from the true GFs by random noise and by spurious arrivals arising from non-diffuse wavefields. Additionally, the limited spatial and temporal coverage of recording stations can leave large data gaps in the retrieved virtual shot gathers, particularly at large inter-station distances (far offsets). Both factors pose great challenges for retrieving high-quality NCFs and conducting reliable subsurface imaging. In this study, we propose a multi-dimensional (4-D) reconstruction method that compensates for insufficient station coverage and simultaneously attenuates incoherent noise in the NCFs. We test the feasibility of the proposed method using a dense seismic array deployed in western Canada. Our results demonstrate that the reconstructed virtual common-midpoint gathers (VCMGs) greatly improve the stability and reliability of surface-wave dispersion measurements and the subsequent shear-velocity inversions compared with conventional approaches. The proposed ambient noise processing framework enables us to construct an accurate 3-D velocity model of the subsurface.
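The basic operation producing NCFs, a frequency-domain cross-correlation with simple spectral whitening, is sketched below with placeholder traces; the study's 4-D reconstruction step is not reproduced.

```python
# Frequency-domain noise cross-correlation between two station records with simple
# spectral whitening, the basic operation that produces NCFs. Traces are placeholders.
import numpy as np

rng = np.random.default_rng(5)
fs, n = 100.0, 36000                                             # 100 Hz, 6-minute traces
trace_a = rng.standard_normal(n)
trace_b = np.roll(trace_a, 150) + 0.5 * rng.standard_normal(n)   # delayed, noisy copy

def ncf(a, b, eps=1e-10):
    """Whitened (circular) cross-correlation of two equal-length traces."""
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    A /= (np.abs(A) + eps)                   # spectral whitening
    B /= (np.abs(B) + eps)
    return np.fft.irfft(np.conj(A) * B, n=len(a))

corr = ncf(trace_a, trace_b)
lag = int(np.argmax(corr))
print("peak correlation lag (samples):", lag if lag < n // 2 else lag - n)
```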
Mon, 05/05/2025 - 00:00
Summary: This paper describes a refined version of the point acceleration approach, referred to as the refined acceleration approach, which uses K-band range-acceleration observations to derive high-precision monthly gravity field solutions. To overcome shortcomings of the conventional approach, several refinements are made: (1) the inter-epoch correlated errors caused by numerical differentiation are decorrelated with a decorrelation operator; (2) the satellite velocity is expressed as a function of the satellite positions and dynamic parameters; (3) the effect of satellite position error is taken into account when building the range-acceleration observation equation; and (4) an autoregressive (AR) model is used to model the high-frequency error of the K-band range-acceleration observations. Applying the proposed approach, GRACE-FO observations spanning January 2019 to December 2022 were processed and a time series of monthly gravity field solutions, referred to as SSM-ACC-GFO, was derived. This time series is comprehensively compared with three official time series, CSR RL06, JPL RL06, and GFZ RL06, in both the spectral and spatial domains. The comparison demonstrates that SSM-ACC-GFO performs comparably with JPL RL06 and GFZ RL06, indicating that the refined acceleration approach is able to deliver high-precision monthly gravity field solutions.
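Refinement (4) can be illustrated with a least-squares fit of a low-order AR model to a residual series, as sketched below on a synthetic AR(2) signal; this is an illustration of the modelling idea, not the authors' processing chain.

```python
# Fit a low-order autoregressive (AR) model to a residual series by ordinary least
# squares, illustrating the kind of model used for high-frequency observation error.
# A synthetic AR(2) series stands in for real range-acceleration residuals.
import numpy as np

rng = np.random.default_rng(4)

# Synthetic residuals from a known AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
n = 5000
a1, a2 = 0.6, -0.3
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + 0.01 * rng.standard_normal()

def fit_ar(series, order):
    """Least-squares estimate of AR(order) coefficients."""
    X = np.column_stack([series[order - k - 1 : len(series) - k - 1] for k in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

print("recovered AR(2) coefficients:", fit_ar(x, order=2))
```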
Mon, 05/05/2025 - 00:00
Abstract: Understanding the mechanical behavior of a fractured geothermal reservoir during its operational phase, when a sustained circulation of fluid is taking place, is of crucial importance for the appraisal of this technology. This knowledge is also essential for understanding natural fault systems that exhibit fluid-induced seismicity, since the geothermal reservoir serves as a small-scale analog of these systems. Here we analyze the seismicity of a geothermal reservoir in France that has been the primary target for heat exploitation over the last eight years. Fluid circulation in the granite has been maintained along a main fractured zone, through pathways with enhanced permeability, by continuous injection of fluid from a single well. We show that the seismicity occurring during the operation of this reservoir expands progressively beyond the zone initially activated during the hydraulic stimulation. We also show that most recorded earthquakes are clustered in time within discrete bursts that activate different portions of the fault system. The migration of the events within these bursts indicates that they are likely related to aseismic transients developing on the creeping fault interfaces. This demonstrates that the intermittent seismic activity characterizing earthquake swarms can arise naturally from the complex hydro-thermo-mechanical response of a system under continuous forcing conditions.
Fri, 05/02/2025 - 00:00
Summary: The configuration of the Earth's magnetic field during the Middle Devonian (394.3–378.9 Ma) is poorly understood. The magnetic signals in Middle Devonian rocks are often overprinted during the Kiaman reverse superchron, obscuring their primary remanence. In other cases, the available palaeomagnetic data are ambiguous, conflicting with tectonic reconstructions or with dipolar geomagnetic field behaviour. Here, we study the palaeomagnetic signal of Middle Devonian pillow basalts from the Rhenish Massif in Germany. Our rock-magnetic experiments show that the pillow basalts can store and retain magnetisations over time. However, the pillow basalts have a rather low initial natural remanent magnetisation (NRM), which is not expected given their magnetite content. The palaeomagnetic directions determined from alternating-field demagnetisation, thermal demagnetisation, and a combination of both fail to cluster around a common mean. Great-circle analyses of these directions reveal traces of both Kiaman and present-day field overprints. Our palaeointensity measurements have a very low success rate of < 2 per cent, with only one sample yielding a result, of 5.9 µT. This low intensity might explain the low initial NRM of the samples and the lack of interpretable directional data in this study. However, given the very low success rate, this result does not convincingly represent the palaeointensity of the Middle Devonian field. Altogether, the lack of signal in our Middle Devonian pillow lavas could indicate an (ultra-)low, non-dipolar, or possibly even absent geomagnetic field at the time of formation.
Fri, 05/02/2025 - 00:00
Summary: The dip angle is one of the fault parameters that most affects fault-related hazard analyses (ground shaking, tsunami). It influences the inference of other fault parameters (e.g., down-dip width, maximum earthquake magnitude from fault scaling relations) and, most importantly, it controls: (a) the fault-to-site distances used in ground-motion estimates based on predictive models (ground motion models); (b) the ground shaking predicted by physics-based simulations; and (c) the vertical component of static surface displacement, which sets the initial conditions for tsunami simulations when the seafloor is displaced. We present the results of a global survey of earthquake-fault dip angles (G-DIP, short for Global Dip) and analyse their empirical distribution for various faulting categories (normal, reverse, and transcurrent crustal faulting, and subduction-interface reverse faulting). These new empirical statistics are derived from an extensive, homogeneous dataset of 597 uniquely determined fault-plane dip angles corresponding to 269 individual earthquakes. As such, our global statistics of fault dip occurrence separated by fault type improve on previous fault dip-angle distributions. We find significant differences between the average empirical dip-angle distributions and the values usually assumed on the basis of Anderson's theory. Dip-slip crustal faults show the same mode at 40-50° for both normal and reverse mechanisms, whereas transcurrent faults show a large spread of values below their mode at 80-90°. For reverse crustal faults, this result became evident only after separating them from subduction-interface faults, which show significantly lower dip values, with a mode at 10-20°. We emphasise the importance of documented, uniquely determined fault planes for developing dip-angle statistics, and we suggest that our results can be used effectively as prior distributions for characterising the geometry of poorly known seismogenic faults in earthquake hazard analyses and earthquake-fault modelling experiments.
Thu, 05/01/2025 - 00:00
Summary: The development of reliable operational earthquake forecasts depends on managing uncertainty and bias in the parameter estimates obtained from models such as the Epidemic-Type Aftershock Sequence (ETAS) model. Given the intrinsic complexity of the ETAS model, this paper is motivated by the questions: "What constitutes a representative sample for fitting the ETAS model?" and "What biases should we be aware of during survey design?". Our primary focus is on improving the ETAS model's performance in the presence of short-term, temporally transient incompleteness, a common phenomenon in early aftershock sequences caused by waveform overlap following significant earthquakes. We introduce a methodological modification to the inversion algorithm of the ETAS model that enables it to operate effectively on incomplete data and to produce accurate estimates of the ETAS parameters. We build on a Bayesian approach known as inlabru, which is based on the Integrated Nested Laplace Approximation (INLA) method. This approach provides posterior distributions of the model parameters instead of point estimates, thereby incorporating uncertainty. Through a series of synthetic experiments, we compare the performance of our modified ETAS model with the original (standard) version when applied to incomplete datasets. We demonstrate that the modified ETAS model effectively retrieves posterior distributions across a wide range of mainshock magnitudes and adapts to various forms of data incompleteness, whereas the original model exhibits bias. To put the scale of this bias into context, we compare it against further biases arising from different scenarios using simulated datasets. We consider: (1) the sensitivity of the modified ETAS model to the time-binning strategy; (2) the impact of including and conditioning on the historic run-in period; (3) the impact of the combination of magnitudes and the trade-off between the two productivity parameters K and α; and (4) the sensitivity to the choice of incompleteness parameters. Finally, we explore the utility of the modified approach on three real earthquake sequences: the 2016 Amatrice earthquake in Italy, the 2017 Kermanshah earthquake in Iran, and the 2019 Ridgecrest earthquake in the US. The outcomes show a significant reduction in bias and a marked improvement in parameter estimation accuracy for the modified ETAS model, substantiating its potential as a robust tool in seismicity analysis.
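For reference, a standard form of the temporal ETAS conditional intensity, involving the productivity parameters K and α mentioned above, is given below; the exact parameterization used with inlabru/INLA may differ in detail.

```latex
% Standard temporal ETAS conditional intensity in a form involving the productivity
% parameters K and \alpha discussed above (implementations may parameterize the
% Omori-Utsu kernel slightly differently):
\begin{equation}
  \lambda(t \mid \mathcal{H}_t)
    = \mu + \sum_{i:\, t_i < t} K \, e^{\alpha (m_i - M_0)}
      \left( \frac{t - t_i}{c} + 1 \right)^{-p},
\end{equation}
% where \mu is the background rate, t_i and m_i are the times and magnitudes of past
% events, M_0 is the magnitude cut-off, and c and p are the Omori-Utsu decay parameters.
```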