Wed, 06/11/2025 - 00:00
SUMMARY: Archaeomagnetic studies provide an effective way to investigate detailed variations of the geomagnetic field during the Holocene. Numerous high-quality archaeomagnetic data have been accumulated in China in recent years, but they are largely limited to the period younger than 1500 BCE, which restricts a comprehensive understanding of Holocene geomagnetic field evolution. Here we carried out archaeomagnetic research on 28 unoriented pottery shards and baked clays collected from two Neolithic archaeological sites in Anhui province, Eastern China, with ages spanning from ∼5350 to 5030 BCE. Rock magnetic analyses show that pseudo-single-domain magnetite is the main magnetic carrier in the samples, with limited contributions from high-coercivity minerals such as hematite. This ideal magnetic behaviour demonstrates that the samples are suitable for palaeointensity experiments, as confirmed by the palaeointensity results. A total of 14 samples yielded high-fidelity palaeointensity results, indicating a rapidly changing geomagnetic field with virtual axial dipole moments varying from ∼56 to ∼91 ZAm² over ∼300 years. This inference is further supported by intensities recovered from the double-remanent components of some samples through vector calculation. Our new data provide reliable anchors for the strength variations of the geomagnetic field in Eastern Asia during the poorly constrained period before 4000 BCE. With the new data in this study, we updated the previous Chinese archaeointensity reference curve and named the new curve ArchInt_China3, which extends the previous curve by a further ∼500 years. The newly reported archaeointensity data and the released reference curve provide valuable insights into regional and global geomagnetic field variations during the Holocene and thus assist in understanding dynamic processes in the Earth's interior.
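The virtual axial dipole moment (VADM) values quoted above follow from a palaeointensity estimate and the site latitude via the standard dipole relation; the minimal Python sketch below illustrates the calculation. The intensity and latitude used here are illustrative placeholders, not values from the study.

import numpy as np

MU0 = 4.0e-7 * np.pi      # vacuum permeability (T m / A)
R_EARTH = 6.371e6         # mean Earth radius (m)

def vadm(palaeointensity_uT, latitude_deg):
    """Virtual axial dipole moment (A m^2) from a palaeointensity (microtesla)
    and the geographic latitude of the site, using the dipole relation
    VADM = 4*pi*a^3 * F / (mu0 * sqrt(1 + 3*sin^2(lat)))."""
    F = palaeointensity_uT * 1e-6                      # convert to tesla
    lam = np.radians(latitude_deg)
    return 4.0 * np.pi * R_EARTH**3 * F / (MU0 * np.sqrt(1.0 + 3.0 * np.sin(lam)**2))

# Placeholder example: a 45 uT palaeointensity at ~31 deg N (roughly Anhui province)
print(f"VADM = {vadm(45.0, 31.0) / 1e21:.1f} ZAm^2")

With these placeholder inputs the result falls near 87 ZAm^2, i.e. within the ∼56-91 ZAm² range reported above, which is simply a consistency illustration rather than a reproduction of the study's data.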
Wed, 06/11/2025 - 00:00
SUMMARY: Interpretation of palaeomagnetic data requires the detection of magnetofossils in sedimentary rocks and an understanding of their influence on magnetic properties. Subsamples collected from IODP site M0061 lost up to 90 per cent of their initial bulk magnetic susceptibility (MS) during 4 months of cold-room storage, which was attributed to the alteration of single-domain magnetosomal greigite (Fe3S4). To test whether the magnetic susceptibility loss affected the anisotropy of MS (AMS), we resampled site M0061 with a Kullenberg piston corer (3 cores), took palaeomagnetic subsamples and undertook time-dependent AMS measurements over 1 year while the subsamples were stored in a controlled cool, humidified environment and exposed to air. Most subsamples possessed an initial normal oblate AMS fabric, as predicted for laminated sediments (horizontal with respect to the bedding plane), but we also detected a negative trend between the degree of anisotropy (Pj) and MS. In accordance with previous observations, MS decreased over 1 year, which we attribute to oxidation of magnetosomal greigite and its conversion into a less magnetic phase (probably FeO(OH)) that does not make a detectable contribution to AMS. These results allowed us to isolate, through application of an AMS tensor subtraction routine, the fabric of the magnetosomal greigite component that had decayed. In subsamples with the largest MS loss over one year, the decayed component had a prolate, inverse AMS fabric (defined as the principal susceptibility axis perpendicular to the bedding plane) but relatively low Pj. We conclude that the initial (in-situ) AMS ellipsoid consisted of a mixture of a typical normal, oblate sedimentary fabric and the prolate, inverse magnetosomal fabric. The mostly inverse nature of the separated fabric indicates that the long axes of the magnetosomal greigite particles (as individual single-domain magnetofossils or chains) must be oriented parallel to the bedding plane, which implies that the magnetosomal greigite was deposited from the water column and contributes to a depositional remanent magnetisation (DRM). Our results indicate that greigite magnetofossils can (i) explain the inverse AMS fabrics that have been reported in similar sedimentary environments and (ii) carry a DRM with a median destructive field (MDF) of approximately 20 mT, although this remanence is transient under ambient laboratory conditions and is prone to oxidation.
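As an illustration of the kind of tensor subtraction routine mentioned above, a minimal sketch follows: the decayed component is taken as the difference between initial and final AMS tensors, and its shape is characterised with Jelinek's corrected anisotropy degree Pj and shape parameter T. The tensors are arbitrary placeholders, and the study's actual subtraction scheme may differ in detail.

import numpy as np

def jelinek_parameters(k_tensor):
    """Corrected anisotropy degree Pj and shape parameter T (Jelinek, 1981)
    from a symmetric 3x3 susceptibility tensor."""
    k1, k2, k3 = np.sort(np.linalg.eigvalsh(k_tensor))[::-1]   # k1 >= k2 >= k3
    n1, n2, n3 = np.log([k1, k2, k3])
    nm = (n1 + n2 + n3) / 3.0
    Pj = np.exp(np.sqrt(2.0 * ((n1 - nm)**2 + (n2 - nm)**2 + (n3 - nm)**2)))
    T = (2.0 * n2 - n1 - n3) / (n1 - n3)                       # +1 oblate, -1 prolate
    return Pj, T

# Placeholder tensors (SI): initial measurement and the same subsample after 1 year
k_initial = np.diag([1.02e-4, 1.00e-4, 0.95e-4])
k_final   = np.diag([0.55e-4, 0.54e-4, 0.53e-4])

# Fabric of the component that decayed during storage
k_decayed = k_initial - k_final
print("decayed component Pj, T:", jelinek_parameters(k_decayed))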
Tue, 06/10/2025 - 00:00
Summary: Estimating quantitative hydrogeological information from geophysical datasets remains a key challenge in hydrogeophysics, driving the development of innovative inversion methodologies. Among these, coupled hydrogeophysical inversion (CHI) is a promising approach that integrates hydrological and geophysical modelling to improve hydrological property estimation from geophysical observations. However, most CHI applications focus on time-lapse geophysical datasets, while applications using single-time geophysical datasets, historically far more common in hydrological studies, remain scarce. Moreover, CHI depends on petrophysical relationships whose accurate calibration is challenging, leading to uncertainties that significantly affect CHI results and must be accounted for. This work proposes Hybrid Bayesian Inversion (HBI) to implement CHI using a single-time geophysical dataset, taking into account coupled hydrological-geophysical modelling constraints and petrophysical relationships, including their calibration uncertainties. HBI is based on solving a hybrid decomposition of the subsurface geophysical properties, expressed as the sum of the groundwater model, directly predicted by coupled geophysical-hydrological modelling, and the background model, which accounts for the residual geophysical properties not predicted by the coupled modelling. The groundwater model is formulated as a stochastic model characterized by probability density functions (PDFs), which enables estimation of the posterior conditional PDFs of hydrological and geophysical properties. In contrast, the background model is characterized only by estimates of its values (e.g. using maximum likelihood estimators), and its PDFs are not determined. The hybrid decomposition in HBI is solved using the Expectation-Maximization (EM) algorithm. This approach divides the iterative solution into sequential steps of Bayesian inversion (E-step) and classic least-squares geophysical inversion (M-step). This formulation allows the computationally intensive Bayesian inversion to focus only on the variables linked to the hydrological conceptualization (the groundwater model), while classic least-squares geophysical inversion solves for the remaining variables (the background model). The HBI methodology was tested using 2D ERT synthetic and experimental data from an unconfined aquifer. In these examples, ERT modelling is integrated with saturated groundwater flow modelling through uncalibrated Archie and CK (electrical conductivity to hydraulic conductivity) petrophysical relationships. Results indicate that even with significant calibration uncertainties in the petrophysical relationships, HBI can recover valuable information on the water table and water conductivity that is not directly derivable from classic least-squares inversion results. Additionally, results derived from the experimental data show that HBI can be an effective method to discriminate between low resistivity caused by fine-grained content and that caused by water-saturated zones.
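For context, a minimal sketch of the kind of petrophysical link used in such a coupling is given below, based on Archie's law for bulk electrical conductivity. The exponents, porosity, and the simple power-law form assumed here for the CK (electrical-to-hydraulic conductivity) relation are illustrative placeholders, not the calibrations used in the paper.

import numpy as np

def archie_conductivity(sigma_w, porosity, saturation, a=1.0, m=2.0, n=2.0):
    """Bulk electrical conductivity (S/m) from Archie's law:
    sigma = (1/a) * sigma_w * porosity**m * saturation**n."""
    return sigma_w * porosity**m * saturation**n / a

def hydraulic_conductivity_from_sigma(sigma_bulk, c=1.0e-3, p=1.5):
    """Placeholder CK relation: a simple power law K = c * sigma**p (m/s).
    The actual form and calibration of a CK relationship are site specific."""
    return c * sigma_bulk**p

# Illustrative values: 0.05 S/m pore water, 30 per cent porosity, fully saturated
sigma = archie_conductivity(sigma_w=0.05, porosity=0.3, saturation=1.0)
print("bulk conductivity:", sigma, "S/m")
print("hydraulic conductivity (placeholder CK law):", hydraulic_conductivity_from_sigma(sigma), "m/s")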
Tue, 06/10/2025 - 00:00
Summary: Recent ground observations from Global Navigation Satellite Systems (GNSS) displacement time series have provided compelling evidence that tectonic motion in many settings is non-steady-state. In some cases, these anomalous transient motions have been identified as potential precursors occurring months, days, or hours before large-magnitude earthquakes. However, effectively detecting these signals in daily geodetic time series at the earliest opportunity remains challenging due to the high levels of high-frequency noise. Currently, there is a lack of established methodologies to reduce this noise in near-real time, thereby hindering our ability to promptly monitor tectonic transient motions. Precursors are typically modeled retrospectively, and the use of geodetic data for seismic hazard surveillance remains limited. To address this limitation, this study demonstrates an approach to model high-frequency noise in daily GNSS displacement time series, so that removal of the modeled noise allows tectonic transients to be identified more clearly. Using Deep Neural Networks (DNNs), we develop a denoising approach that removes noise from GNSS displacement time series on a station-by-station basis. To more effectively train our DNN models, we generate a comprehensive and diverse dataset by combining synthetic trajectories with synthetic noise time series created using Generative Adversarial Networks (GANs). To train the GAN, we use noise time series extracted from ∼5000 GNSS displacement time series distributed globally. Validating our approach with real data confirms its capability to significantly reduce the high-frequency noise that characterizes GNSS time series. The flexibility of the method allows for near-real-time noise removal (with a latency of a few days), opening up the possibility of detecting and modeling small tectonic transients in a timely fashion. By introducing this novel approach, we present exciting opportunities to advance the geodetic surveillance of tectonic motions and usher in a new era of improved monitoring of seismic activity.
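A minimal sketch of how a synthetic GNSS training trajectory of this general kind might be assembled is given below: a linear trend, seasonal terms, a coseismic step, and a logarithmic postseismic transient, to which noise (here simple random numbers standing in for GAN-generated noise) is added to form a noisy/clean training pair. All parameter values and functional choices are illustrative assumptions, not the authors' trajectory model.

import numpy as np

def synthetic_gnss_trajectory(t, rate=5.0, annual=2.0, semiannual=1.0,
                              t_eq=500.0, step=20.0, log_amp=8.0, tau=30.0):
    """Synthetic daily displacement time series (mm); t is time in days.
    Components: linear trend + annual/semiannual seasonal terms +
    coseismic step + logarithmic postseismic transient."""
    y = rate * t / 365.25
    y += annual * np.sin(2 * np.pi * t / 365.25) + semiannual * np.sin(4 * np.pi * t / 365.25)
    after = t >= t_eq
    y = y + after * (step + log_amp * np.log1p((t - t_eq).clip(min=0.0) / tau))
    return y

t = np.arange(0.0, 2000.0)                    # ~5.5 years of daily solutions
clean = synthetic_gnss_trajectory(t)
noise = np.random.default_rng(0).normal(scale=2.0, size=t.size)  # placeholder for GAN noise
noisy = clean + noise                          # training pair: (noisy input, clean target)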
Mon, 06/09/2025 - 00:00
Abstract: Geodetic observations of postseismic deformation due to afterslip and viscoelastic relaxation can be used to infer fault and lithosphere rheologies by combining the observations with mechanical models of postseismic processes. However, estimating the spatial distributions of rheological parameters remains challenging because it requires solving a nonlinear inverse problem with a high-dimensional parameter space and a potentially computationally expensive forward model. Here we introduce an inversion method to estimate spatially varying fault and lithospheric rheological parameters in a mechanical model of postseismic deformation using geodetic time series. The forward model combines afterslip and viscoelastic relaxation governed by a velocity-strengthening frictional rheology and a power-law Burgers rheology, respectively, and incorporates the mechanical coupling between coseismic slip, afterslip, and viscoelastic relaxation. The inversion method estimates spatially varying fault frictional parameters, viscoelastic constitutive parameters, and coseismic stress change. We formulate the inverse problem in a Bayesian framework to quantify the uncertainties of the estimated parameters. To solve this problem at reasonable computational cost, we develop an algorithm to estimate the mean and covariance matrix of the posterior probability distribution based on an ensemble Kalman filter. We validate our method through numerical tests using a two-dimensional forward model and synthetic postseismic GNSS time series. The test results suggest that our method can estimate the spatially varying rheological parameters and their uncertainties reasonably well at tolerable computational cost. Our method can also recover spatially and temporally varying afterslip, viscous strain, and effective viscosities, and can distinguish the contributions of afterslip and viscoelastic relaxation to observed postseismic deformation.
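The core update behind an ensemble-Kalman-based estimator can be sketched in a few lines; the version below is a generic stochastic EnKF analysis step with placeholder dimensions and random inputs, not the authors' specific filter.

import numpy as np

def enkf_update(X, Y, d_obs, obs_cov, rng):
    """Stochastic ensemble Kalman analysis step.
    X: (n_param, n_ens) parameter ensemble; Y: (n_obs, n_ens) predicted data;
    d_obs: (n_obs,) observations; obs_cov: (n_obs, n_obs) observation covariance."""
    n_ens = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    Cxy = Xa @ Ya.T / (n_ens - 1)                                  # cross-covariance
    Cyy = Ya @ Ya.T / (n_ens - 1)                                  # predicted-data covariance
    K = Cxy @ np.linalg.solve(Cyy + obs_cov, np.eye(len(d_obs)))   # Kalman gain
    perturbed = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), obs_cov, size=n_ens).T               # perturbed observations
    return X + K @ (perturbed - Y)

# Placeholder example: 50 parameters, 200 ensemble members, 30 observations
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))
Y = rng.normal(size=(30, 200))
X_post = enkf_update(X, Y, d_obs=rng.normal(size=30), obs_cov=0.1 * np.eye(30), rng=rng)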
Mon, 06/09/2025 - 00:00
Abstract: We present a novel technique for the characterization of small-scale absorption and scattering properties from cross-correlation functions (CCFs) of seismic ambient noise. We use a continuous data set recorded over four years at the Piton de la Fournaise volcano. Attenuation properties are estimated in the frequency range from 0.5 to 4 Hz by comparing energy envelopes from CCFs with those predicted by radiative transfer theory (RTT) and the diffusion approximation. Our technique exploits the different propagation regimes observed at long and short propagation distances, which allows us to quantify attenuation properties in two stages. First, we measure absorption from short propagation distances, including auto-correlation functions (the source-receiver collocated case), to take advantage of the long coda durations. This set of estimates also allows us to observe spatial variations of absorption from either RTT or the diffusion approximation. Once absorption is estimated, we proceed to characterize scattering from long propagation distances, where scattering effects dominate over absorption. Our inversion strategy to characterize scattering is called the 'ball-diff' ratio because we propose to use the ratio of the integrated energies contained in the ballistic and early diffuse regimes. This technique can considerably reduce the effect of the uneven distribution of noise sources. Finally, in order to validate our method, the scattering and absorption properties estimated from CCFs of seismic noise are compared with those derived from earthquake data, for which we used events with magnitudes between 1.5 and 2.5. Good agreement was found between the estimates from these two approaches.
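As an illustration of the 'ball-diff' ratio idea, a minimal sketch follows: the envelope of a cross-correlation function is integrated separately over a ballistic window and the early diffuse window that follows it, and the two energies are divided. The window definitions and the synthetic CCF are placeholders; the study's precise windowing will differ.

import numpy as np
from scipy.signal import hilbert

def ball_diff_ratio(ccf, dt, dist, velocity, ballistic_width, diffuse_length):
    """Ratio of integrated envelope energy in the ballistic window to that in the
    early diffuse window that follows it (placeholder window definitions)."""
    t = np.arange(ccf.size) * dt
    envelope_energy = np.abs(hilbert(ccf))**2
    t_ball = dist / velocity                                   # expected ballistic arrival
    ball = (t >= t_ball - ballistic_width) & (t <= t_ball + ballistic_width)
    diff = (t > t_ball + ballistic_width) & (t <= t_ball + ballistic_width + diffuse_length)
    return np.trapz(envelope_energy[ball], dx=dt) / np.trapz(envelope_energy[diff], dx=dt)

# Placeholder CCF: a ballistic pulse followed by a slowly decaying noisy coda
dt, dist, v = 0.05, 3000.0, 1500.0
t = np.arange(0.0, 60.0, dt)
ccf = np.exp(-0.5 * ((t - dist / v) / 0.3)**2) \
    + 0.2 * np.exp(-t / 20.0) * np.random.default_rng(2).normal(size=t.size)
print("ball-diff ratio:", ball_diff_ratio(ccf, dt, dist, v, ballistic_width=1.0, diffuse_length=10.0))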
Mon, 06/09/2025 - 00:00
Abstract: Determining when and where the next big earthquake will occur is a fundamental challenge in earthquake forecasting. Although it is reasonable to assume that the next major earthquake will occur in regions where stress has been increased by previous events, the most common and reliable earthquake forecasting models assume that the magnitudes of future earthquakes are independent of what happened before and, implicitly, of the stress state. In this study, we investigate the correlation between the stress distribution and the occurrence of large earthquakes using a realistic physical model. Our findings reveal that the next big earthquake is more likely to occur on the periphery of previous large earthquakes, where stress has accumulated but not yet been relaxed. Additionally, we explore how stress redistribution influences the magnitude distribution of aftershocks. These results can inform the introduction of correlations between large earthquakes in existing seismic forecasting models, potentially enhancing their accuracy and reliability.
Mon, 06/09/2025 - 00:00
Abstract: It is shown that the SPOCK equation of state is equivalent to the Variable Polytrope Index equation of state.
Mon, 06/09/2025 - 00:00
Summary: A brief reply to the comment by Ruedas.
Fri, 06/06/2025 - 00:00
Abstract: Slow Slip Events (SSEs) play an important role in the seismic cycle, contributing to the moment budget of active faults. SSEs can be monitored via space geodesy (e.g., the Global Navigation Satellite System, GNSS). One of the major challenges when studying geodetic data is that they record deformation due to many active sources (e.g., tectonic, hydrological, volcanic, and anthropogenic). Here I present a procedure to automatically reconstruct the spatio-temporal history of SSEs in the Cascadia subduction region. The solution is updated daily and made publicly available. These results constitute the basis for future prospective SSE forecasting experiments.
Thu, 06/05/2025 - 00:00
Summary: This study investigates the complex tectonic interactions and crustal deformation within the Weihe Basin and its surrounding regions, encompassing the northeastern Tibetan Plateau, the Ordos Craton, and the Qinling Orogenic Belt. Through a detailed analysis of GNSS data and a refined tectonic model, we explore relative motion patterns and fault activity in the area. Our findings highlight nuanced movement patterns, with a clockwise rotation observed in the western and central parts of the basin contrasting with an anticlockwise rotation in the eastern part. Secondary block motion decreases from west to east, with the western region showing southeastward motion and the eastern region exhibiting a subtle eastward deflection. Faults within the Weihe Basin generally have low slip rates, often below 1 mm/a. Intriguingly, faults in the northern basin predominantly exhibit dextral and extensional movement, while those in the southern region display sinistral and compressive movement. The Weihe fault is identified as a critical boundary between the Ordos block and the Qinling Orogenic Belt. This study offers valuable insights into the tectonic complexities of the Weihe Basin, enhancing our understanding of its kinematic behavior.
Wed, 06/04/2025 - 00:00
Summary: We analyze data from 48 seismic stations located in the western part of the Makran Subduction Zone to gain detailed knowledge of the crustal and uppermost mantle structure in that region. The Makran is a flat subduction zone with a very thick accretionary wedge. It poses a major tsunami hazard in the Indian Ocean but remains one of the world's least studied subduction zones. Its structure and evolution are increasingly becoming subjects of research interest, as they can help to better understand the dynamics of flat subduction zones. Our P- and S-wave receiver function analyses reveal that the Arabian oceanic plate is currently dipping northward beneath the onshore accretionary wedge at a very low angle of 3°. The depth of the oceanic Moho in the coastal region is ∼30 km owing to the presence of ∼22-24 km of sedimentary cover. It increases to ∼60 km beneath the Jazmurian Depression and deepens further to ∼80 km beneath the Bazman and Taftan volcanoes. The change from a relatively flat to a steeper subduction occurs just south of the Qasr-e Qand thrust fault. From the combined results of receiver function stacking and joint inversion of P-wave receiver functions and Rayleigh wave group dispersion data, we infer that the continental Moho varies within a depth range of 40 to 56 km, with the shallowest part beneath the Sistan Suture Zone and the deepest beneath the Taftan volcano. Based on the shear-wave velocity models, the sedimentary cover thickness in the onshore accretionary wedge increases from the Coastal Makran to 34 km in the Inner Makran. The lower-than-normal mantle wedge shear-wave velocities suggest that the mantle wedge may have undergone at least 25 per cent serpentinization. From the velocity models we conclude that the crust of the Jazmurian Depression is more likely of continental origin.
Wed, 06/04/2025 - 00:00
Abstract: A Slow-Slip Event (SSE) is a slow release of tectonic stress along a fault zone over periods ranging from hours to months. SSEs have been recorded in most of the geodetically well-instrumented subduction zones. Although these transient events observed by geodesy are typically excluded from probabilistic seismic hazard analysis (PSHA), they might play a crucial role in the seismic cycle by reducing the seismic slip rate (the slip rate after discounting aseismic processes). This effective reduction implies that incorporating SSEs into PSHA may improve the reliability of hazard assessments. Costa Rica, located at the southern end of the Middle American Trench, hosts large earthquakes as well as SSEs. Shallow and deep SSEs have long been detected at the Nicoya peninsula in northern Costa Rica and, recently, also in the southern part of the country at the Osa peninsula. In this study, we first collect geodetic and SSE observations in Costa Rica. Then, we propose a method to incorporate them into PSHA, based on identifying regions where SSEs occur, inferring slip deficits, and estimating seismic slip rates in each subduction segment. Next, we analyze the implications for PSHA and its epistemic uncertainty, using these seismic slip rates and the resulting seismic moment rate budgets, and determining earthquake rates and maximum magnitudes with different approaches. Finally, we compute a countrywide PSHA following the 2022 Costa Rica Seismic Hazard Model (CRSHM 2022) but modifying the seismic source characterization using geodetic information for the regions where SSEs occur. Compared to the CRSHM 2022, this approach leads to reductions of up to ∼15 per cent in the peak ground acceleration at a return period of 475 years (PGA-475) in the Nicoya peninsula, but also to increases of up to ∼40 per cent in the Central Pacific region and ∼30 per cent in the Osa peninsula. Moreover, we find that, under a geodetic-based approach that disregards SSEs, the PGA-475 would increase by up to ∼10 per cent. Our novel approach underscores the relevance of incorporating geodetic observations, and particularly SSEs, into PSHA, especially in subduction margins near the coast.
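To make the notion of a seismic slip rate concrete, a minimal sketch is given below: the geodetic slip deficit rate is discounted by the fraction released aseismically in SSEs and converted into a seismic moment rate via M0_rate = mu * A * v. All numbers are illustrative placeholders, not values from the Costa Rica model.

MU = 3.0e10               # shear modulus (Pa), a common crustal value

def seismic_moment_rate(slip_deficit_rate_mm_yr, aseismic_fraction, area_km2, mu=MU):
    """Seismic moment rate (N m / yr) for a subduction segment:
    the slip deficit rate is discounted by the fraction released in SSEs,
    then converted with M0_rate = mu * A * v_seismic."""
    v_seismic = slip_deficit_rate_mm_yr * 1e-3 * (1.0 - aseismic_fraction)  # m/yr
    area = area_km2 * 1e6                                                   # m^2
    return mu * area * v_seismic

# Placeholder segment: 40 mm/yr slip deficit, 30 per cent released by SSEs, 5000 km^2
print(f"{seismic_moment_rate(40.0, 0.3, 5000.0):.2e} N m / yr")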
Tue, 06/03/2025 - 00:00
Summary: Infrasonic signals of interest can occur during periods of persistent, coherent background noise, which may be natural or anthropogenic. For transient signals with a high signal-to-noise ratio (SNR), an "overprinting" of the coherent background may occur and the signal may still be detected. However, this approach fails for low-SNR signals of interest, which may be obscured by coherent noise. An infrasound beamforming method based on generalized least squares (GLS) is investigated for detecting transient signals of interest in the presence of coherent and incoherent background noise. This approach relies on an estimate of the noise covariance, captured in a covariance matrix, to effectively null contributions to the array response from noisy directions of arrival. Synthetic array data are used to investigate the performance of the GLS beamformer compared to the Bartlett beamformer when coherent and incoherent backgrounds are present. Additionally, the effects of the number of array elements and the relative strength of the interfering signal on the GLS estimates are investigated. Empirical area-under-the-curve estimates suggest that the GLS beamformer can recover coherent power for a signal of interest lower in amplitude than the coherent background, but this effectiveness degrades more quickly with SNR for a four-element array than for a six- or eight-element infrasound array. Finally, infrasound from the Forensic Surface Experiment, a bolide signal observed at IMS array I37NO, and a volcanic signal recorded at the Alaska Volcano Observatory array ADKI are used to evaluate GLS performance on recorded data. A ten-minute window was used to capture the background noise, and the coherent background signal was nulled in all three examples.
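A minimal sketch of the GLS beam-power calculation, alongside the Bartlett estimate for comparison, is given below. The plane-wave steering vector, the single-frequency treatment, and the noise covariance are placeholder assumptions, not the paper's implementation.

import numpy as np

def steering_vector(coords, slowness, freq):
    """Plane-wave steering vector for array element coordinates (n, 2) in metres,
    a horizontal slowness vector (s/m) and a frequency (Hz)."""
    delays = coords @ slowness                      # relative traveltimes (s)
    return np.exp(-2j * np.pi * freq * delays)

def beam_powers(x, coords, slowness, freq, noise_cov):
    """Bartlett and GLS beam power for narrow-band array data x (complex, n elements).
    GLS signal estimate: s = (a^H C^-1 a)^-1 a^H C^-1 x, with noise covariance C."""
    a = steering_vector(coords, slowness, freq)
    n = len(a)
    bartlett = np.abs(np.vdot(a, x))**2 / n**2
    Cinv_a = np.linalg.solve(noise_cov, a)
    Cinv_x = np.linalg.solve(noise_cov, x)
    s_hat = np.vdot(a, Cinv_x) / np.vdot(a, Cinv_a)
    return bartlett, np.abs(s_hat)**2

# Placeholder 6-element array, 1 Hz signal from the east at 340 m/s, weak white noise
rng = np.random.default_rng(3)
coords = rng.uniform(-500.0, 500.0, size=(6, 2))
true_slow = np.array([1.0 / 340.0, 0.0])
x = steering_vector(coords, true_slow, 1.0) + 0.1 * (rng.normal(size=6) + 1j * rng.normal(size=6))
noise_cov = 0.01 * np.eye(6)
print(beam_powers(x, coords, true_slow, 1.0, noise_cov))

In practice the noise covariance would be estimated from a noise-only window (as in the ten-minute windows mentioned above) rather than assumed diagonal, which is what allows coherent noise directions to be nulled.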
Mon, 06/02/2025 - 00:00
Summary: Seismic data acquisition can be implemented innovatively both at the surface and within underground infrastructure to illuminate subsurface targets. In the seismic data processing and imaging phases, prior subsurface information, such as approximate interface dip angles, can enhance reflection imaging in a target-oriented manner. We leverage a unique field dataset from an unconventional seismic acquisition setup to image a volcanogenic massive sulphide (VMS) deposit at the Neves-Corvo mining site in southern Portugal. The setup involved seismic sources positioned in a tunnel at a depth of approximately 650 m, from which the wavefields were recorded by surface receivers deployed along a 2D line directly above the tunnel. The data were marred by strong noise and a limited acquisition aperture due to the tunnel length, resulting in significant smearing artifacts in images generated with conventional migration techniques, which impeded a detailed delineation of the deposit. By utilizing directional information from illumination vectors, derived from the gradients of the source-side and receiver-side traveltime fields, we implemented a controlled-illumination strategy within the Kirchhoff prestack depth migration workflow. This approach resulted in enhanced imaging of the targeted Lombador VMS deposit. The improved image revealed a subtle discontinuity in the Lombador reflector, indicating a possible fault, consistent with faulting known in the area. The reflection imaging results highlight the advantages of employing underground infrastructure, such as tunnels, for seismic applications in support of detailed in-mine exploration and drilling programs for resource estimation.
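One possible form of illumination-vector weighting, sketched under stated assumptions rather than as the paper's workflow, is shown below: the source-side and receiver-side traveltime gradients are summed at an image point, the dip implied by their sum is compared with a prior dip estimate, and a cosine taper suppresses contributions that depart from it. The taper form, prior dip, and slowness values are placeholders.

import numpy as np

def illumination_weight(grad_t_src, grad_t_rec, prior_dip_deg, taper_deg=15.0):
    """Weight for one Kirchhoff migration contribution at an image point.
    grad_t_src, grad_t_rec: 2D traveltime gradients (slowness vectors, s/m).
    Their sum (the illumination vector) approximates the normal of the
    reflector being illuminated; its angle from the vertical is taken as the
    implied reflector dip, and contributions whose implied dip differs from
    the prior dip by more than taper_deg are smoothly suppressed."""
    illum = grad_t_src + grad_t_rec
    implied_dip = np.degrees(np.arctan2(illum[0], illum[1]))   # angle from vertical (deg)
    misfit = abs(implied_dip - prior_dip_deg)
    if misfit >= taper_deg:
        return 0.0
    return 0.5 * (1.0 + np.cos(np.pi * misfit / taper_deg))    # cosine taper in [0, 1]

# Placeholder slowness vectors (s/m) at one image point, prior dip of 30 degrees
w = illumination_weight(np.array([2.0e-4, 3.0e-4]), np.array([1.0e-4, 3.5e-4]), prior_dip_deg=30.0)
print("illumination weight:", w)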
Mon, 06/02/2025 - 00:00
Summary: Interpolating scatter data obtained from discrete observations is essential for the continuous representation of subsurface media. Traditional interpolation algorithms typically rely on weighting the relationships between interpolation points and nearby known points, which makes it increasingly difficult to incorporate multi-source data and prior constraints as the amount of information grows. This study explores the use of deep neural networks to replace traditional interpolants, constructing a deep learning-based scatter interpolation workflow that integrates prior information through isotropic or anisotropic smoothness loss functions, thereby addressing the limitations of traditional methods. To enhance the capability of the deep neural network for sparse scatter interpolation, we synthesized a large number of scatter-velocity model pairs to pre-train it using supervised learning. The pre-trained network is further adapted to specific interpolation tasks by physics-guided unsupervised fine-tuning to achieve stable interpolation results. Owing to the flexibility of incorporating multi-source information through the input or the supervised loss and of imposing geophysical-law constraints through the unsupervised loss, our deep-learning-based interpolation can easily be extended to geophysical inversion problems that jointly fit both data and geophysical laws. Our experiments validate the effectiveness of this workflow and demonstrate its potential for multi-information-constrained geophysical scatter interpolation, which forms the basis for multi-information inversion. This work not only advances machine learning algorithms for geophysical scatter interpolation but also provides valuable insights for deep learning geophysical inversion involving multiple data sources and physical laws.
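For illustration, a minimal sketch of the kind of isotropic or anisotropic smoothness penalty described above is given below, written on a gridded model; the weighting scheme is a placeholder, and the network and other loss terms are omitted.

import numpy as np

def smoothness_loss(model, wx=1.0, wz=1.0):
    """Smoothness penalty on a 2D gridded model (nz, nx): sum of squared
    first differences along each axis. Equal weights give an isotropic
    constraint; unequal wx, wz give a directionally (anisotropically)
    weighted constraint, e.g. favouring laterally smooth structure."""
    dz = np.diff(model, axis=0)
    dx = np.diff(model, axis=1)
    return wz * np.sum(dz**2) + wx * np.sum(dx**2)

# Placeholder gridded velocity model with a sharp lateral contrast
model = np.ones((50, 100)) * 2000.0
model[:, 50:] = 2500.0
print("isotropic penalty:  ", smoothness_loss(model))
print("anisotropic penalty:", smoothness_loss(model, wx=5.0, wz=1.0))   # penalise lateral roughness more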
Fri, 05/30/2025 - 00:00
Summary: We revisit the budget of 20th century true polar wander (∼1°/Myr in the direction of 70°W) using a state-of-the-art adjoint-based reconstruction of mantle convective flow and predictions of ongoing glacial isostatic adjustment that adopt two independent models of Pleistocene ice history. Both calculations are based on a mantle viscosity profile that simultaneously fits a suite of data sets related to glacial isostatic adjustment (the Fennoscandian relaxation spectrum, post-glacial decay times) and a set of present-day observations associated with mantle convection (long-wavelength gravity anomalies, plate motions, excess ellipticity of the core-mantle boundary). Our predictions reconcile both the magnitude and direction of the observed true polar wander rate, with convection and glacial isostatic adjustment contributing signals that are 25-30 per cent and ∼75 per cent of the observed rate, respectively. The former assumes that large-scale seismic velocity heterogeneities are purely thermal in origin, and we argue that our estimate of the convection signal likely represents an upper bound due to the neglect of hypothesized compositional variations within the large low shear velocity provinces in the deep mantle.
Wed, 05/28/2025 - 00:00
Abstract: The earthquake size distribution is well described by the Gutenberg-Richter law, controlled by the b-value parameter. In recent decades, despite the simplicity of this relationship, a great variety of methods for estimating the b-value have been proposed by the scientific community. All these methods reflect the different views of individual modelers and, therefore, often generate inconsistent results. In this study, we perform a seismological experiment in which we compare different, commonly adopted methodologies for estimating the completeness magnitude (Mc) and the b-value for seismicity in Central Italy. The inter-method differences are on average equal to 0.4 and 0.3 for Mc and b, respectively, but reach much larger values, especially during more intense seismic activity. This shows that epistemic uncertainty in the b-value plays a more crucial role than intra-method uncertainties, opening new perspectives on the interpretation of discrepant single studies.
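For reference, one of the commonly adopted estimators compared in studies of this kind is the Aki-Utsu maximum-likelihood b-value, combined here with a maximum-curvature estimate of the completeness magnitude; the sketch below uses a synthetic catalogue and a placeholder 0.1 magnitude binning, not the Central Italy data.

import numpy as np

def mc_maxcurvature(mags, bin_width=0.1):
    """Completeness magnitude from the maximum-curvature method:
    the lower edge of the magnitude bin with the highest event count."""
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    return edges[np.argmax(counts)]

def b_value_aki_utsu(mags, mc, bin_width=0.1):
    """Aki-Utsu maximum-likelihood b-value for events with M >= Mc,
    with the standard half-bin correction for binned magnitudes."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - bin_width / 2.0))

# Synthetic Gutenberg-Richter catalogue with b = 1 (placeholder)
rng = np.random.default_rng(4)
mags = np.round(2.0 + rng.exponential(scale=1.0 / (1.0 * np.log(10.0)), size=20000), 1)
mc = mc_maxcurvature(mags)
print("Mc =", mc, " b =", round(b_value_aki_utsu(mags, mc), 2))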
Fri, 05/23/2025 - 00:00
Summary: Geological Carbon Storage (GCS) is one of the most viable net-negative CO2-emission technologies for mitigating climate change through large-scale CO2 sequestration. However, subsurface complexities and reservoir heterogeneity demand a systematic approach to uncertainty quantification to ensure both containment and conformance, as well as to optimize operations. As a step toward a Digital Twin for monitoring and control of underground storage, we introduce a new machine-learning-based data-assimilation framework validated on realistic numerical simulations. The proposed Digital Shadow combines Simulation-Based Inference (SBI) with a novel neural adaptation of a recently developed nonlinear ensemble filtering technique. To characterize the posterior distribution of CO2 plume states (saturation and pressure) conditioned on multimodal time-lapse data, consisting of imaged surface seismic and well-log data, a generic recursive scheme is employed in which neural networks are trained on simulated ensembles of the time-advanced state and observations. Once trained, the Digital Shadow infers the state as time-lapse field data become available. Unlike ensemble Kalman filtering, corrections to predicted states are computed via a learned nonlinear prior-to-posterior mapping that supports non-Gaussian statistics and nonlinear models for the dynamics and observations. Training and inference are facilitated by the combined use of conditional invertible neural networks and bespoke physics-based summary statistics. Starting with a probabilistic permeability model derived from a baseline seismic survey, the Digital Shadow is validated against unseen simulated ground-truth time-lapse data. Results show that injection-site-specific uncertainty in permeability can be incorporated into the state uncertainty, and that the highest reconstruction quality is achieved when conditioning on both seismic and wellbore data. Despite incomplete knowledge of the permeability, the Digital Shadow accurately tracks the subsurface state throughout a realistic CO2 injection project. This work establishes the first proof-of-concept for an uncertainty-aware, scalable Digital Shadow, laying the foundation for a Digital Twin to optimize underground storage operations.