Motivated by the development of deep convolutional neural networks (DCNNs), aircraft detection has made tremendous progress. State-of-the-art DCNN-based detectors mainly follow top-down approaches, which enumerate massive potential aircraft locations in the form of rectangular regions and then classify whether each region contains an object. Compared with these top-down detectors, this article shows that a bottom-up method can achieve better aircraft detection performance in the era of deep learning. We propose a novel bottom-up detector named X-LineNet, which formulates aircraft detection as the prediction and clustering of paired intersecting line segments inside each target. Aircraft detection thus becomes a purely appearance-based line-segment estimation problem, without any rectangular-region classification or implicit feature learning. With simple postprocessing, X-LineNet can simultaneously provide multiple representations of the detection result: the horizontal bounding box, the oriented bounding box, and the pentagonal mask. The pentagonal mask is a more accurate representation of an aircraft, with less redundancy than a rectangular box. Experiments show that X-LineNet outperforms prevalent top-down, region-based detectors for aircraft detection on the UCAS-AOD, NWPU VHR-10, and DIOR public data sets.
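A minimal sketch of the geometric idea behind such postprocessing, not the authors' implementation: given one pair of intersecting line segments (assumed here to be already grouped), a horizontal bounding box can be read directly off the endpoints.

```python
# Hypothetical sketch: turn one grouped pair of intersecting line segments into a
# horizontal bounding box (oriented boxes and pentagonal masks are omitted here).
import numpy as np

def segments_intersect(p1, p2, q1, q2):
    """True if open segment p1-p2 crosses open segment q1-q2 (2-D)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    return (cross(q1, q2, p1) * cross(q1, q2, p2) < 0 and
            cross(p1, p2, q1) * cross(p1, p2, q2) < 0)

def hbb_from_pair(seg_a, seg_b):
    """Horizontal bounding box (xmin, ymin, xmax, ymax) covering both segments."""
    pts = np.vstack([seg_a, seg_b])
    return (*pts.min(axis=0), *pts.max(axis=0))

# toy example: a fuselage-like and a wing-like segment of one aircraft
fuselage = np.array([[10.0, 5.0], [10.0, 45.0]])
wings = np.array([[0.0, 25.0], [20.0, 25.0]])
if segments_intersect(*fuselage, *wings):
    print(hbb_from_pair(fuselage, wings))   # box covering the paired segments
```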
Airborne light detection and ranging (LiDAR) data are widely applied in building reconstruction, with studies reporting success for typical buildings. However, the reconstruction of curved buildings remains an open research problem. To this end, we propose a new framework for curved building reconstruction via assembling and deforming geometric primitives. The input LiDAR point clouds are first converted into contours, where individual buildings are identified. After recognizing geometric units (primitives) from the building contours, we obtain initial models by matching basic geometric primitives to them. To polish the assembled models, we employ a warping field for model refinement. Specifically, an embedded deformation (ED) graph is constructed by downsampling the initial model. Then, the point-to-model displacements are minimized by adjusting the node parameters of the ED graph according to our objective function. The presented framework is validated on several highly curved buildings collected by various LiDAR systems in different cities. The experimental results, as well as an accuracy comparison, demonstrate the advantage and effectiveness of our method. The new insight lies in the efficient primitive-based reconstruction manner. Moreover, we show that the primitive-based framework significantly reduces data storage, to 10%–20% of that of classical mesh models.
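One standard ingredient of embedded deformation, sketched here under the assumption of voxel-based node sampling and k-nearest-node blending weights; the paper's refinement objective itself is not reproduced.

```python
# Hypothetical sketch: ED-graph construction by voxel downsampling of the model's
# vertices plus per-vertex blending weights to the k nearest graph nodes.
import numpy as np
from scipy.spatial import cKDTree

def build_ed_graph(vertices, voxel=1.0, k=4):
    keys = np.floor(vertices / voxel).astype(int)
    _, first = np.unique(keys, axis=0, return_index=True)
    nodes = vertices[np.sort(first)]                  # one node per occupied voxel
    dist, nn = cKDTree(nodes).query(vertices, k=k)    # k nearest nodes per vertex
    w = np.maximum(1.0 - dist / (dist[:, -1:] + 1e-9), 0.0) ** 2
    w /= w.sum(axis=1, keepdims=True)                 # normalized blend weights
    return nodes, nn, w

verts = np.random.default_rng(0).uniform(0, 10, (2000, 3))
nodes, nn, w = build_ed_graph(verts, voxel=2.0, k=4)
print(nodes.shape, nn.shape, w.shape)
```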
The 3-D information collected from sample plots is significant for forest inventories. Terrestrial laser scanning (TLS) has been demonstrated to be an effective technique for data acquisition in forest plots. Although TLS can achieve precise measurements, multiple scans are usually necessary to collect more detailed data, which generally requires more time for scan preparation and field data acquisition. In contrast, mobile laser scanning (MLS) is being increasingly utilized in mapping due to its mobility. However, the geometrical peculiarity of forests introduces challenges. In this article, a prototype backpack-based MLS system, i.e., backpack laser scanning (BLS), is designed for forest plot mapping without a global navigation satellite system/inertial measurement unit (GNSS-IMU). To achieve accurate matching, this article proposes to combine line and point features for calculating the transformation, in which the line features are derived from trunk skeletons. Then, a scan-to-map matching strategy is proposed for correcting positional drift. Finally, this article evaluates the effectiveness and the mapping accuracy of the proposed method in forest sample plots. The experimental results indicate that the proposed method achieves accurate forest plot mapping using the BLS; meanwhile, compared with existing methods, the proposed method utilizes the geometric attributes of the trees and reaches a lower mapping error, with mean errors and root mean square errors in the horizontal/vertical directions of less than 3 cm in the plots.
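A minimal sketch of the point-feature part of such a registration, assuming matched trunk centers and the standard Kabsch/SVD solution; the line features derived from trunk skeletons are not modeled here.

```python
# Hypothetical sketch: estimate a rigid transform from matched trunk-center points
# with the Kabsch/SVD method (only the point-feature part of the matching).
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# toy matched trunk centers from two scans
src = np.array([[0., 0., 0.], [2., 0., 0.], [0., 3., 0.], [1., 1., 4.]])
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 0.1])
R, t = rigid_transform_3d(src, dst)
print(np.allclose(R, R_true))          # True
```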
Classification of airborne laser scanning (ALS) point clouds is needed for digital cities and 3-D modeling. To efficiently recognize objects in ALS point clouds, we propose a novel hierarchical aggregated deep feature representation method, which fully exploits the spatial association of multilevel structures and the discriminative power of deep features. In our method, a 3-D deep learning model is constructed to represent the discriminative feature of each point cluster in a hierarchical structure by decreasing the within-class distance and increasing the between-class distance. Our method aggregates the discriminative deep features at different levels into a hierarchical aggregated deep feature that considers both the spatial hierarchy and feature distinctiveness. Finally, we build a multichannel 1-D convolutional neural network to classify the unknown points. Our tests demonstrate that the proposed hierarchical aggregated deep feature method can enhance point cloud classification results. Comparisons with seven state-of-the-art methods also verify the superior performance of our method.
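A minimal sketch of a multichannel 1-D classification network of this kind, assuming PyTorch and made-up channel/feature sizes (one input channel per hierarchy level); it is not the configuration used in the paper.

```python
# Hypothetical sketch: a multichannel 1-D CNN that classifies a point from its
# hierarchical aggregated deep feature, one input channel per hierarchy level.
import torch
import torch.nn as nn

class MultiChannel1DCNN(nn.Module):
    def __init__(self, n_levels=3, n_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_levels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):              # x: (batch, n_levels, feature_length)
        return self.net(x)

logits = MultiChannel1DCNN()(torch.randn(4, 3, 128))
print(logits.shape)                    # torch.Size([4, 8])
```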
In order to improve the accuracy of the International Reference Ionosphere (IRI)-2016 model for application in the Antarctic region, total electron content (TEC) data derived from Global Navigation Satellite System (GNSS) observations in 2018 are assimilated into the IRI-2016 model by updating the effective ionospheric parameter, the IG12 index, on a daily basis. The functional relationship between the IG12 index and the longitude, latitude, and day of year (DOY) is fitted using a spherical cap harmonic function and a polynomial, finally establishing an updated empirical model of the IG12 index over the Antarctic region. Two conclusions are reached: 1) the updated IG12 index varies greatly across geographical locations and 2) the accuracy of the IRI-2016 model is worse during the perpetual night than during the perpetual day. In order to verify our method, the TEC calculated by the IRI-2016 model driven by the updated IG12 index and that calculated by the original IRI-2016 model are compared with the GNSS-TEC. The results show that the updated IRI-2016 model improves the bias and root mean square (RMS) error of the TEC calculation by 97% and 87%, respectively, at the fitting moments, and by 75% and 54% at the predicting moments. In addition, compared with the original IRI-2016 model, the updated IRI-2016 model improves the accuracy of the NmF2 calculation by approximately 23% on average for the fitting time and 8% for the predicting time.
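A minimal sketch of the fitting step using a plain polynomial-plus-annual-harmonic basis and synthetic data; the paper's spherical cap harmonic basis is not reproduced.

```python
# Hypothetical sketch: least-squares fit of an effective IG12 index as a function
# of longitude, latitude, and day of year (DOY), on synthetic Antarctic data.
import numpy as np

def design_matrix(lon, lat, doy):
    """Low-order polynomial in lon/lat plus an annual harmonic in DOY."""
    w = 2 * np.pi * doy / 365.25
    return np.column_stack([np.ones_like(lon), lon, lat, lon * lat,
                            lat**2, np.sin(w), np.cos(w)])

rng = np.random.default_rng(0)
lon = rng.uniform(-180, 180, 500)
lat = rng.uniform(-90, -60, 500)      # Antarctic latitudes
doy = rng.integers(1, 366, 500)
ig12 = 20 - 0.1 * lat + 5 * np.sin(2 * np.pi * doy / 365.25) \
       + rng.normal(0, 0.5, 500)      # synthetic "updated IG12" values

A = design_matrix(lon, lat, doy)
coef, *_ = np.linalg.lstsq(A, ig12, rcond=None)
print(np.round(coef, 2))              # fitted coefficients
```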
Seismic inversion problems often involve strongly nonlinear relationships between model and data, so their misfit functions usually have many local minima. Global optimization methods are well known for being able to find the global minimum without requiring an accurate initial model. However, when the dimensionality of the model space becomes large, global optimization methods converge slowly, which seriously hinders their application to large-dimensional seismic inversion problems. In this article, we propose a new method for large-dimensional seismic inversion based on global optimization and a machine learning technique called the autoencoder. Benefiting from the dimensionality-reduction capability of the autoencoder, the proposed method converts the original large-dimensional seismic inversion problem into a low-dimensional one that can be solved effectively and efficiently by global optimization. We apply the proposed method to seismic impedance inversion problems to test its performance. We use a trace-by-trace inversion strategy, and regularization is used to guarantee the lateral continuity of the inverted model. Well-log data with accurate velocity and density are a prerequisite for the inversion strategy to work effectively. Numerical results of both synthetic and field data examples clearly demonstrate that the proposed method converges faster and yields better inversion results than common methods.
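A minimal sketch of the latent-space search, with toy linear stand-ins for both the decoder and the forward operator and SciPy's differential evolution as the global optimizer; none of these components are the ones used in the paper.

```python
# Hypothetical sketch: search in a low-dimensional latent space with a global
# optimizer, letting a decoder map each latent vector back to the full model
# before evaluating the data misfit.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
n_model, n_latent = 200, 4
decoder_basis = rng.standard_normal((n_model, n_latent))   # stand-in decoder
forward_op = rng.standard_normal((60, n_model))            # stand-in physics

def decode(z):
    return decoder_basis @ z                                # latent -> model

z_true = np.array([1.0, -0.5, 0.3, 2.0])
d_obs = forward_op @ decode(z_true)

def misfit(z):
    return np.sum((forward_op @ decode(z) - d_obs) ** 2)

result = differential_evolution(misfit, bounds=[(-3, 3)] * n_latent, seed=1)
print(result.x.round(2), result.fun)    # estimate near z_true, misfit near zero
```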
With the dramatic growth and complexity of seismic data, manual seismic facies analysis has become a significant challenge. Machine learning and deep learning (DL) models have been widely adopted to assist geophysical interpretation in recent years. Although acceptable results can be obtained, the uninterpretable nature of DL (which has earned it the nickname “alchemy”) does not improve the geological or geophysical understanding of the relationships between the observations and the underlying science. This article proposes a novel interpretable DL model based on 3-D (spatial–spectral) attention maps of seismic facies features. Besides regular data-augmentation techniques, a high-resolution spectral analysis technique is employed to generate multispectral seismic inputs. We propose a trainable soft attention mechanism-based deep dilated convolutional neural network (ADDCNN) to improve automatic seismic facies analysis. Furthermore, the dilated convolution operation in the ADDCNN generates accurate and high-resolution results efficiently. With the attention mechanism, not only is the facies-segmentation accuracy improved, but the subtle relations between geological depositions and seismic spectral responses are also revealed by the spatial–spectral attention maps. Experiments show that all major metrics, such as classification accuracy, computational efficiency, and optimization performance, are improved while the model complexity is reduced.
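A minimal sketch, assuming PyTorch and toy sizes, of a dilated convolution block gated by a trainable soft attention map; it illustrates the kind of mechanism described rather than the exact ADDCNN architecture.

```python
# Hypothetical sketch: a dilated convolution block with a trainable soft attention
# map applied to a multispectral input cube; the map itself is returned so it can
# be inspected for interpretation.
import torch
import torch.nn as nn

class DilatedAttentionBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(in_ch, 1, kernel_size=1), nn.Sigmoid())
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2)

    def forward(self, x):                       # x: (batch, spectral_channels, H, W)
        a = self.attn(x)                        # soft attention map in [0, 1]
        return torch.relu(self.conv(x * a)), a  # features and the attention map

feats, attn_map = DilatedAttentionBlock(8, 16)(torch.randn(2, 8, 64, 64))
print(feats.shape, attn_map.shape)              # (2, 16, 64, 64) and (2, 1, 64, 64)
```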
Seismic data are usually contaminated by various types of noise. Noise suppression plays an important role in seismic processing. In this article, we propose a new denoising method based on nonlocal weighted robust principal component analysis (RPCA). First, the seismic data are divided into many patches, which are grouped based on nonlocal similarity. Then, for each group, we establish a similar-block matrix and set up the objective function of the RPCA. Next, we introduce the iterative log-thresholding algorithm into the augmented Lagrangian method to solve the problem. Furthermore, varying weights are assigned to different singular values when minimizing the objective function. Finally, aggregating all recovered matrices yields the denoised seismic data. The proposed method considers nonlocal similarity and adaptively sets weights using the local noise variance. It also performs well owing to the superiority of the iterative log-thresholding method. The presented method is assessed using a synthetic seismic section with several crossover events. We also apply the approach to real seismic data, with good results. Comparison with other approaches reveals the effectiveness of the proposed approach.
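A minimal sketch of a single weighted singular-value shrinkage step on one patch-group matrix, with weights inversely proportional to the singular values to mimic the effect of log-thresholding; the nonlocal grouping and augmented Lagrangian iterations are omitted.

```python
# Hypothetical sketch: one weighted singular-value shrinkage step on a patch-group
# matrix; larger singular values receive smaller weights (reweighting toward low rank).
import numpy as np

def weighted_svt(M, tau, eps=1e-6):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = 1.0 / (s + eps)                       # protect strong (signal) components
    return U @ np.diag(np.maximum(s - tau * w, 0.0)) @ Vt

rng = np.random.default_rng(2)
clean = np.outer(rng.standard_normal(32), rng.standard_normal(64))   # rank-1 "signal"
noisy = clean + 0.3 * rng.standard_normal((32, 64))
denoised = weighted_svt(noisy, tau=2.0)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))  # error shrinks
```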
Noise attenuation is a very important step in seismic data processing, as it facilitates accurate geologic interpretation. Random noise is one of the main factors that reduce the signal-to-noise ratio (SNR) of seismic data. It is necessary to develop new noise attenuation technologies for seismic data containing complex geological structures. In this article, we are concerned with a new variational regularization method for random noise attenuation of seismic data. Considering that seismic reflection events often have spatially varying directions, we first employ the gradient structure tensor (GST) to estimate the spatially varying dips point by point and propose a structure-oriented extension of the directional total generalized variation (DTGV), the SODTGV functional. Then, we employ the SODTGV as a regularizer to establish an $\ell_{2}$-SODTGV model and develop a primal-dual algorithm for solving this model. Next, the choice of the model parameters is discussed. Finally, the proposed model is applied to restore noisy synthetic and field data to verify the effectiveness of the proposed workflow. As contrastive methods, we select structure adaptive median filtering (SAMF), anisotropic total variation (ATV), total generalized variation (TGV), DTGV, median filtering, the KL transform, the SVD transform, and the curvelet transform. The synthetic and real seismic data examples indicate that the proposed method can improve the vertical resolution of seismic profiles, enhance the lateral continuity of reflection events, and preserve local geologic features while improving the SNR. Moreover, the proposed regularization method can also be applied to other inverse problems, such as image processing, medical imaging, and remote sensing.
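A minimal sketch of the GST dip-estimation step on a synthetic dipping section; the SODTGV regularizer and the primal-dual solver are not reproduced.

```python
# Hypothetical sketch: local dip estimation from the smoothed gradient structure
# tensor on a synthetic section with 15-degree dipping events.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def gst_dip(section, sigma=2.0):
    """Local dip angle (radians from horizontal) via the gradient structure tensor."""
    gx = sobel(section, axis=1)                   # lateral derivative
    gz = sobel(section, axis=0)                   # vertical (time/depth) derivative
    J11 = gaussian_filter(gx * gx, sigma)
    J22 = gaussian_filter(gz * gz, sigma)
    J12 = gaussian_filter(gx * gz, sigma)
    theta = 0.5 * np.arctan2(2 * J12, J11 - J22)  # gradient (event-normal) direction
    return theta - np.pi / 2                      # event orientation = normal - 90 deg

z, x = np.mgrid[0:128, 0:128]
section = np.sin(0.3 * (z + np.tan(np.deg2rad(15.0)) * x))
print(np.rad2deg(np.median(np.abs(gst_dip(section)))))   # close to 15
```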
Many popular deconvolution methods based on Robinson's convolutional model have played an important role in improving the temporal resolution of seismic data. However, the results of applying these deconvolution methods to real land seismic data are not always desirable because of the effect of noise in the deconvolution process. Although the noise in the seismogram can be minimized during recording, the effect of residual noise on the deconvolution operators can still degrade the deconvolution output. To address the shortcomings of conventional deconvolution methods, we developed a new deconvolution method based on a multichannel statistical principle. In the proposed method, we extend the surface-consistent convolutional model with a noise component, so that the effect of noise on the deconvolution operators is accounted for in the deconvolution process. According to the proposed multichannel statistical strategy, we first calculate the autocorrelation of the seismogram, in which the lateral variation of the wavelet caused by inhomogeneities in the vicinity of sources and receivers is considered. Then, we adopt a local fitting technique to approximate the autocorrelation of the seismic wavelet. To obtain seismic data with a broad bandwidth and a low noise level, we use the integral-Ricker wavelet as the desired output wavelet. Tests on synthetic data and real land seismic data demonstrate the effectiveness of the proposed method in increasing the resolution of seismic signals.
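A minimal sketch of conventional multichannel spiking deconvolution built from an averaged autocorrelation; the surface-consistent noise term, the local fitting step, and the integral-Ricker shaping of the proposed method are not reproduced.

```python
# Hypothetical sketch: design one deconvolution operator from the mean
# autocorrelation of all traces (symmetric Toeplitz normal-equation solve).
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import fftconvolve

def spiking_decon(traces, n_op=40, prewhite=0.01):
    n = traces.shape[1]
    acs = [np.correlate(tr, tr, mode="full")[n - 1:n - 1 + n_op] for tr in traces]
    r = np.mean(acs, axis=0)
    r[0] *= 1.0 + prewhite                       # prewhitening for stability
    rhs = np.zeros(n_op)
    rhs[0] = 1.0                                 # desired output: a spike at zero lag
    op = solve_toeplitz(r, rhs)
    return np.array([fftconvolve(tr, op, mode="same") for tr in traces])

rng = np.random.default_rng(3)
t = np.linspace(-0.05, 0.05, 21)
wavelet = (1 - 2 * (np.pi * 30 * t) ** 2) * np.exp(-(np.pi * 30 * t) ** 2)  # 30-Hz Ricker
refl = rng.standard_normal((8, 400)) * (rng.random((8, 400)) > 0.95)        # sparse reflectivity
traces = np.array([np.convolve(r_, wavelet, mode="same") for r_ in refl])
print(spiking_decon(traces).shape)               # (8, 400)
```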
Reconstructing an accurate and high-resolution subsurface model is attractive in the fields of both geology and seismology. However, due to the band-limited characteristics of seismic data, the inversion depends greatly on the reliability of the initial model. An acceptable initial model lays a good foundation for seismic inversion. In this article, we introduce a well-log interpolation method with the local slope as a constraint for building a high-fidelity starting model for prestack amplitude versus offset/angle (AVO/AVA) inversion. First, we briefly review the basic theory of general seismic inversion. Then, instead of using the conventional preconditioned least-squares method, we introduce shaping regularization into geological structure-guided well-log interpolation to accelerate convergence. We use the plane-wave destruction (PWD) algorithm to extract the slope attribute from seismic data, images, or velocity models. The slope is then used as the constraint in solving the inverse problem with the shaping regularization method. Numerical examples demonstrate that the proposed initial model building method performs better than conventional ones and greatly improves the accuracy of the inversion results. Furthermore, we apply the proposed model building method to AVO/AVA inversion and reservoir parameter estimation on several field data sets for the first time, with encouraging performance.
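A minimal sketch of a shaping-regularized iteration in its generic form, using plain 1-D smoothing as the shaping operator and a random stand-in forward operator; the paper's PWD-derived structural slopes would replace the plain smoothing here.

```python
# Hypothetical sketch: alternate a gradient update on the data misfit with a
# smoothing (shaping) operator S to obtain a smooth model consistent with the data.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(4)
n = 200
F = rng.standard_normal((50, n)) / np.sqrt(n)          # stand-in forward operator
m_true = gaussian_filter1d(rng.standard_normal(n), 8)  # smooth "true" model
d = F @ m_true

m, step = np.zeros(n), 0.5
for _ in range(200):
    m = gaussian_filter1d(m + step * F.T @ (d - F @ m), sigma=3)  # shaping step
print(np.corrcoef(m, m_true)[0, 1])                    # correlation with the true model
```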
In this study, two Collection 6 (C6) Moderate Resolution Imaging Spectroradiometer (MODIS) level-2 land surface temperature (LST) products (MYD11_L2 and MYD21_L2) from the Aqua satellite were evaluated using temperature-based (T-based) and radiance-based (R-based) validation methods over barren surfaces in Northwestern China. The ground measurements collected at four barren surface sites from June 2012 to September 2018 during the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) experiment were used to perform the T-based evaluation. Ten sand dune sites were selected in six large deserts in Northwestern China to carry out the R-based validation from 2012 to 2018. The T-based validation results indicate that the C6 MYD21 LST product has better accuracy than the C6 MYD11 product during both daytime and nighttime. The LST is underestimated by the C6 MYD11 products at the four T-based sites during the daytime, with a mean bias of -2.82 K and a mean RMSE of 3.82 K, whereas the MYD21 LST product has a mean bias and RMSE of -0.51 and 2.53 K, respectively. The LST is also underestimated at night by the C6 MYD11 products at the four T-based sites, with a mean bias of -1.40 K and a mean RMSE of 1.72 K, whereas the MYD21 LST product has a mean bias and RMSE of 0.23 and 1.01 K, respectively. For the R-based validation, the MYD11 results are associated with large negative biases during both daytime and nighttime at three sand dune sites and biases within 1 K at the other seven sites, whereas the MYD21 results are more consistent at all ten sand dune sites, with mean biases of 0.45 and 0.70 K for daytime and nighttime, respectively. The emissivities of these two products in MODIS bands 31 and 32 were compared with each other and then with the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) emissivity and laboratory emissivity. The results indicate that the emissivities in MODIS bands 31 and 32 of MYD11 at the four T-based and three of the R-based validation sites are overestimated and result in LST underestimation, whereas the emissivities of MYD21 are more consistent with the laboratory emissivity. In addition, an experiment was carried out to demonstrate that the physically retrieved dynamic emissivity of the MYD21 product can be utilized to improve the accuracy of the split-window (SW) algorithm for barren surfaces, making it a valuable data source for retrieving LST from different remote sensing data.
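A minimal sketch of the accuracy metrics used throughout such validations (bias and RMSE of satellite LST against a reference), with toy numbers; it is not the validation code of the study.

```python
# Hypothetical sketch: bias and RMSE of satellite LST retrievals against reference
# LST values, with toy numbers imitating a product that runs about 2 K cold.
import numpy as np

def bias_rmse(lst_satellite, lst_reference):
    diff = np.asarray(lst_satellite) - np.asarray(lst_reference)
    return diff.mean(), np.sqrt(np.mean(diff ** 2))

ref = np.array([300.1, 310.4, 295.7, 305.2])          # reference LST (K)
sat = ref - 2.0 + np.random.default_rng(7).normal(0.0, 1.0, ref.size)
print(bias_rmse(sat, ref))                            # bias near -2 K, RMSE near 2 K
```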
Due to the tradeoff between spatial and temporal resolutions commonly encountered in remote sensing, no single satellite sensor can provide fine-spatial-resolution land surface temperature (LST) products with frequent coverage. This situation greatly limits applications that require LST data with fine spatiotemporal resolution. Here, a deep learning-based spatiotemporal temperature fusion network (STTFN) method for the generation of fine spatiotemporal resolution LST products is proposed. In STTFN, a multiscale fusion convolutional neural network is employed to build the complex nonlinear relationship between the input and output LSTs. Thus, unlike other LST spatiotemporal fusion approaches, STTFN is able to learn potentially complicated relationships from training data without manually designed mathematical rules, making it more flexible and intelligent than other methods. In addition, two target fine-spatial-resolution LST images are predicted and then integrated by a spatiotemporal-consistency (STC) weighting function to take advantage of the STC of LST data. A set of analyses using two real LST data sets obtained from Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS) was undertaken to evaluate the ability of STTFN to generate fine spatiotemporal resolution LST products. The results show that, compared with three classic fusion methods [the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), the spatiotemporal integrated temperature fusion model (STITFM), and the two-stream convolutional neural network for spatiotemporal image fusion (StfNet)], the proposed network produced the most accurate outputs [average root mean square error (RMSE) < 1.40 °C and average structural similarity (SSIM) > 0.971].
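A minimal sketch of one plausible spatiotemporal-consistency weighting rule, trusting each prediction more where the coarse LST changed less between its base date and the prediction date; STTFN's actual STC-weighting function may differ.

```python
# Hypothetical sketch: fuse two fine-resolution LST predictions with weights
# inversely proportional to the coarse-resolution temperature change.
import numpy as np

def stc_fuse(pred1, pred2, coarse_change1, coarse_change2, eps=1e-6):
    w1 = 1.0 / (np.abs(coarse_change1) + eps)
    w2 = 1.0 / (np.abs(coarse_change2) + eps)
    return (w1 * pred1 + w2 * pred2) / (w1 + w2)

shape = (4, 4)
pred1, pred2 = np.full(shape, 301.0), np.full(shape, 303.0)    # two predictions (K)
change1, change2 = np.full(shape, 0.5), np.full(shape, 2.0)    # coarse LST change (K)
print(stc_fuse(pred1, pred2, change1, change2)[0, 0])          # closer to 301 K
```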
This article describes the first results obtained from the Surface Waves Investigation and Monitoring (SWIM) instrument carried by the China France Oceanography Satellite (CFOSAT), which was launched on October 29, 2018. SWIM is a Ku-band radar with a near-nadir scanning beam geometry designed to measure the spectral properties of surface ocean waves. First, the good in-flight behavior of the instrument is illustrated. It is then shown that the nadir products (significant wave height, normalized radar cross section, and wind speed) exhibit an accuracy similar to that of standard altimeter missions, thanks to a new retracking algorithm that compensates for a lower sampling rate compared with standard altimetry missions. The off-nadir beam observations are analyzed in detail. The normalized radar cross section varies with incidence and wind speed as expected from previous studies in the literature. We show that, in order to retrieve the wave spectra from the radar backscattering fluctuations, it is crucial to apply a speckle correction derived from the observations. Directional spectra of ocean waves and their mean parameters are then compared with wave model data at the global scale and with in situ data from a selection of case studies. The effectiveness of SWIM in providing the spectral properties of ocean waves in the 70–500-m wavelength range is illustrated. The main limitations are discussed, and perspectives for improving the data quality are presented.
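A minimal sketch of the general idea of a speckle correction, subtracting a white noise floor estimated from the high-wavenumber tail of a synthetic fluctuation spectrum; SWIM's observation-derived correction is not reproduced.

```python
# Hypothetical sketch: estimate a speckle noise floor from the high-wavenumber tail
# of a backscatter-fluctuation spectrum and subtract it before wave retrieval.
import numpy as np

rng = np.random.default_rng(6)
k = np.linspace(0.005, 0.2, 400)                     # wavenumber axis (rad/m)
wave_peak = 2 * np.pi / 150.0                        # 150-m dominant wavelength
signal = np.exp(-((k - wave_peak) / 0.01) ** 2)      # synthetic wave spectrum
measured = signal + 0.2 + 0.02 * rng.standard_normal(k.size)   # plus a speckle floor

noise_floor = measured[k > 0.15].mean()              # estimate from the spectral tail
corrected = np.clip(measured - noise_floor, 0.0, None)
print(k[np.argmax(corrected)], wave_peak)            # peak wavenumber is preserved
```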
The detection of small metallic objects buried in mineralized soil poses a challenge for metal detectors, especially when the response from the metallic objects is orders of magnitude below the response from the soil. This article describes a new handheld detector system based on magnetic induction spectroscopy (MIS), which can be used to detect buried metallic objects even in challenging soil conditions. Experimental results consisting of 1669 passes across either buried objects or empty soil are presented. Fourteen objects were buried at three different depths in three types of soil, including nonmineralized and mineralized soils. A novel processing algorithm is proposed to demonstrate how spectroscopy can be used to detect metallic objects in mineralized soils. The algorithm is robust across all types of soil, objects, and depths used in this experiment and achieves a true-positive rate of over 99% at a false-positive rate of less than 5%, based on just a single pass over the object. It is also shown that the algorithm does not have to be trained separately for each soil type. The data gathered in the experiment are also published to enable further research on processing algorithms for MIS-based detectors.
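A minimal sketch of one plausible spectroscopy-based detector, projecting out the soil response from a multifrequency measurement and thresholding the residual energy; the paper's actual algorithm is not reproduced.

```python
# Hypothetical sketch: remove the component of each multifrequency MIS measurement
# that lies along the soil response and flag passes with large residual energy.
import numpy as np

def detect(measurement, soil_response, threshold):
    s = soil_response / np.linalg.norm(soil_response)
    residual = measurement - (measurement @ s) * s     # project out the soil term
    return np.linalg.norm(residual) > threshold

soil = np.array([1.0, 0.9, 0.8, 0.7, 0.6])             # soil spectrum (arbitrary units)
empty_pass = 3.0 * soil + 0.01 * np.random.default_rng(8).standard_normal(5)
metal_pass = 3.0 * soil + np.array([0.0, 0.1, 0.3, 0.4, 0.2])
print(detect(empty_pass, soil, 0.1), detect(metal_pass, soil, 0.1))   # False True
```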