Traditionally, clean reference images are needed to train the networks when applying deep learning techniques to image denoising tasks. However, this idea is impracticable for the task of synthetic aperture radar (SAR) image despeckling, since no real-world speckle-free SAR data exist. To address this issue, this article presents a noisy-reference-based SAR deep learning filter that uses complementary images of the same area acquired at different times as the training references. In the proposed method, parameter-sharing convolutional neural networks are employed to better exploit the information in the images. Furthermore, to mitigate the training errors caused by land-cover changes between acquisition times, the similarity of each pixel pair between the different images is utilized to optimize the training process. The outstanding despeckling performance of the proposed method was confirmed by experiments conducted on several multitemporal data sets, in comparison with state-of-the-art SAR despeckling techniques. In addition, the proposed method shows good generalization ability on single-temporal data sets, even though the networks are trained with a limited number of input-reference image pairs from a different imaging area.
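To make the similarity-guided training concrete, a minimal sketch of how a per-pixel similarity between the two acquisition dates could down-weight change-affected pixels in the loss is given below; the weighting function, scale parameter, and names are illustrative assumptions, not the exact formulation of the article.

```python
# Hedged sketch: similarity-weighted loss for noisy-reference despeckling.
# Pixels whose temporal pair differs strongly (likely land-cover change)
# contribute less to the training error. Weighting function and scale are
# illustrative assumptions.
import numpy as np

def similarity_weights(img_t1, img_t2, scale=1.0):
    """Per-pixel weights that decay with the difference between the two dates."""
    return np.exp(-np.abs(img_t1 - img_t2) / scale)

def weighted_l2_loss(prediction, reference, weights):
    """L2 loss in which change-affected pixels are down-weighted."""
    return np.sum(weights * (prediction - reference) ** 2) / np.sum(weights)
```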
Urban road extraction has wide applications in public transportation systems and unmanned vehicle navigation. High-resolution remote sensing images contain background clutter, and roads exhibit large appearance differences and complex connectivity, which makes road extraction a very challenging task. In this article, we propose a novel end-to-end deep learning model for road area extraction from remote sensing images. Road features are learned at three levels, which removes the distraction of the background and enhances feature representation. A direction-aware attention block is introduced into the deep learning model to preserve road topologies. We compare our method with other related methods on public remote sensing data sets. The experimental results show the superiority of our method in terms of road extraction and connectivity preservation.
Imaging hydraulic fractures is of paramount importance to subsurface resource extraction, geologic storage, and hazardous waste disposal. The use of electrically conductive proppants and a current-energized steel casing provides a promising approach to monitoring the distribution of fractures. In this article, a borehole-to-surface system is employed to energize the steel casing and measure electric and magnetic fields on the ground. A convolutional neural network (CNN) is then trained to learn the relationship between the measured field pattern and the parameterized fracture, namely, its lateral extent and direction. To accelerate the generation of training data with limited accuracy loss, an approximate hollow casing is modeled by the impedance transition boundary condition with a tenfold magnified radius and reduced conductivity. Two training strategies are then presented with a grid search of the network's hyperparameters. The well-trained CNN shows good generalization to unseen fracture conductivity, the true casing model, and white Gaussian noise. Finally, we apply the CNN to image irregular fractures and obtain reliable results even under strong noise, indicating a promising imaging technique for more complicated fractures.
Hyperspectral image (HSI) unmixing is an important research issue due to its effect on the subsequent processing of HSIs. Recently, sparse regression methods with spatial information have been successfully applied to hyperspectral unmixing (HU). However, most sparse regression methods ignore differences in spatial structure by handling all regions with a single sparsity constraint. In fact, the pixels in detail regions are more likely to be severely mixed, with more endmembers involved, so the sparsity of their corresponding abundances is relatively low. Considering this difference in abundance sparsity, a sketch-based region-adaptive sparse unmixing method for HSIs is proposed in this article. Inspired by vision computing theory, we use a region generation algorithm based on a sketch map to differentiate homogeneous regions from detail regions. Then, the abundances of these two kinds of regions are separately constrained by ${L}_{1/2}$ and ${L}_{1}$ sparse regularizers together with a proposed manifold constraint. Our method not only makes full use of the spatial information in HSIs but also exploits the latent structure of the data. The encouraging experimental results on three data sets validate the effectiveness of our method for HU.
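One compact way to read the region-adaptive formulation is as a single objective in which the two region types receive different sparsity penalties; the weights and the graph-Laplacian form of the manifold term below are assumptions for illustration, not the article's precise model.

```latex
% Y: observed HSI pixels, A: endmember library, X: abundance matrix,
% X_H / X_D: abundance columns in homogeneous / detail regions,
% L: graph Laplacian encoding the manifold constraint.
% Weights \lambda_H, \lambda_D, \mu are assumed.
\min_{X \ge 0} \; \tfrac{1}{2}\,\lVert Y - A X \rVert_F^2
  + \lambda_H \lVert X_H \rVert_{1/2}
  + \lambda_D \lVert X_D \rVert_{1}
  + \mu\, \mathrm{tr}\!\left( X L X^{\top} \right)
```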
Ground-penetrating radar inspection of vertical structures, such as columns or pillars, is relevant in several application contexts. Unlike conventional subsurface prospecting, where the medium is accessible only from one side, columns can be probed from various sides, with measurement domains possibly encircling the structure. This makes it possible to retrieve more information about the scene, thanks to increased view and data collection diversity. This article proposes an imaging approach for structures probed all around via vertical scans. The approach formulates imaging as a full 3-D electromagnetic inverse scattering problem and accounts for the vectorial nature of the scattering phenomenon. Moreover, the imaging approach is based on an approximate model of scattering, and the inversion is regularized by means of the truncated singular value decomposition to produce stable and accurate results. The reconstruction capabilities of the proposed imaging approach are evaluated in terms of the achievable spatial resolution. To this end, a numerical analysis exploiting synthetic data investigates how the imaging quality depends on the number of vertical scans. Reconstruction results obtained from data gathered under controlled conditions provide an experimental assessment of the achievable imaging capabilities.
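As an illustration of the regularized inversion step, a minimal truncated SVD sketch over an assumed discretized linear scattering operator is shown below; the operator, dimensions, and truncation index are placeholders rather than the article's actual setup.

```python
# Hedged sketch of truncated SVD (TSVD) inversion for a discretized linear
# scattering operator A mapping contrast unknowns x to scattered-field data y.
# The operator, sizes, and truncation index are illustrative only.
import numpy as np

def tsvd_inverse(A, y, n_terms):
    """Keep only the first n_terms singular triplets to stabilize the inversion."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U[:, :n_terms].conj().T @ y) / s[:n_terms]
    return Vt[:n_terms].conj().T @ coeffs

# Toy usage with a synthetic operator and noisy data.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 150))
x_true = rng.standard_normal(150)
y = A @ x_true + 0.01 * rng.standard_normal(200)
x_rec = tsvd_inverse(A, y, n_terms=50)
```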
Incoherent noise is one of the most common types of noise in seismic data. To improve the interpretation accuracy of the underground structure, incoherent noise needs to be adequately suppressed before the final imaging. We propose a novel method for suppressing seismic incoherent noise based on robust low-rank approximation. After Hankelization, seismic data exhibit strong low-rank features. Our goal is to obtain a stable and accurate low-rank approximation of the Hankel matrix and then reconstruct the denoised data. We construct a mixed model of the nuclear norm and the $l_{1}$ norm to express the low-rank approximation of the Hankel matrix constructed in the frequency domain. Essentially, the adopted model optimizes a subspace in a manner similar to the online subspace tracking method, thus avoiding the time-consuming singular value decomposition (SVD). We introduce orthonormal subspace learning to convert the nuclear norm to the $l_{1}$ norm and optimize the orthonormal subspace and the corresponding coefficients. Finally, two optimization strategies, the alternating direction method and the block coordinate descent method, are applied to obtain the optimized orthonormal subspace and the corresponding coefficients representing the low-rank approximation of the Hankel matrix. We perform incoherent noise attenuation tests on synthetic and real seismic data. Compared with other denoising methods, the proposed method produces small signal errors while effectively suppressing seismic incoherent noise, and it has high computational efficiency.
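One way to write such a mixed nuclear-norm model and its orthonormal-subspace reformulation is sketched below; the exact penalty (orthonormal subspace learning methods often use a row-wise $l_{1}$ norm) and the weights may differ from the article.

```latex
% H: Hankel matrix of one frequency slice, L: its low-rank approximation.
% Factoring L = Q C with an orthonormal subspace Q (Q^T Q = I) allows the
% nuclear norm to be replaced by an l1-type penalty on the coefficients C,
% as stated in the abstract. The weight \lambda is an assumption.
\min_{L} \; \tfrac{1}{2}\,\lVert H - L \rVert_F^2 + \lambda \lVert L \rVert_{*}
\;\;\Longrightarrow\;\;
\min_{Q,\,C} \; \tfrac{1}{2}\,\lVert H - Q C \rVert_F^2 + \lambda \lVert C \rVert_{1}
\quad \text{s.t.}\; Q^{\top} Q = I
```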
Training with a small number of labeled samples can save considerable manpower and material resources, especially as the volume of high spatial resolution remote sensing images (HSR-RSIs) increases considerably. However, many deep models face the problem of overfitting when trained with a small number of labeled samples, which can degrade HSR-RSI retrieval accuracy. Aiming at more accurate HSR-RSI retrieval with small training samples, we develop a deep metric learning approach with generative adversarial network regularization (DML-GANR) for HSR-RSI retrieval. DML-GANR starts with a high-level feature extraction (HFE) module, which includes convolutional layers and fully connected (FC) layers. Each of the FC layers is constructed with deep metric learning (DML) to maximize the interclass variations and minimize the intraclass variations. A generative adversarial network (GAN) is adopted to mitigate the overfitting problem and validate the quality of the extracted high-level features. DML-GANR is optimized through a customized approach, and the optimal parameters are obtained. The experimental results on three data sets demonstrate the superior performance of DML-GANR over state-of-the-art techniques in HSR-RSI retrieval.
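A generic metric-learning term of the kind described, pushing interclass pairs apart while pulling intraclass pairs together, can be written as a contrastive-style loss; this is a standard textbook formulation for illustration, not necessarily the exact DML loss used in DML-GANR.

```latex
% f_i: FC-layer feature of sample i, y_i: its label, m: a margin (assumed).
\mathcal{L}_{\mathrm{DML}} = \sum_{i,j}\Big[
  \mathbb{1}[y_i = y_j]\,\lVert f_i - f_j \rVert_2^2
  + \mathbb{1}[y_i \neq y_j]\,\max\!\big(0,\; m - \lVert f_i - f_j \rVert_2\big)^2
\Big]
```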
With the development of convolutional neural networks (CNNs), the semantic understanding of remote sensing (RS) scenes has been significantly improved based on their prominent feature encoding capabilities. While many existing deep-learning models focus on designing different architectures, only a few works in the RS field have investigated the performance of the learned feature embeddings and the associated metric space. In particular, two main loss functions have been exploited: the contrastive loss and the triplet loss. However, the straightforward application of these techniques to RS images may not be optimal for capturing their neighborhood structures in the metric space, due to the insufficient sampling of image pairs or triplets during the training stage and to the inherent semantic complexity of remotely sensed data. To solve these problems, we propose a new deep metric learning approach that overcomes this limitation on class discrimination by means of two different components: 1) scalable neighborhood component analysis (SNCA), which aims at discovering the neighborhood structure in the metric space, and 2) the cross-entropy loss, which aims at preserving the class discrimination capability based on the learned class prototypes. Moreover, in order to preserve feature consistency among all the minibatches during training, a novel optimization mechanism based on momentum update is introduced for minimizing the proposed loss. An extensive experimental comparison (using several state-of-the-art models and two different benchmark data sets) has been conducted to validate the effectiveness of the proposed method from different perspectives, including: 1) classification; 2) clustering; and 3) image retrieval. The related codes of this article will be made publicly available for reproducible research by the community.
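For reference, the SNCA term and a prototype-based cross-entropy term combine as sketched below; the temperature, combination weight, and notation follow the standard SNCA formulation and are assumptions rather than the article's exact choices.

```latex
% v_i: L2-normalized embedding of image i, \Omega_i: images sharing its class,
% \tau: temperature, w_c: learned prototype of class c, \alpha: assumed weight.
p_{ij} = \frac{\exp(v_i^{\top} v_j / \tau)}{\sum_{k \neq i} \exp(v_i^{\top} v_k / \tau)},\qquad
\mathcal{L}_{\mathrm{SNCA}} = -\sum_i \log \!\!\sum_{j \in \Omega_i \setminus \{i\}}\!\! p_{ij},\qquad
\mathcal{L} = \mathcal{L}_{\mathrm{SNCA}} + \alpha\, \mathcal{L}_{\mathrm{CE}}
```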
Accurate and up-to-date road maps are of great importance in a wide range of applications. Unfortunately, automatic road extraction from high-resolution remote sensing images remains challenging due to occlusion by trees and buildings, the poor discriminability of roads, and complex backgrounds. To address these problems, especially road connectivity and completeness, in this article, we introduce a novel deep learning-based multistage framework to accurately extract the road surface and road centerline simultaneously. Our framework consists of three steps: boosting segmentation, multiple starting points tracing, and fusion. The initial road surface segmentation is achieved with a fully convolutional network (FCN), after which another, lighter FCN is applied several times to boost the accuracy and connectivity of the initial segmentation. In the multiple starting points tracing step, the starting points are automatically generated by extracting the road intersections of the segmentation results, which are then utilized to track consecutive and complete road networks through an iterative search strategy embedded in a convolutional neural network (CNN). The fusion step aggregates the semantic and topological information of road networks by combining the segmentation and tracing results to produce the final, refined road segmentation and centerline maps. We evaluated our method on three data sets covering various road situations in more than 40 cities around the world. The results demonstrate the superior performance of our proposed framework. Specifically, our method exceeded the other methods by 7% on the connectivity indicator for road surface segmentation and by 40% on the completeness indicator for centerline extraction.
We present a machine learning approach to classify the phases of surface wave dispersion curves. Standard frequency-time analysis (FTAN) of seismograms observed on an array of receivers is converted into an image, of which each pixel is classified as fundamental mode, first overtone, or noise. We use a convolutional neural network (U-Net) architecture with a supervised learning objective and incorporate transfer learning. The training is initially performed with synthetic data to learn coarse structure, followed by fine-tuning of the network using approximately 10% of the real data based on human classification. The results show that the machine classification is nearly identical to the human-picked phases. Expanding the method to process multiple images at once did not improve the performance. The developed technique will facilitate the automated processing of large dispersion curve data sets.
Spatial regularization has proved to be an effective method for alleviating the boundary effect and boosting the performance of a discriminative correlation filter (DCF) in aerial visual object tracking. However, existing spatial regularization methods usually treat the regularizer as a supplementary term apart from the main regression and neglect to regularize the filter involved in the correlation operation. To address this issue, this article introduces a novel object saliency-aware dual regularized correlation filter, i.e., DRCF. Specifically, the proposed DRCF tracker uses a dual regularization strategy to directly regularize the filter involved in the correlation operation inside the core of the filter-generating ridge regression. This allows the DRCF tracker to suppress the boundary effect and consequently enhance the performance of the tracker. Furthermore, an efficient method based on a saliency detection algorithm is employed to generate the dual regularizers dynamically and provide them with online adjusting ability. This enables the generated dynamic regularizers to automatically discern the object from the background and actively regularize the filter to accentuate the object during its unpredictable appearance changes. By merit of the dual regularization strategy and the saliency-aware dynamic regularizers, the proposed DRCF tracker performs favorably in terms of suppressing the boundary effect, penalizing irrelevant background noise coefficients, and boosting the overall performance of the tracker. Exhaustive evaluations on 193 challenging video sequences from multiple well-known aerial object tracking benchmarks validate the accuracy and robustness of the proposed DRCF tracker against 27 other state-of-the-art methods. Meanwhile, the proposed tracker runs at 38.4 frames/s on a single CPU, which is sufficient for real-time aerial tracking applications.
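Read against the standard DCF ridge regression, the dual regularization described above amounts to one regularizer applied to the filter inside the correlation and a second one in the penalty term; the notation below is a hedged sketch in the spirit of spatially regularized correlation filters, not the exact DRCF objective.

```latex
% x^d: d-th feature channel, y: regression target, f^d: filter for channel d,
% w, v: saliency-aware dual regularizers, \ast: circular correlation,
% \odot: elementwise product, \lambda: assumed weight.
\min_{f}\; \tfrac{1}{2}\Big\lVert y - \sum_{d=1}^{D} x^{d} \ast \big(w \odot f^{d}\big) \Big\rVert_2^2
  + \tfrac{\lambda}{2} \sum_{d=1}^{D} \big\lVert v \odot f^{d} \big\rVert_2^2
```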
Most existing endmember extraction techniques require prior knowledge about the number of endmembers in a hyperspectral image. The number of endmembers is normally estimated by a separate procedure, whose accuracy has a large influence on the endmember extraction performance. In order to bridge the two seemingly independent but, in fact, highly correlated procedures, we develop a new endmember estimation strategy that simultaneously counts and extracts endmembers. We consider a hyperspectral image as a hyperspectral pixel set and define the subset of pixels that are most different from one another as the divergent subset (DS) of the hyperspectral pixel set. The DS is characterized by the condition that any additional pixel would increase the likeness within the DS and, thus, reduce its divergent degree. We use the DS as the endmember set, with the number of endmembers being the subset cardinality. To render a practical computation scheme for identifying the DS, we reformulate it as a quadratic optimization problem with a numerical solution. In addition to operating as an endmember estimation algorithm by itself, the DS method can also cooperate with existing endmember extraction techniques by transforming them into novel and more effective schemes. Experimental results validate the effectiveness of the DS methodology in simultaneously counting and extracting endmembers, not only as an individual algorithm but also as a foundation for improving existing methods. Our full code is released for public evaluation at https://github.com/xuanwentao/DivergentSubset.
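One way to read the quadratic-optimization view is to maximize the average pairwise dissimilarity over the probability simplex and take the support of the maximizer as the DS; the dissimilarity measure, the replicator-dynamics solver, and the support rule below are illustrative assumptions, not the released implementation. The cardinality of the returned support then plays the role of the estimated number of endmembers.

```python
# Hedged sketch of a quadratic-program reading of the divergent subset (DS).
# pixels: (N, B) array of N hyperspectral pixels with B bands.
# D, the replicator-dynamics iteration, and the support rule are assumptions.
import numpy as np

def divergent_subset(pixels, n_iter=500, tol=1e-9):
    # Pairwise spectral dissimilarity between all pixels.
    D = np.linalg.norm(pixels[:, None, :] - pixels[None, :, :], axis=-1)
    n = len(pixels)
    x = np.full(n, 1.0 / n)                 # start from the simplex barycenter
    for _ in range(n_iter):                 # maximize x^T D x over the simplex
        x_new = x * (D @ x)
        x_new /= x_new.sum()
        if np.abs(x_new - x).max() < tol:
            break
        x = x_new
    return np.flatnonzero(x > 1.0 / n)      # support = candidate endmember indices

# Toy usage on random pixels.
rng = np.random.default_rng(0)
candidate_indices = divergent_subset(rng.random((50, 10)))
```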
The Landsat 1-5 multispectral scanner system (MSS) collected records of the land surface mainly during 1972-1992. Investigations of the MSS have been relatively limited compared with the numerous investigations of its successors, such as the Thematic Mapper (TM) and Enhanced TM Plus (ETM+). The benefits of the Landsat program cannot be fully realized without the inclusion of the MSS archives. Investigations of the Landsat 1-5 MSS channel reflectance characteristics were performed, followed by investigations of the derived vegetation spectral indices and the Tasseled Cap (TC) transformed features, mainly using a collection of synthesized records. On average, the Landsat 4 MSS is generally comparable to the Landsat 5 MSS. The Landsat 1-3 MSSs show disagreement in channel reflectance compared with the Landsat 5 MSS, especially for the red channel (600-700 nm) and the near-infrared channel (700-800 nm). Meanwhile, the relative differences in the vegetation spectral indices of the Landsat 3 MSS are mainly from -16% to -5%, with the median at about -11.5%, while those of the Landsat 2 MSS are mainly from -15% to -7%. Cross-validation tests and two case applications suggested that between-sensor consistency was generally improved through transformation models generated by ordinary least-squares regression. To improve the consistency of the vegetation indices and the TC greenness, a direct strategy employing the respective transformation models was more effective than calculations based on the transformed channel reflectance. Considering the limitations of the Landsat MSS archives, further efforts are needed to improve their comparability with observations from other successive Landsat sensors.
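A minimal sketch of an ordinary least-squares transformation model harmonizing an MSS-derived quantity toward its Landsat 5 MSS counterpart is given below; the coefficients and example values are placeholders, not those reported in the article.

```python
# Hedged sketch: OLS transformation model harmonizing a Landsat 1-3 MSS
# vegetation index toward Landsat 5 MSS. Values below are illustrative only.
import numpy as np

def fit_ols(x, y):
    """Fit y ≈ a*x + b by ordinary least squares and return (a, b)."""
    a, b = np.polyfit(x, y, deg=1)
    return a, b

# Toy usage: adjust a Landsat 3 MSS NDVI series toward Landsat 5 MSS NDVI.
ndvi_l3 = np.array([0.21, 0.35, 0.48, 0.62])
ndvi_l5 = np.array([0.25, 0.40, 0.54, 0.69])
a, b = fit_ols(ndvi_l3, ndvi_l5)
ndvi_l3_adjusted = a * ndvi_l3 + b
```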
Correcting the impact of the isotope composition on the mixing ratio dependency of water vapour isotope measurements with cavity ring-down spectrometers
Yongbiao Weng, Alexandra Touzeau, and Harald Sodemann
Atmos. Meas. Tech., 13, 3167–3190, https://doi.org/10.5194/amt-13-3167-2020, 2020
We find that the known mixing ratio dependence of laser spectrometers for water vapour isotope measurements varies with isotope composition. We have developed a scheme to correct for this isotope-composition-dependent bias. The correction is most substantial at low mixing ratios. Stability tests indicate that the first-order dependency is a constant instrument characteristic. Water vapour isotope measurements at low mixing ratios can now be corrected by following our proposed procedure.
An extended radar relative calibration adjustment (eRCA) technique for higher-frequency radars and range–height indicator (RHI) scans
Alexis Hunzinger, Joseph C. Hardin, Nitin Bharadwaj, Adam Varble, and Alyssa Matthews
Atmos. Meas. Tech., 13, 3147–3166, https://doi.org/10.5194/amt-13-3147-2020, 2020
Calibration is one of the dominant sources of error hindering the use of weather radars. This work takes a technique for tracking changes in radar calibration using ground clutter and extends it to higher-frequency research radars. It demonstrates that, after modifications, the technique is successful, but that special care needs to be taken in its application at high frequencies. The technique is verified using data from multiple DOE ARM field campaigns.