IEEE Transactions on Geoscience and Remote Sensing

CFAR Detection Based on Adaptive Tight Frame and Weighted Group-Sparsity Regularization for OTHR

Mon, 03/01/2021 - 00:00
In high-frequency over-the-horizon radar (OTHR), detecting targets in a nonhomogeneous range-Doppler (RD) map with multitarget interference and sharp/smooth clutter edges is challenging. The intensity transition of a clutter edge may be sharp or smooth due to the coexistence of atmospheric noise, sea clutter, and ionospheric clutter in OTHR. Analysis of the RD map shows spatial correlation among neighboring cells under test (CUTs) that varies from clutter to clutter. This article proposes an algorithm that uses this spatial relationship to estimate the statistical distribution parameters of every CUT via an adaptive tight frame and weighted group-sparsity regularization. In the proposed algorithm, the spatial relationship is formulated mathematically by regularization terms and combined with the log-likelihood function of the CUTs to construct the objective function. The proposed algorithm is verified on simulated data and on real RD maps collected from trial sky-wave and surface-wave OTHRs, on which it shows robust and improved detection.
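
As a rough illustration of the kind of objective described above, the following sketch combines a per-cell log-likelihood with a weighted group-sparsity penalty over spatial neighborhoods; the Gaussian clutter model, the parameterization, and all variable names are assumptions rather than the article's actual formulation.

```python
import numpy as np

def objective(theta, cells, groups, weights, lam):
    """Toy objective of the form sketched in the abstract: a per-cell
    Gaussian negative log-likelihood plus a weighted group-sparsity
    penalty that ties parameters of spatially neighboring cells.
    theta: (n_cells, 2) array of (mean, log-variance) per cell;
    groups: list of index arrays of neighboring cells (hypothetical)."""
    mu, logvar = theta[:, 0], theta[:, 1]
    var = np.exp(logvar)
    nll = 0.5 * np.sum(logvar + (cells - mu) ** 2 / var)
    penalty = sum(w * np.linalg.norm(theta[g] - theta[g].mean(axis=0))
                  for g, w in zip(groups, weights))
    return nll + lam * penalty
```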

Bistatic-Range-Doppler-Aperture Wavenumber Algorithm for Forward-Looking Spotlight SAR With Stationary Transmitter and Maneuvering Receiver

Mon, 03/01/2021 - 00:00
Bistatic forward-looking spotlight synthetic aperture radar with stationary transmitter and maneuvering receiver (STMR-BFSSAR) is a promising sensor for various applications, such as the automatic navigation and landing of maneuvering vehicles. Because of the bistatic forward-looking configuration and the receiver’s maneuvers, conventional image formation algorithms suffer from high computational complexity or a small well-focused scene size when applied to STMR-BFSSAR. In this article, we propose a wavenumber-domain algorithm for STMR-BFSSAR image formation, termed the bistatic-range-Doppler-aperture wavenumber algorithm (BDWA). First, a novel range model in bistatic-range and Doppler-aperture coordinate space, instead of conventional Cartesian coordinate space, is established by employing the elliptic polar coordinate system and the method of series reversion. The novel range model not only makes the echo's samples regular along the direction of the bistatic-range wavenumber axis but also constructs a curved wavefront close to the true wavefront. Second, an operation termed wavenumber-domain gridding is conceived to regularize the echo's samples along the Doppler-aperture wavenumber axis, which can be implemented by 1-D interpolation. The proposed algorithm significantly outperforms the conventional algorithms in terms of computational complexity and scene size limits. Both point and distributed targets are simulated for two STMR-BFSSAR systems with different parameters. The simulation results verify the validity and superiority of the proposed BDWA.
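
The wavenumber-domain gridding step amounts to a 1-D resampling onto a uniform wavenumber axis so that a subsequent FFT applies; a minimal sketch with toy echo samples (all names and values hypothetical):

```python
import numpy as np

# Resample echo samples from a nonuniform wavenumber axis onto a uniform
# grid; complex data are interpolated by real and imaginary parts.
rng = np.random.default_rng(0)
k_nonuniform = np.sort(rng.uniform(-50.0, 50.0, 512))   # measured axis
echo = np.exp(1j * 2.0 * k_nonuniform)                  # toy echo samples
k_uniform = np.linspace(k_nonuniform[0], k_nonuniform[-1], 512)
regridded = (np.interp(k_uniform, k_nonuniform, echo.real)
             + 1j * np.interp(k_uniform, k_nonuniform, echo.imag))
profile = np.fft.fftshift(np.fft.fft(regridded))        # focused 1-D line
```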

Integration of Rotation Estimation and High-Order Compensation for Ultrahigh-Resolution Microwave Photonic ISAR Imagery

Mon, 03/01/2021 - 00:00
The microwave photonic (MWP) radar technique is capable of providing ultrawide-bandwidth waveforms to generate ultrahigh-resolution (UHR) inverse synthetic aperture radar (ISAR) imagery. Nevertheless, conventional ISAR imaging algorithms have limitations in focusing UHR MWP-ISAR imagery, where high-precision correction of high-order range cell migration (RCM) and phase errors is essential. In this article, a UHR MWP-ISAR imaging algorithm integrating rotation estimation and high-order motion-term compensation is proposed. By establishing the relationship between the parametric ISAR rotation model and the high-order motion terms, an average range profile sharpness maximization (ARPSM) method is developed to estimate the rotation velocity using the nonuniform fast Fourier transform (NUFFT). Second-order range-dependent RCM is corrected with a parametric compensation model based on the estimated rotation velocity. Furthermore, the spatially variant high-order phase error is extracted and compensated by entire image sharpness maximization (EISM). A new imaging framework is thus established with two one-dimensional (1-D) parameter estimations: ARPSM and EISM. Extensive experiments demonstrate that the proposed algorithm outperforms traditional ISAR imaging strategies in high-order RCM correction and azimuth focusing performance.
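
Sharpness-maximization searches of this kind reduce to a scalar focus metric plus a 1-D parameter sweep; in this sketch, `focus` is a hypothetical helper standing in for rebuilding range profiles at a candidate rotation velocity (not the article's NUFFT-based pipeline):

```python
import numpy as np

def sharpness(profile):
    """Normalized sharpness of a (range) profile: sum of squared
    intensities divided by the squared total intensity."""
    p = np.abs(profile) ** 2
    return np.sum(p ** 2) / np.sum(p) ** 2

def estimate_rotation(echo, candidates, focus):
    """1-D grid search over candidate rotation velocities; `focus` is a
    hypothetical helper that rebuilds the average range profile for a
    candidate velocity."""
    scores = [sharpness(focus(echo, w)) for w in candidates]
    return candidates[int(np.argmax(scores))]
```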

Denoising Sentinel-1 Extra-Wide Mode Cross-Polarization Images Over Sea Ice

Mon, 03/01/2021 - 00:00
Sentinel-1 (S1) extra-wide (EW) swath data in cross-polarization (horizontal–vertical, HV, or vertical–horizontal, VH) are strongly affected by the scalloping effect and thermal noise, particularly over areas with weak backscattered signals, such as sea surfaces. Although noise vectors in both the azimuth and range directions are provided with the standard S1 EW data for subtraction, the residual thermal noise still significantly affects sea ice detection in EW data. In this article, we improve the denoising method developed in previous studies to remove the additive noise from S1 EW cross-polarization data. Furthermore, we propose a new method for eliminating the residual noise (i.e., multiplicative noise) at the subswath boundaries of the EW data, which cannot be handled by simply subtracting the reconstructed 2-D noise field. The proposed method of removing both the additive and multiplicative noise was applied to EW HV-polarized images processed with different Instrument Processing Facility (IPF) versions. The results suggest that the proposed algorithm significantly improves the quality of EW HV-polarized images under various sea ice conditions and sea states in the marginal ice zone (MIZ) of the Arctic. This provides strong support for the utilization of wide-swath cross-polarization synthetic aperture radar (SAR) images for intensive sea ice monitoring in polar regions.
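
The additive part of the correction amounts to subtracting the reconstructed 2-D noise field and clipping negative residuals; a minimal sketch, with the multiplicative subswath-boundary correction left out (array names hypothetical):

```python
import numpy as np

def remove_additive_noise(intensity, noise_field):
    """Subtract the reconstructed 2-D noise field from the measured
    intensity and clip negative residuals; the multiplicative correction
    at subswath boundaries would be applied afterwards."""
    return np.clip(intensity - noise_field, 0.0, None)
```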

Single-Look Multi-Master SAR Tomography: An Introduction

Mon, 03/01/2021 - 00:00
This article addresses the general problem of single-look multi-master SAR tomography. For this purpose, we establish the single-look multi-master data model, analyze its implications for the single and double scatterers, and propose a generic inversion framework. The core of this framework is the nonconvex sparse recovery, for which we develop two algorithms: one extends the conventional nonlinear least squares (NLS) to the single-look multi-master data model and the other is based on bi-convex relaxation and alternating minimization (BiCRAM). We provide two theorems for the objective function of the NLS subproblem, which lead to its analytic solution up to a constant phase angle in the 1-D case. We also report our findings from the experiments on different acceleration techniques for BiCRAM. The proposed algorithms are applied to a real TerraSAR-X data set and validated with the height ground truth made available by an SAR imaging geodesy and simulation framework. This shows empirically that the single-master approach, if applied to a single-look multi-master stack, can be insufficient for layover separation, and the multi-master approach can indeed perform slightly better (despite being computationally more expensive) even in the case of single scatterers. In addition, this article also sheds light on the special case of single-look bistatic SAR tomography, which is relevant for the current and future SAR missions such as TanDEM-X and Tandem-L.

Parametric Image Reconstruction for Edge Recovery From Synthetic Aperture Radar Echoes

Mon, 03/01/2021 - 00:00
The edges of a target provide essential geometric information and are extremely important for human visual perception and image recognition. However, due to the coherent superposition of received echoes, the continuous edges of targets are discretized in synthetic aperture radar (SAR) images, i.e., the edges become dispersed points, which seriously affects the extraction of visual and geometric information from SAR images. In this article, we focus on the problem of recovering smooth linear edges (SLEs). By introducing multiangle observations, we propose an SAR parametric image reconstruction method (SPIRM) that establishes a parametric framework to recover SLEs from SAR echoes. At the core of the SPIRM is a novel physical characteristic parameter called the scattering-phase-mutation feature (SPMF), which reveals the most essential difference between the residual endpoints of a vanished SLE and isolated points. Numerical simulations and real-data experiments demonstrate the robustness and effectiveness of the proposed method.

FEC: A Feature Fusion Framework for SAR Target Recognition Based on Electromagnetic Scattering Features and Deep CNN Features

Mon, 03/01/2021 - 00:00
The automatic recognition of targets of interest has been a vital issue for synthetic aperture radar (SAR) systems. SAR recognition methods are mainly grouped as follows: extracting image features from the target amplitude image, or matching the testing samples with template ones according to the scattering centers extracted from the target complex data. For amplitude-image-based methods, convolutional neural networks (CNNs) achieve nearly the highest accuracy for images acquired under standard operating conditions (SOCs), while scattering-center-feature-based methods achieve steady performance for images acquired under extended operating conditions (EOCs). To achieve target recognition with good performance under both SOCs and EOCs, a feature fusion framework (FEC) based on scattering center features and deep CNN features is proposed for the first time. For the scattering center features, we first extract the attributed scattering centers (ASCs) from the input SAR complex data, then construct a bag of visual words from these scattering centers, and finally transform the extracted parameter sets into feature vectors with $k$-means. For the CNN, we propose a modified VGGNet, which can not only extract powerful features from amplitude images but also achieve state-of-the-art recognition accuracy. For the feature fusion, discrimination correlation analysis (DCA) is introduced into the FEC framework, which not only maximizes the correlation between the CNN and ASC features but also decorrelates the features belonging to different categories within each feature set. Experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database demonstrate that the proposed FEC achieves superior effectiveness and robustness under both SOCs and EOCs.
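
The bag-of-visual-words step can be sketched as follows: cluster all extracted ASC parameter vectors with $k$-means, then histogram each image's ASCs over the learned codebook. The parameter layout and codebook size are assumptions of the sketch:

```python
import numpy as np
from sklearn.cluster import KMeans

def asc_bovw(asc_sets, k=64, seed=0):
    """Build fixed-length bag-of-visual-words histograms from
    variable-length ASC parameter sets (one (n_i, d) array per image)."""
    codebook = KMeans(n_clusters=k, random_state=seed).fit(np.vstack(asc_sets))
    feats = []
    for s in asc_sets:
        hist = np.bincount(codebook.predict(s), minlength=k).astype(float)
        feats.append(hist / max(hist.sum(), 1.0))   # normalized histogram
    return np.array(feats)
```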

Selective Adversarial Adaptation-Based Cross-Scene Change Detection Framework in Remote Sensing Images

Mon, 03/01/2021 - 00:00
Supervised change detection methods face a major challenge when the current scene (target domain) is fully unlabeled. In remote sensing, it is common that we have sufficient labels in another scene (source domain) with a different but related data distribution. In this article, we try to detect changes in the target domain with the help of prior knowledge learned from multiple source domains. To achieve this goal, we propose a change detection framework based on selective adversarial adaptation. The adaptation between the multisource and target domains is fulfilled by two domain discriminators. First, the first domain discriminator regards each scene as an individual domain and is designed to identify the domain to which each input sample belongs. According to the output of this discriminator, a subset of important samples is selected from the multisource domains to train a deep neural network (DNN)-based change detection model. As a result, positive transfer is enhanced and negative transfer is alleviated. Second, for the second domain discriminator, all the selected samples are treated as coming from a single domain. Adversarial learning is introduced to align the distributions of the selected source samples and the target ones. Consequently, it further adapts the knowledge of change from the source domain to the target one. At the fine-tuning stage, target samples with reliable labels and the selected source ones are used to jointly fine-tune the change detection model. As the target domain is fully unlabeled, homogeneity- and boundary-based strategies are exploited to make the pseudolabels from a preclassification map reliable. The proposed method is evaluated on three SAR and two optical data sets, and the experimental results demonstrate its effectiveness and superiority.
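
A simplified, binary stand-in for the first discriminator's selection role (the article uses a multidomain discriminator; the features and quantile threshold here are assumptions): train a source-vs-target classifier and keep the source samples that look most target-like.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_source_samples(source_feats, target_feats, quantile=0.5):
    """Train a domain discriminator and keep the source samples it rates
    most target-like, i.e., the most transferable ones; the quantile
    threshold is a free parameter of this sketch."""
    X = np.vstack([source_feats, target_feats])
    y = np.r_[np.zeros(len(source_feats)), np.ones(len(target_feats))]
    disc = LogisticRegression(max_iter=1000).fit(X, y)
    p_target = disc.predict_proba(source_feats)[:, 1]
    return np.where(p_target >= np.quantile(p_target, quantile))[0]
```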

Cluster-Based Empirical Tropospheric Corrections Applied to InSAR Time Series Analysis

Mon, 03/01/2021 - 00:00
Interferometric synthetic aperture radar (InSAR) allows for mapping of crustal deformation on land with high spatial resolution and precision in areas with high signal-to-noise ratios. Efforts to obtain precise displacement time series globally, however, are severely limited by radar path delays within the troposphere. The tropospheric delay is integrated along the full path length between the ground and the satellite, resulting in correlations between the interferometric phase and elevation that can vary dramatically in both space and time. We evaluate the performance of spatially variable, empirical removal of phase-elevation dependence within SAR interferograms through the use of the $K$-means clustering algorithm. We apply this method to both synthetic test data, as well as to C-band Sentinel-1a/b time series acquired over a large area in south-central Mexico along the Pacific coast and inland—an area with a large elevation gradient that is of particular interest to researchers studying tectonic- and anthropogenic-related deformation. We show that the clustering algorithm is able to identify cases where tropospheric properties vary across topographic divides, reducing total root mean square (rms) by an average of 50%, as opposed to a spatially constant phase-elevation correction, which has insignificant error reduction. Our approach also reduces tropospheric noise while preserving test signals in synthetic examples. Finally, we show the average standard deviation of the residuals from the best-fit linear rate decreases from approximately 3 to 1.5 cm, which corresponds to a change in the error on the best-fit linear rate from 0.94 to 0.63 cm/yr.
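
A minimal sketch of the clustered correction idea: partition pixels with $K$-means, fit a linear phase-elevation ramp per cluster, and remove it. The clustering features and cluster count are assumptions of the sketch:

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_phase_elevation_correction(phase, elevation, coords, k=4):
    """Cluster pixels (here on position plus elevation), fit
    phase = a * z + b within each cluster, and remove the fitted ramp.
    Inputs are flattened 1-D arrays; coords is (n, 2)."""
    features = np.column_stack([coords, elevation])
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(features)
    corrected = phase.copy()
    for c in range(k):
        m = labels == c
        a, b = np.polyfit(elevation[m], phase[m], deg=1)  # per-cluster fit
        corrected[m] -= a * elevation[m] + b
    return corrected
```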

A Multichannel Data Fusion Method to Enhance the Spatial Resolution of Microwave Radiometer Measurements

Mon, 03/01/2021 - 00:00
In this study, a method to improve the reconstruction performance of antenna-pattern deconvolution based on a gradient iterative regularization scheme is proposed. The method exploits microwave measurements acquired by a multichannel radiometer to enhance their native spatial resolution. The proposed rationale consists of using the information carried by a high-frequency (finer spatial resolution) channel to improve the spatial resolution of the lowest-resolution radiometer channel. Experiments performed using both synthetic and real special sensor microwave/imager (SSM/I) radiometer data demonstrate that an enhanced-resolution 19.35-GHz channel can be obtained by ingesting into the algorithm information from the 37.0-GHz channel. This multichannel spatial-resolution-enhancement method is also shown to outperform the conventional gradient-like regularization scheme in terms of both the observation of smaller targets and the reduction of ringing and fluctuations.
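
A gradient-iterative (Landweber-type) deconvolution baseline of the kind such schemes build on might look as follows; the multichannel injection of the finer-resolution channel is omitted, and the operator discretization is an assumption:

```python
import numpy as np

def landweber_deconvolve(y, A, n_iter=50):
    """Iterate x <- x + tau * A^T (y - A x) to minimize ||y - A x||^2,
    where rows of A sample the antenna pattern; tau is chosen from the
    spectral norm of A to guarantee convergence."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)
    return x
```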

Analysis and Correction of the Rank-Deficient Error for 2-D Mirrored Aperture Synthesis

Mon, 03/01/2021 - 00:00
In two-dimensional mirrored aperture synthesis (2-D MAS), the rank of the transformation matrix affects the accuracy of the solved cosine visibilities and, therefore, the accuracy of the reconstructed brightness temperature image. In this article, the influence of rank deficiency on the accuracy of the reconstructed brightness temperature image of 2-D MAS is discussed. An analysis of the rank-deficient error is performed by computing its impact on the radiometric accuracy for a reference scene. Two correction methods based on multiple measurements are proposed to correct the rank-deficient error. Simulations and experiments are carried out to demonstrate the effectiveness of the proposed methods.
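
To illustrate why multiple measurements help, the sketch below stacks several measurement systems so that the combined transformation matrix can reach full column rank before a least-squares solve; this shows the principle only, not the article's two correction methods:

```python
import numpy as np

def solve_cosine_visibilities(G_list, m_list):
    """Stack the transformation matrices and measurements from several
    acquisitions, check the combined column rank, and solve by least
    squares; names and shapes are assumptions of this sketch."""
    G = np.vstack(G_list)            # stacked transformation matrices
    m = np.concatenate(m_list)       # stacked measurements
    print("stacked rank:", np.linalg.matrix_rank(G), "of", G.shape[1])
    return np.linalg.lstsq(G, m, rcond=None)[0]
```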

Pan-Sharpening via Multiscale Dynamic Convolutional Neural Network

Mon, 03/01/2021 - 00:00
Pan-sharpening is an effective method to obtain high-resolution multispectral images by fusing panchromatic (PAN) images, which have fine spatial structure, with low-resolution multispectral images, which have rich spectral information. In this article, a multiscale pan-sharpening method based on a dynamic convolutional neural network is proposed. Unlike standard convolution, the filters in dynamic convolution are generated dynamically and locally by a filter generation network, which strengthens the adaptivity of the network; the dynamic filters change adaptively according to the input images. The proposed multiscale dynamic convolutions extract detail features of the PAN image at different scales, and the multiscale network structure is beneficial for obtaining effective detail features. The weights obtained by the weight generation network are used to adjust the relationship among the detail features at each scale. GeoEye-1, QuickBird, and WorldView-3 data are used to evaluate the performance of the proposed method. Compared with widely used state-of-the-art pan-sharpening approaches, the experimental results demonstrate the superiority of the proposed method in terms of both objective quality indexes and visual performance.
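
A minimal sketch of dynamic convolution: the k x k filter is regenerated at every pixel from the local input, with a plain function standing in for the filter generation network (the toy generator below is purely illustrative):

```python
import numpy as np

def dynamic_conv2d(image, filter_gen, k=3):
    """Apply an input-dependent k x k filter at every pixel; `filter_gen`
    maps a local patch to a filter and stands in for the filter
    generation network."""
    pad = k // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = np.sum(filter_gen(patch) * patch)
    return out

# Toy generator: a locally normalized, intensity-weighted filter.
smoothed = dynamic_conv2d(np.random.rand(32, 32), lambda p: p / p.sum())
```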

Class-Guided Feature Decoupling Network for Airborne Image Segmentation

Mon, 03/01/2021 - 00:00
Contextual information has been demonstrated to be helpful for airborne image segmentation. However, most previous works focus on the exploitation of spatially contextual information, which makes it difficult to segment isolated objects that are mainly surrounded by uncorrelated objects. To alleviate this issue, we attempt to take advantage of the co-occurrence relations between different classes of objects in the scene. Specifically, similar to other works, convolutional features are first extracted to capture the spatially contextual information. Then, a feature decoupling module is designed to encode the class co-occurrence relations into the convolutional features so that the most discriminative features can be decoupled. Finally, the segmentation result is inferred from the decoupled features. The whole process is integrated to form an end-to-end network, named the class-guided feature decoupling network (CGFDN). Experimental results on two widely used benchmark data sets show that CGFDN obtains competitive results (>90% overall accuracy (OA) on the 5-cm-resolution Potsdam data set and >91% OA on the 9-cm-resolution Vaihingen data set) in comparison with several state-of-the-art models.

Super-Resolution Mapping Based on Spatial–Spectral Correlation for Spectral Imagery

Mon, 03/01/2021 - 00:00
Due to the influence of imaging conditions, spectral imagery can be coarse and contain a large number of mixed pixels. These mixed pixels can lead to inaccuracies in land-cover class (LC) mapping. Super-resolution mapping (SRM) can be used to analyze such mixed pixels and obtain LC mapping information at the subpixel level. However, traditional SRM methods mostly rely on spatial correlation based on linear distance, which ignores the influence of nonlinear imaging conditions. In addition, spectral unmixing errors affect the accuracy of the utilized spectral properties. To overcome the influence of linear and nonlinear imaging conditions and utilize more accurate spectral properties, an SRM based on spatial–spectral correlation (SSC) is proposed in this work. Spatial correlation is obtained using the mixed spatial attraction model (MSAM) based on the linear Euclidean distance. In addition, a spectral correlation that utilizes spectral properties based on the nonlinear Kullback–Leibler distance (KLD) is proposed. Spatial and spectral correlations are combined to reduce the influence of linear and nonlinear imaging conditions, resulting in an improved mapping result. The utilized spectral properties are extracted directly from the spectral imagery, thus avoiding spectral unmixing errors. Experimental results on three spectral images show that the proposed SSC yields better mapping results than state-of-the-art methods.
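
The nonlinear spectral similarity can be sketched as a symmetric Kullback–Leibler distance between spectra normalized to probability-like vectors; treating spectra this way is an assumption of the sketch:

```python
import numpy as np

def kl_distance(s1, s2, eps=1e-12):
    """Symmetric Kullback-Leibler distance between two spectra that are
    normalized to behave like probability distributions."""
    p = s1 / (s1.sum() + eps) + eps
    q = s2 / (s2.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```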

Spectral Superresolution of Multispectral Imagery With Joint Sparse and Low-Rank Learning

Mon, 03/01/2021 - 00:00
Extensive attention has been paid to enhancing the spatial resolution of hyperspectral (HS) images with the aid of multispectral (MS) images in remote sensing. However, the ability to fuse HS and MS images remains limited, particularly in large-scale scenes, due to the limited acquisition of HS images. Alternatively, we super-resolve MS images in the spectral domain by means of partially overlapped HS images, yielding a novel and promising topic: spectral superresolution (SSR) of MS imagery. This is a challenging and less investigated task due to its high ill-posedness as an inverse imaging problem. To this end, we develop a simple but effective method, called joint sparse and low-rank learning (J-SLoL), to spectrally enhance MS images by jointly learning low-rank HS–MS dictionary pairs from the overlapped regions. J-SLoL infers and recovers the unknown HS signals over a larger coverage by sparse coding on the learned dictionary pair. We validate the SSR performance on three HS–MS data sets (two for classification and one for unmixing) in terms of reconstruction, classification, and unmixing by comparing with several existing state-of-the-art baselines, showing the effectiveness and superiority of the proposed J-SLoL algorithm. Furthermore, the codes and data sets will be available at https://github.com/danfenghong/IEEE_TGRS_J-SLoL, contributing to the remote sensing (RS) community.
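
A stripped-down coupled-dictionary sketch of the SSR idea (without J-SLoL's low-rank constraint; dictionary sizes and solvers are assumptions): learn an MS dictionary on the overlapped region, tie an HS dictionary to the same sparse codes, then decode new MS pixels as HS spectra.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def spectral_super_resolve(ms_overlap, hs_overlap, ms_new, n_atoms=32):
    """Learn an MS dictionary on the overlapped region, fit an HS
    dictionary to the same sparse codes by least squares, then sparse-code
    new MS pixels and decode them as HS spectra. Inputs are
    (pixels, bands) arrays."""
    dl = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=100)
    codes = dl.fit_transform(ms_overlap)                 # (n, n_atoms)
    D_ms = dl.components_                                # (n_atoms, bands_ms)
    D_hs = np.linalg.lstsq(codes, hs_overlap, rcond=None)[0]
    new_codes = sparse_encode(ms_new, D_ms, alpha=1.0)
    return new_codes @ D_hs                              # estimated HS pixels
```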

Hyperspectral Image Classification With Attention-Aided CNNs

Mon, 03/01/2021 - 00:00
Convolutional neural networks (CNNs) have been widely used for hyperspectral image classification. As a common process, small cubes are first cropped from the hyperspectral image and then fed into CNNs to extract spectral and spatial features. It is well known that different spectral bands and spatial positions in the cubes have different discriminative abilities. If fully explored, this prior information can help improve the learning capacity of CNNs. Along this direction, we propose an attention-aided CNN model for spectral–spatial classification of hyperspectral images. Specifically, a spectral attention subnetwork and a spatial attention subnetwork are proposed for spectral and spatial classification, respectively. Both are based on the traditional CNN model and incorporate attention modules that help the networks focus on more discriminative channels or positions. In the final classification phase, the spectral and spatial classification results are combined via an adaptively weighted summation method. To evaluate the effectiveness of the proposed model, we conduct experiments on three standard hyperspectral data sets. The experimental results show that the proposed model achieves superior performance compared with several state-of-the-art CNN-related models.
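
The fusion step can be sketched as a convex combination of the two subnetworks' class-probability maps; the article learns the weight adaptively, whereas this sketch uses a fixed scalar:

```python
import numpy as np

def fuse_predictions(p_spectral, p_spatial, w=0.5):
    """Combine the two subnetworks' class-probability maps as
    w * spectral + (1 - w) * spatial and take the argmax over the
    trailing class axis."""
    fused = w * p_spectral + (1.0 - w) * p_spatial
    return np.argmax(fused, axis=-1)
```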

Hyperspectral Image Classification via Spatial Window-Based Multiview Intact Feature Learning

Mon, 03/01/2021 - 00:00
Due to the high dimensionality of hyperspectral images (HSIs), more training samples are generally needed for better classification performance. However, surface materials cannot always provide sufficient training samples in practice, and HSI classification with small training sample sizes remains a challenging problem. Multiview learning is a feasible way to improve classification accuracy in the case of small training samples by combining information from different views. This article proposes a new spatial window-based multiview intact feature learning method (SWMIFL) for HSI classification. In the proposed SWMIFL, multiple features reflecting different information of the original image are extracted, and spatial windows are imposed on the training samples to select unlabeled samples. Then, multiview intact feature learning is performed to learn the intact feature of the training and unlabeled samples. Considering that neighboring samples are likely to belong to the same class, the labels of spatially neighboring samples are determined by two factors: the labels of the training samples located in the spatial window and the labels learned from the intact feature. Finally, unlabeled samples that have the same label under both factors are treated as new training samples. Experimental results demonstrate that the proposed SWMIFL-based classification method outperforms several well-known HSI classification methods on three real-world data sets.
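
The final selection rule reduces to an agreement check between the two label sources; a minimal sketch (input encodings assumed):

```python
import numpy as np

def select_new_training_samples(window_labels, intact_labels):
    """Keep only the unlabeled samples whose spatial-window label and
    intact-feature label agree, returning their indices and labels."""
    window_labels = np.asarray(window_labels)
    intact_labels = np.asarray(intact_labels)
    keep = window_labels == intact_labels
    return np.where(keep)[0], window_labels[keep]
```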

Hyperspectral Anomaly Detection via Image Super-Resolution Processing and Spatial Correlation

Mon, 03/01/2021 - 00:00
Anomaly detection is a key problem in hyperspectral image (HSI) analysis with important remote sensing applications. Traditional methods for hyperspectral anomaly detection are mostly based on the distinctive statistical features of HSIs. However, the anomaly-detection performance of these methods is negatively impacted by two major limitations: 1) failure to consider the spatial pixel correlation and the ground-object correlation and 2) the existence of mixed pixels caused by both lower spatial resolution and higher spectral resolution, which leads to higher false-alarm rates. In this article, these two problems are largely solved through a novel hyperspectral anomaly-detection method based on image super-resolution (SR) and spatial correlation. The proposed method encompasses two innovative ideas. First, based on the spectral variability of the anomaly targets, an extended linear mixing model is obtained with more accurate ground-object information; image SR is then used to improve the spatial resolution of the HSIs by injecting the ground-object information from the mixing model, which alleviates the effect of mixed pixels on anomaly detection. Second, spatial correlation is exploited jointly with the global Reed-Xiaoli (GRX) method and ground-object correlation detection for anomaly detection. Experimental results show that the proposed method not only effectively improves the hyperspectral spatial resolution and reduces the false-alarm rate but also increases detectability through the spatial correlation information. Furthermore, the results for real HSIs demonstrate that the proposed method achieves higher anomaly-detection rates with lower false-alarm rates.
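
The GRX stage is the classic global Reed-Xiaoli detector, i.e., the Mahalanobis distance of each pixel spectrum to global background statistics; a standard sketch:

```python
import numpy as np

def global_rx(hsi):
    """Global Reed-Xiaoli detector: squared Mahalanobis distance of each
    pixel spectrum to the global mean and covariance of the scene.
    hsi has shape (rows, cols, bands)."""
    X = hsi.reshape(-1, hsi.shape[-1]).astype(float)
    d = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # Mahalanobis^2
    return scores.reshape(hsi.shape[:2])
```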

Hybrid 2-D–3-D Deep Residual Attentional Network With Structure Tensor Constraints for Spectral Super-Resolution of RGB Images

Mon, 03/01/2021 - 00:00
RGB image spectral super-resolution (SSR), which aims at recovering a hyperspectral image (HSI) from a corresponding RGB image, is a challenging task due to its serious ill-posedness. In this article, we propose a novel hybrid 2-D–3-D deep residual attentional network (HDRAN) with structure tensor constraints, which can take full advantage of the spatial–spectral context information in the reconstruction process. Previous works improve SSR performance only by stacking more layers to capture local spatial correlation, neglecting the differences and interdependences among features, especially band features; in contrast, our method focuses on the utilization of context information. First, the proposed HDRAN consists of a 2D-RAN followed by a 3D-RAN, where the 2D-RAN mainly focuses on extracting abundant spatial features, whereas the 3D-RAN mainly models the interband correlations. Then, we introduce 2-D channel attention and 3-D band attention mechanisms into the 2D-RAN and 3D-RAN, respectively, to adaptively recalibrate channelwise and bandwise feature responses for enhancing context features. In addition, since the structure tensor represents structure and spatial information, we apply a structure tensor constraint to reconstruct more accurate high-frequency details during the training process. Experimental results demonstrate that our proposed method achieves state-of-the-art performance in terms of mean relative absolute error (MRAE) and root mean square error (RMSE) on both the “clean” and “real world” tracks of the NTIRE 2018 Spectral Reconstruction Challenge. On the competitive ranking metric MRAE, our method achieves 16.06% and 2.90% relative reductions on the two tracks over the first-place entry. Furthermore, we investigate HDRAN on two other HSI benchmarks, the CAVE and Harvard data sets, also demonstrating better results than state-of-the-art methods.
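
A generic squeeze-and-excitation-style channel attention block conveys the recalibration idea; the bottleneck sizes and exact design are assumptions, not the HDRAN architecture:

```python
import numpy as np

def channel_attention(feats, W1, W2):
    """Squeeze-and-excitation-style gating: global average pooling over
    space, a two-layer bottleneck (ReLU then sigmoid), and channelwise
    rescaling. feats is (C, H, W); W1 is (r, C) and W2 is (C, r)."""
    squeeze = feats.mean(axis=(1, 2))                  # (C,) global pool
    hidden = np.maximum(W1 @ squeeze, 0.0)             # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))        # sigmoid gate, (C,)
    return feats * gate[:, None, None]                 # recalibrate channels
```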

Autonomous Endmember Detection via an Abundance Anomaly Guided Saliency Prior for Hyperspectral Imagery

Mon, 03/01/2021 - 00:00
Determining the optimal number of endmember sources, which is also called “virtual dimensionality” (VD), is a priority for hyperspectral unmixing (HU). Although the VD estimation directly affects the HU results, it is usually solved independently of the HU process. In this article, a saliency-based autonomous endmember detection (SAED) algorithm is proposed to jointly estimate the VD in the process of endmember extraction (EE). In SAED, we first demonstrate that the abundance anomaly (AA) value is an important feature of undetected endmembers since pure pixels have larger AA values than “distractors” (i.e., mixed pixels and pure pixels of detected endmembers). Then, motivated by the fact that endmembers usually gather in certain local regions (superpixels) in the scene, due to spatial correlation, a superpixel prior is introduced in SAED to distinguish endmembers from noise. Specifically, the undetected endmembers are defined as visual stimuli in the AA subspace, the EE is formulated as a salient region detection problem, and the VD is automatically determined when there are no salient objects in the AA subspace. Since the spatial-contextual information of the endmembers is exploited during the saliency analysis, the proposed method is more robust than the spectral-only methods, which was verified using both real and synthetic hyperspectral images.
