Synthetic aperture radar (SAR) image classification is a fundamental research direction in image interpretation. With the development of various intelligent technologies, deep learning techniques are gradually being applied to SAR image classification. In this study, a new SAR classification algorithm known as the multiscale convolutional neural network with autoencoder regularization joint contextual attention network (MCAR-CAN) is proposed. The MCAR-CAN has two branches: the autoencoder regularization branch and the context attention branch. First, autoencoder regularization is used to reconstruct the input and thereby regularize the classification in the autoencoder regularization branch. The multiscale input and the asymmetric structure of the autoencoder branch make the network focus more on classification than on reconstruction. Second, in the attention branch, the attention mechanism is used to produce an attention map in which each attention weight corresponds to a context correlation, and robust features are obtained through this mechanism. Finally, the features obtained by the two branches are concatenated for classification. In addition, a new training strategy and a postprocessing method are designed to further improve the classification accuracy. Experiments performed on data from three SAR images demonstrate the effectiveness and robustness of the proposed algorithm.
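The splicing of the two branches can be illustrated with a minimal numpy sketch of generic dot-product attention followed by feature concatenation; the shapes, the scoring function, and the branch features here are placeholder assumptions for illustration, not the MCAR-CAN architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Features from the autoencoder-regularization branch: (batch, d1).
f_ae = rng.standard_normal((4, 8))

# Context attention branch: each sample attends over T context vectors.
context = rng.standard_normal((4, 5, 8))          # (batch, T, d2)
query = rng.standard_normal((4, 8))               # (batch, d2)

# Attention map: one weight per context position (dot-product scores).
scores = np.einsum('btd,bd->bt', context, query)  # (batch, T)
attn = softmax(scores, axis=1)                    # each row sums to 1

# Attention-weighted context features, then splice the two branches.
f_ctx = np.einsum('bt,btd->bd', attn, context)    # (batch, d2)
features = np.concatenate([f_ae, f_ctx], axis=1)  # (batch, d1 + d2)
```

The concatenated `features` would then feed a classification head.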
Geosynchronous spaceborne–airborne bistatic synthetic aperture radar (GEO-SA-BiSAR) can achieve high-resolution Earth observation with superior system flexibility and efficiency, which offers huge potential for advanced SAR applications. In this article, the echo characteristics of GEO-SA-BiSAR are analyzed in detail, including the range history, the Doppler parameters, and the spatial variance. The distinct features of the GEO-SAR transmitter and the airborne receiver cause the traditional bistatic SAR range model and imaging methods to fail. To deal with these problems and achieve high-precision focusing of GEO-SA-BiSAR data, this article first proposes a novel range model based on one-stationary equivalence (RMOSE) to accommodate the distinctiveness of the GEO-SA-BiSAR echo, which changes with the orbit position of the GEO transmitter. Then, a 2-D frequency-domain imaging algorithm is put forward based on RMOSE, which solves the problem of the 2-D spatial variance of GEO-SA-BiSAR. Finally, simulations are presented to demonstrate the effectiveness of the proposed range model and algorithm.
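The range history that such a model must fit is the sum of the transmitter-target and receiver-target distances over slow time. A toy numpy sketch, with a quasi-stationary point standing in for the GEO transmitter and a straight airborne track; all coordinates and velocities are invented placeholders, not a real ephemeris or the RMOSE model:

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 201)                   # slow time, s
target = np.array([0.0, 0.0, 0.0])                # ground point, m

# Quasi-stationary GEO transmitter: huge standoff, tiny slow-time motion.
tx = np.array([1.0e7, 2.0e6, 3.58e7]) + np.outer(t, [3.0, 0.0, 0.0])

# Airborne receiver on a straight track at 200 m/s, 8 km altitude.
rx = np.array([-5.0e3, 0.0, 8.0e3]) + np.outer(t, [0.0, 200.0, 0.0])

# Bistatic range history: sum of the two slant ranges at each instant.
r_bi = (np.linalg.norm(tx - target, axis=1)
        + np.linalg.norm(rx - target, axis=1))    # meters
```

In this regime the transmitter term is nearly constant, which is the intuition behind a one-stationary equivalence.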
Speckle noise inherent in synthetic aperture radar (SAR) images seriously affects the visual effect and brings great difficulties to the postprocessing of the SAR image. Due to their edge-preserving property, total variation (TV) regularization-based techniques have been extensively utilized to reduce speckle. However, strong scatterers in a SAR image, with radiometry several orders of magnitude larger than their surrounding regions, limit the effectiveness of TV regularization. Meanwhile, the $\ell_1$-norm first-order TV regularization sometimes causes staircase artifacts as it favors solutions that are piecewise constant, and it usually underestimates high-amplitude components of the image gradient as the $\ell_1$-norm uniformly penalizes the amplitude. To overcome these shortcomings, a new hybrid variation model, called Fisher–Tippett (FT) distribution $\ell_p$-norm first- and second-order hybrid TVs (HTpVs), is proposed to reduce the speckle after removing the strong scatterers. In particular, the FT-HTpV inherits the advantages of the distribution-based data fidelity term, the nonconvex regularization, and the higher order TV regularization. Therefore, it can effectively remove the speckle while preserving point scatterers and edges and reducing staircase artifacts. To efficiently solve the nonconvex minimization problem, an iterative framework with a nonmonotone-accelerated proximal gradient (nmAPG) method and a matrix-vector acceleration strategy is used. Extensive experiments on both simulated and real SAR images demonstrate the effectiveness of the proposed method.
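A minimal numpy sketch of an $\ell_p$-norm first- plus second-order hybrid TV penalty; the values of `p` and the mixing weight `alpha`, and the finite-difference scheme, are assumptions for illustration, and the FT data-fidelity term and the nmAPG solver are not modeled:

```python
import numpy as np

def hybrid_tv(u, p=0.8, alpha=0.5):
    """l_p-norm hybrid of first- and second-order TV of a 2-D image.

    Illustrative only: p, alpha, and the difference scheme are
    assumptions, not the paper's exact FT-HTpV formulation.
    """
    # First-order forward differences.
    dx = np.diff(u, axis=1)
    dy = np.diff(u, axis=0)
    tv1 = np.sum(np.abs(dx) ** p) + np.sum(np.abs(dy) ** p)
    # Second-order differences (discrete second derivatives).
    dxx = np.diff(u, n=2, axis=1)
    dyy = np.diff(u, n=2, axis=0)
    tv2 = np.sum(np.abs(dxx) ** p) + np.sum(np.abs(dyy) ** p)
    return alpha * tv1 + (1 - alpha) * tv2

ramp = np.tile(np.arange(8.0), (8, 1))   # piecewise-linear image
flat = np.ones((8, 8))                   # constant image

# A linear ramp has zero second-order variation, so weighting the
# second-order term reduces the staircase bias of first-order TV.
pen_first = hybrid_tv(ramp, alpha=1.0)   # > 0: first-order TV penalizes ramps
pen_second = hybrid_tv(ramp, alpha=0.0)  # 0: second-order TV does not
```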
Although numerous methods based on sequence image classification have improved the accuracy of synthetic-aperture radar (SAR) automatic target recognition, most of them only concentrate on the fusion of spatial features of multiple images and fail to fully utilize the temporal-varying features. In order to exploit the spatial and temporal features contained in the SAR image sequence simultaneously, this article proposes a sequence SAR target classification method based on the spatial–temporal ensemble convolutional network (STEC-Net). In the STEC-Net, the dilated 3-D convolution is first applied to extract the spatial–temporal features. Then, the features are gradually integrated hierarchically from local to global and represented as the united tensors. Finally, a compact connection is applied to obtain a lightweight classification network. Compared with the available methods, the STEC-Net achieves a higher accuracy (99.93%) in the moving and stationary target acquisition and recognition (MSTAR) data set and exhibits robustness to depression angle, configuration, and version variants.
Phase decorrelation, as one of the main error sources, limits the capability of interferometric synthetic aperture radar (InSAR) for deformation mapping over areas with low coherence. Although several methods have been developed to reduce decorrelation noise, for example, phase linking and spatial and temporal filters, their performance deteriorates when coherence estimation bias exists. We present an arc-based approach that reconstructs unwrapped interval phase time series based on iterative weighted least squares (WLS) in the temporal and spatial domains. The main features of the method are that phase optimization and unwrapping can be jointly conducted by spatial and temporal iterative WLS and that coherence matrix bias has a negligible effect on the estimation. In addition, the linear formulation makes the implementation suitable for a small subset of interferograms, providing an efficient solution for future big SAR data. We demonstrate the effectiveness of the proposed method using simulated and real data with different decorrelation mechanisms and compare our approach with state-of-the-art phase reconstruction methods. Substantial improvement is achieved in terms of reduced root-mean-square error (RMSE) on the simulated data and increased density of coherent measurements on the real data.
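One WLS solve of the interval-phase reconstruction can be sketched in numpy on a toy interferogram network; the pair list, weights, and noise level below are invented, and the method's iterative reweighting and joint unwrapping are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Design matrix mapping interval phases to interferogram phases: the
# phase of interferogram (i, j) is the sum of the intervals between
# acquisitions i and j. Toy stack: 4 acquisitions -> 3 intervals.
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
A = np.zeros((len(pairs), 3))
for row, (i, j) in enumerate(pairs):
    A[row, i:j] = 1.0

x_true = np.array([0.3, -0.2, 0.5])          # true interval phases, rad
w = np.array([1.0, 1.0, 1.0, 0.5, 0.25])     # coherence-based weights
b = A @ x_true + 0.01 * rng.standard_normal(len(pairs))

# One weighted least-squares solve: x = (A^T W A)^{-1} A^T W b.
W = np.diag(w)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```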
We address the problem of high-resolution radar imaging in low signal-to-noise ratio (SNR) environments in an approximate Bayesian inference framework. First, the probabilistic graphical model is constructed by imposing the sparsity-promoting spike-and-slab prior on the distribution of scattering centers. Then, the model parameters and phase errors are estimated iteratively by expectation propagation (EP) and maximum likelihood (ML) estimation. Compared with the available imaging methods based on numerical optimization and Bayesian inference, the proposed method exhibits more flexibility in data representation and better performance in parameter estimation, particularly in sparse-aperture and low-SNR scenarios.
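The spike-and-slab prior itself is easy to illustrate: each coefficient is exactly zero (the spike) with high probability, and Gaussian (the slab) otherwise. A numpy sketch with illustrative parameter values, not the estimates produced by the EP/ML procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_spike_and_slab(n, pi=0.1, sigma=1.0, rng=rng):
    """Draw n coefficients from a spike-and-slab prior.

    With probability pi a coefficient is 'slab' (Gaussian with std
    sigma); otherwise it is exactly zero ('spike'). pi and sigma are
    illustrative values only.
    """
    active = rng.random(n) < pi
    values = np.where(active, rng.normal(0.0, sigma, n), 0.0)
    return values, active

x, active = sample_spike_and_slab(10000, pi=0.1)
sparsity = np.mean(x == 0.0)   # close to 1 - pi
```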
This article investigates the presence of a new interferometric signal in multilooked synthetic aperture radar (SAR) interferograms that cannot be attributed to the atmospheric or Earth-surface topography changes. The observed signal is short-lived and decays with the temporal baseline; however, it is distinct from the stochastic noise attributed to temporal decorrelation. The presence of such a fading signal introduces a systematic phase component, particularly in short temporal baseline interferograms. If unattended, it biases the estimation of Earth surface deformation from SAR time series. Here, the contribution of the mentioned phase component is quantitatively assessed. The biasing impact on the deformation-signal retrieval is further evaluated. A quality measure is introduced to allow the prediction of the associated error with the fading signals. Moreover, a practical solution for the mitigation of this physical signal is discussed; special attention is paid to the efficient processing of Big Data from modern SAR missions such as Sentinel-1 and NISAR. Adopting the proposed solution, the deformation bias is shown to decrease significantly. Based on these analyses, we put forward our recommendations for efficient and accurate deformation-signal retrieval from large stacks of multilooked interferograms.
The performance of synthetic aperture radar is vulnerable to radio frequency interference (RFI). In many situations, the RFI has a low-rank property, since the frequency bands occupied by RFI usually remain stable during a short slow-time period. Therefore, low-rank representation (LRR)-based methods can be applied to separate the RFI and the signal of interest (SOI) by minimizing the rank of the RFI components with a regularization constraint to protect the SOI. However, traditional methods use the sparsity of the raw data or range profile to formulate the regularization term, which fails to describe the properties of the SOI accurately. In addition to the sparsity of range profiles, this article explores the common patterns hidden in the range profiles and proposes two new LRR-based RFI suppression optimization models with a well-designed regularization term describing such common sparsity to protect the SOI. Four methods are proposed to solve the optimization problems based on the alternating direction multiplier (ADM) method, which provide a tradeoff between efficiency and accuracy. Compared with traditional LRR-based RFI suppression methods, the proposed methods describe the features of the SOI more precisely and can therefore better protect the information of the SOI during RFI suppression and improve the imaging quality. The superior performance of the proposed methods is validated on measured data in both sparse and nonsparse scenes.
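The low-rank/sparse separation at the core of LRR-type RFI suppression can be sketched with the two standard proximal operators: singular-value thresholding for the low-rank (RFI) part and soft thresholding for the sparse (SOI) part. The alternating loop below is a generic illustration, not the paper's ADM iterations or its common-sparsity regularizer:

```python
import numpy as np

rng = np.random.default_rng(3)

def svt(M, tau):
    """Singular-value thresholding: the proximal operator of the
    nuclear norm, the standard low-rank update in such splittings."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, lam):
    """Elementwise soft threshold: the proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

# Toy data: rank-1 'RFI' plus a sparse 'SOI' component.
rfi = np.outer(rng.standard_normal(20), rng.standard_normal(30))
soi = np.zeros((20, 30))
soi[rng.integers(0, 20, 10), rng.integers(0, 30, 10)] = 5.0
D = rfi + soi

# A few alternating proximal steps (block coordinate descent).
L = np.zeros_like(D)
S = np.zeros_like(D)
for _ in range(25):
    L = svt(D - S, tau=1.0)    # low-rank (RFI) estimate
    S = soft(D - L, lam=0.5)   # sparse (SOI) estimate
```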
The azimuth phase coding (APC) technique is known for its very low implementation complexity and its effectiveness against point and distributed ambiguities in conventional synthetic aperture radar (SAR) systems. In recent years, as an extension, the APC technique has been briefly discussed for multichannel SAR systems. However, the properties of the APC technique are no longer guaranteed in multichannel SAR systems based on digital beamforming (DBF) on receive, and only a slight APC gain in the suppression of the range ambiguity can be obtained. In this article, we first provide a more thorough analysis of an APC-multichannel SAR system with a uniform pulse-repetition frequency (PRF). Then, the APC/multichannel SAR system with nonuniform spatial sampling is briefly discussed, and an improved reconstruction approach based on a quadratically constrained optimization model is proposed to greatly increase the APC gain with respect to existing multichannel reconstruction algorithms. This approach allows the minimization of the range ambiguity under a given azimuth-ambiguity constraint. In particular, for some specific PRFs, the proposed method permits cancellation of the odd-order range ambiguity. Finally, simulation experiments are performed to verify the advantages and effectiveness of the proposed approach.
Recently, deep-learning methods have been successfully applied to ship detection in synthetic aperture radar (SAR) images. It is still a great challenge to detect multiscale SAR ships due to the broad diversity of scales and the strong interference of the inshore background. Most prevalent approaches are based on the anchor mechanism, which uses predefined anchors to search the possible regions containing objects. However, the anchor settings have a great impact on detection performance as well as generalization ability. Furthermore, considering the sparsity of ships, most anchors are redundant and lead to increased computation. In this article, a novel detection method named the feature balancing and refinement network (FBR-Net) is proposed. First, our method eliminates the effect of anchors by adopting a general anchor-free strategy that directly learns the encoded bounding boxes. Second, we leverage the proposed attention-guided balanced pyramid to semantically balance the multiple features across different levels, which helps the detector learn more information about small-scale ships in complex scenes. Third, considering the SAR imaging mechanism, interference near the ship boundary with similar scattering power probably affects the localization accuracy because of feature misalignment. To tackle the localization issue, a feature-refinement module is proposed to refine the object features and guide the semantic enhancement. Finally, extensive experiments are conducted to show the effectiveness of our FBR-Net compared with the general anchor-free baseline. The detection results on the SAR ship detection dataset (SSDD) and the AIR-SARShip-1.0 dataset illustrate that our method achieves state-of-the-art performance.
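The "directly learns the encoded bounding boxes" step can be illustrated with a common anchor-free encoding, the FCOS-style left/top/right/bottom distances from a feature-map location to the box sides; this is a plausible stand-in for illustration, since the abstract does not spell out FBR-Net's exact encoding:

```python
import numpy as np

def encode_ltrb(point, box):
    """Anchor-free box encoding: distances from a location to the
    box's left/top/right/bottom sides (all positive iff the point
    lies inside the box)."""
    x, y = point
    x1, y1, x2, y2 = box
    return np.array([x - x1, y - y1, x2 - x, y2 - y])

def decode_ltrb(point, ltrb):
    """Invert the encoding back to (x1, y1, x2, y2)."""
    x, y = point
    l, t, r, b = ltrb
    return np.array([x - l, y - t, x + r, y + b])

point = (50.0, 40.0)
box = np.array([30.0, 20.0, 90.0, 70.0])
ltrb = encode_ltrb(point, box)
recovered = decode_ltrb(point, ltrb)
```

A network regresses `ltrb` per location, so no anchor shapes or sizes need to be predefined.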
Calibration of satellite-borne radiometers is a key issue for quantitative remote sensing. Its accuracy depends on the stability of the calibration source. Because it has no atmosphere or biological activity, the Moon's surface remains stable in the long term and may be a good candidate for thermal calibration. The microwave humidity sounder (MHS) onboard NOAA-18 made measurements of the disk-integrated brightness temperature (TB) of the Moon for phase angles between −80° and 50°. The NOAA-18 measurements have been studied to validate the TB model of the lunar surface. In this article, the near side of the Moon's surface is divided into 900 subregions with a span of 6° × 6° in longitude and latitude. By solving the 1-D heat conduction equation with thermophysical parameters validated by the Diviner data of the Lunar Reconnaissance Orbiter (LRO), the temperature profiles of the regolith media in all 900 subregions are obtained. The loss tangents are inverted from the Chang'e-2 (CE-2) 37-GHz microwave TB data at noontime. Employing the fluctuation-dissipation theorem and the Wentzel-Kramers-Brillouin (WKB) approach, the microwave and millimeter-wave TBs of each subregion are simulated. Then, the weighted average TB can be disk-integrated from the 900 subregion TBs versus the phase angle. These simulations well demonstrate the diurnal TB variation and its dependence on the frequency channels. It is found that the disk-integrated TB of the Moon in the MHS channels is sensitive to the full-width at half-maximum (FWHM) of the deep space view (DSV), which is corrected in our simulation, where the Moon is now taken as an extended target instead of a point-like object. Simulated integrated TBs are compared with the corrected MHS TB data at 89, 157, and 183 GHz. The simulated TB is well consistent with the MHS TB data at 89 and 183 GHz at various phase angles, but the maximum TB of the MHS data at 157 GHz is unusually lower than that at 89 GHz. The influence of the loss tangent, emissivity, and the pointing error is analyzed. Observations of the Moon's TB should be designed more carefully, and the technical parameters, especially the FWHM, should be well determined. Our model and numerical simulation provide a tool for TB calibration and validation.
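The regolith temperature profiles come from a 1-D heat conduction solve; a minimal explicit finite-difference sketch follows. The grid, diffusivity, and boundary temperatures are illustrative placeholders, not the Diviner-validated thermophysical parameters:

```python
import numpy as np

# Explicit step for dT/dt = kappa * d2T/dz2 in a regolith column.
nz, dz, dt = 50, 0.02, 10.0        # 1 m column, 2 cm cells, 10 s steps
kappa = 1.0e-7                     # thermal diffusivity, m^2/s (assumed)
r = kappa * dt / dz**2             # must be <= 0.5 for stability
assert r <= 0.5

T = np.full(nz, 250.0)             # initial column temperature, K
T[0] = 380.0                       # heated surface (lunar noon, assumed)

for _ in range(5000):
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    Tn[0] = 380.0                  # fixed surface temperature
    Tn[-1] = T[-1]                 # fixed deep-regolith boundary
    T = Tn
```

By the maximum principle the solution stays between the boundary values, and heat gradually diffuses into the upper cells.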
Satellite-based passive microwave (PMW) remote sensing is an essential technique to clarify long-term and large-scale distribution patterns of cloud water content (CWC). However, most CWC estimation methods are not implemented over land because of the high heterogeneity of land radiation, and the detailed characteristics of microwave (MW) radiative transfer between land and atmosphere including clouds have not been elucidated. This study aims to elucidate these characteristics and reveal the accuracy of land emissivity representation necessary for adequate CWC estimation over land using satellite-based PMW under various CWC conditions. First, important parameters related to MW radiative transfer between land and atmosphere at CWC-relevant frequencies in the presence of clouds are determined theoretically. Next, the relationship between errors in these parameters and the brightness temperatures used for CWC estimation is clarified through considerations based on radiative transfer equations. Then, ground-based PMW observations and numerical simulation are used to reveal the actual values of these important parameters and the size of the errors. Finally, the results show that for any cloud liquid water path (LWP) value greater than 1.6 kg/m² at 89 GHz and 5 kg/m² at 36 GHz, we can reasonably neglect the heterogeneity of emissivity and radiation from the land surface for CWC estimation. However, when LWP values are below the threshold, the error in the representation of land emissivity should be kept below 0.015 for both the 89- and 36-GHz data, and the volumetric soil moisture content should have an error lower than 5%–6% for both frequencies.
Generally, ground stress accumulates in the process of earthquake (EQ) preparation. The trend of change in the microwave brightness temperature (TB) of rock with stress is an important factor for understanding the microwave anomalies associated with EQs. However, it is not yet clear whether a downtrend of rocks' TB is associated with increased stress. To confirm this, in this article, the instantaneous field of view of the microwave radiometer was identified first. Then, microwave observation experiments were conducted on granite samples at 6.6 GHz under cyclic loading and outdoor conditions with weak background radiation. It was found that, besides uptrend and fluctuation, a downtrend of the granite samples' TB also correlates with stress, with an occurrence probability of 47.62% and a maximum rate of -0.038 K/MPa. The variation trends of TB with stress are not uniform across different areas of the same sample. To reveal the cause of this phenomenon, the permittivity of single-crystal quartz, one of the main mineral constituents of granite, was measured under compression loading in the direction perpendicular or parallel to the optic axis. For quartz, the real part of the permittivity rises (falls) when the loading direction is perpendicular (parallel) to the optic axis, causing the TB to fall (rise). The optic axes of the minerals are randomly distributed in granite samples, which makes the variation in the permittivity of the minerals also random, thereby resulting in the nonuniformity of the stress-induced TB variation in granite samples. Finally, the implications of these results are discussed.
The radiance received by satellite sensors viewing the ocean is a mixed signal of the atmosphere and ocean. Accurate decomposition of the radiance components is crucial because any inclusion of the atmospheric signal in the water-leaving radiance leads to an incorrect estimation of the oceanic parameters. This is especially true over turbid coastal waters, where the estimation of the radiance components is difficult. A layer removal scheme for atmospheric correction (LRSAC) has been developed that treats the atmospheric and oceanic components as a layer structure according to the path of sunlight in the Sun-Earth-satellite system. Compared with the normal coupled atmospheric column, the layer structure of Rayleigh scattering and aerosols has a relatively small uncertainty, with a mean relative error (MRE) of 0.063%. Because the aerosol layer is placed between the Rayleigh layer and the ocean, a new Rayleigh lookup table (LUT) was regenerated using 6SV (Second Simulation of a Satellite Signal in the Solar Spectrum, Vector, version 3.2) with zero reflectance at the ground to produce the pure Rayleigh reflectance without the Rayleigh-ocean interaction. The accuracy of the LRSAC was validated against in situ water-leaving reflectance, obtaining an MRE of 6.3%, a root-mean-square error (RMSE) of 0.0028, and a mean correlation coefficient of 0.86 based on 430 matchup pairs over the East China Sea. Results show that the LRSAC can be used to decompose the reflectance at the top of each layer for atmospheric correction over turbid coastal waters.
Retrieving surface properties from airborne hyperspectral imagery requires the use of an atmospheric correction model to compensate for atmospheric scattering and absorption. In this study, a solar spectral irradiance monitor (SSIM) from the University of Colorado Boulder was flown on a Twin Otter aircraft with the National Ecological Observatory Network's (NEON) imaging spectrometer. Upwelling and downwelling irradiance observations from the SSIM were used as boundary conditions for the radiative transfer model used to atmospherically correct NEON imaging spectrometer data. Using simultaneous irradiance observations as boundary conditions removed the need to model the entire atmospheric column so that atmospheric correction required modeling only the atmosphere below the aircraft. For overcast conditions, incorporating SSIM observations into the atmospheric correction process reduced root-mean-square (rms) error in retrieved surface reflectance by up to 57% compared with a standard approach. In addition, upwelling irradiance measurements were used to produce an observation-based estimate of the adjacency effect. Under cloud-free conditions, this correction reduced the rms error of surface reflectance retrievals by up to 27% compared with retrievals that ignored adjacency effects.
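The core retrieval step can be sketched as a Lambertian reflectance inversion that uses the measured downwelling irradiance directly as the boundary condition, which is why only the atmosphere below the aircraft needs modeling. The numbers, path radiance, and transmittance below are invented placeholders, not SSIM/NEON values:

```python
import numpy as np

def surface_reflectance(L_up, E_down, L_path=0.0, t_up=1.0):
    """Lambertian surface reflectance factor from at-aircraft radiance
    and measured downwelling irradiance:

        rho = pi * (L_up - L_path) / (t_up * E_down)

    A textbook approximation: L_path and t_up describe only the layer
    between the surface and the aircraft; both are assumed values here.
    """
    return np.pi * (L_up - L_path) / (t_up * E_down)

# Example: 20 W m^-2 sr^-1 um^-1 at-sensor radiance, of which 2 is
# path radiance; 400 W m^-2 um^-1 downwelling irradiance; 95%
# upward transmittance below the aircraft (all illustrative).
rho = surface_reflectance(L_up=20.0, E_down=400.0, L_path=2.0, t_up=0.95)
```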
The high dimensionality of hyperspectral images increases computational consumption, which challenges image processing. Deep learning models have achieved extraordinary success in various image processing domains and are effective at improving classification performance. However, considerable challenges remain in fully extracting the abundant spectral information, such as the combination of spatial and spectral information. In this article, a novel unsupervised hyperspectral feature extraction architecture based on a spatial revising variational autoencoder (AE), $U_{\text{Hfe}}\text{SRVAE}$, is proposed. The core concept of this method is extracting spatial features via designed networks from multiple aspects to revise the obtained spectral features. A multilayer encoder extracts spectral features, and then latent space vectors are generated from the obtained means and standard deviations. Spatial features based on local sensing and sequential sensing are extracted using multilayer convolutional neural networks and long short-term memory networks, respectively, which revise the obtained mean vectors. Besides, the proposed loss function guarantees the consistency of the probability distributions of the various latent spatial features obtained from the same neighboring region. Several experiments are conducted on three publicly available hyperspectral data sets, and the experimental results show that $U_{\text{Hfe}}\text{SRVAE}$ achieves better classification results than the comparison methods. The combination of spatial feature extraction models and deep AE models is designed based on the unique characteristics of hyperspectral images, which contributes to the performance of this method.
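The step "latent space vectors are generated from the obtained means and standard deviations" is the standard VAE reparameterization trick, sketched below in numpy; in the article the mean vectors would additionally be revised by the CNN/LSTM spatial features, which is not modeled here:

```python
import numpy as np

rng = np.random.default_rng(4)

def reparameterize(mu, log_var, rng=rng):
    """Sample latent vectors z = mu + sigma * eps with eps ~ N(0, I).

    Standard VAE reparameterization: keeps sampling differentiable
    with respect to the encoder outputs mu and log_var.
    """
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

mu = np.zeros((2000, 3))
log_var = np.full((2000, 3), np.log(0.25))   # sigma = 0.5 everywhere
z = reparameterize(mu, log_var)
```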
Deep learning has shown its huge potential in the field of hyperspectral image (HSI) classification. However, most deep learning models heavily depend on the quantity of available training samples. In this article, we propose a multitask generative adversarial network (MTGAN) to alleviate this issue by taking advantage of the rich information from unlabeled samples. Specifically, we design a generator network to simultaneously undertake two tasks: the reconstruction task and the classification task. The former task aims at reconstructing an input hyperspectral cube, including the labeled and unlabeled ones, whereas the latter task attempts to recognize the category of the cube. Meanwhile, we construct a discriminator network to discriminate whether the input sample comes from the real distribution or the reconstructed one. Through an adversarial learning method, the generator network will produce real-like cubes, thus indirectly improving the discrimination and generalization ability of the classification task. More importantly, in order to fully explore the useful information from shallow layers, we adopt skip-layer connections in both the reconstruction and classification tasks. The proposed MTGAN model is implemented on three standard HSIs, and the experimental results show that it is able to achieve higher performance than other state-of-the-art deep learning models.
The rapid increase in the number of remote sensing sensors makes it possible to develop multisource feature extraction and fusion techniques to improve the classification accuracy of surface materials. It has been reported that light detection and ranging (LiDAR) data can contribute complementary information to hyperspectral images (HSIs). In this article, a multiple feature-based superpixel-level decision fusion (MFSuDF) method is proposed for HSI and LiDAR data classification. Specifically, superpixel-guided kernel principal component analysis (KPCA) is first designed and applied to the HSIs to both reduce the dimensionality and suppress the noise. Next, 2-D and 3-D Gabor filters are, respectively, employed on the KPCA-reduced HSIs and the LiDAR data to obtain discriminative Gabor features, with both the magnitude and phase information taken into account. Three different modules can thus be obtained: the raw data-based feature cube (concatenated KPCA-reduced HSIs and LiDAR data), the Gabor magnitude feature cube, and the Gabor phase feature cube (concatenation of the corresponding Gabor features extracted from the KPCA-reduced HSIs and LiDAR data). After that, a random forest (RF) classifier and quadrant bit coding (QBC) are introduced to separately accomplish the classification task on the three extracted feature cubes. In parallel, two superpixel maps are generated by applying the multichannel simple noniterative clustering (SNIC) and entropy rate superpixel segmentation (ERS) algorithms to the combined HSI and LiDAR data, and these maps are then used to regularize the three classification maps. Finally, a weighted majority voting-based decision fusion strategy is incorporated to effectively enhance the joint use of the multisource data. A series of experiments are conducted on three real-world data sets to demonstrate the effectiveness of the proposed MFSuDF approach. The experimental results show that MFSuDF can achieve overall accuracies of 73.64%, 93.88%, and 74.11% on the Houston, Trento, and Missouri University and University of Florida (MUUFL) Gulfport data sets, respectively, when there are only three samples per class for training.
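The KPCA step can be sketched in plain numpy (RBF kernel, double centering, eigendecomposition); the superpixel guidance and the subsequent Gabor and fusion stages are not modeled, and `gamma` and the data shapes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

def kpca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: project samples onto the top
    principal components in feature space."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    # Projections of the training points onto the leading components.
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

X = rng.standard_normal((60, 10))     # 60 'pixels' with 10 'bands'
Z = kpca(X, n_components=3, gamma=0.1)
```

The projections `Z` replace the raw spectral bands as a compact, denoised representation.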
Hyperspectral unmixing (HU) is a crucial technique for exploiting remotely sensed hyperspectral data, which aims at estimating a set of spectral signatures, called endmembers, and their corresponding proportions, called abundances. The performance of HU is often seriously degraded by the various kinds of noise in hyperspectral images (HSIs). Most existing robust HU methods are based on the assumption that noise or outliers exist in only one form, e.g., band noise or pixel noise. However, in real-world applications, HSIs are unavoidably corrupted by noisy bands and noisy pixels simultaneously, which requires robust HU in both the spatial and spectral dimensions. Meanwhile, the sparsity of abundances is an inherent property of HSIs, and different regions in an HSI may possess various sparsity levels across locations. This article proposes a correntropy-based spatial-spectral robust sparsity-regularized unmixing model to achieve 2-D robustness and an adaptively weighted sparsity constraint on the abundances simultaneously. The update rules of the proposed model are efficient to implement and are carried out via a half-quadratic technique. The experimental results on both synthetic and real hyperspectral data demonstrate the superiority of the proposed method over state-of-the-art methods.
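In a half-quadratic scheme, the correntropy objective reduces at each iteration to a weighted least-squares problem whose weights come from the current residuals. A minimal sketch of that weight computation (the bandwidth `sigma` and the residual values are illustrative):

```python
import numpy as np

def correntropy_weights(residuals, sigma=1.0):
    """Half-quadratic auxiliary weights for the correntropy (Welsch)
    loss: w_i = exp(-r_i^2 / (2 sigma^2)). Large residuals, such as
    outlier bands or pixels, receive weights near 0, which is what
    makes the subsequent weighted fit robust. sigma is a kernel
    bandwidth the user must choose."""
    return np.exp(-residuals**2 / (2.0 * sigma**2))

r = np.array([0.0, 0.5, 1.0, 5.0])     # last entry mimics an outlier
w = correntropy_weights(r, sigma=1.0)  # weights decay with |residual|
```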
Anomaly detection in hyperspectral imagery has been an active topic among remote sensing applications. It aims at identifying anomalous targets whose spectra differ from their surrounding background. Therefore, an effective detector should be able to distinguish the anomalies, especially the weak ones, from the background and noise. In this article, we propose a novel method for hyperspectral anomaly detection based on a total variation (TV) and sparsity regularized decomposition model. This model decomposes the hyperspectral imagery into three components: background, anomaly, and noise. To effectively distinguish these components, a union dictionary consisting of both background and potential anomalous atoms is utilized to represent the background and anomalies, respectively. Moreover, TV and sparsity-inducing regularizations are incorporated to facilitate the separation. Besides, we present a new strategy for constructing the union dictionary with density peak-based clustering. The proposed detector is evaluated on both simulated and real hyperspectral data sets, and the experimental results demonstrate its superiority over several traditional and state-of-the-art anomaly detectors.