
Detection of Subpixel Targets on Hyperspectral Remote Sensing Imagery Based on Background Endmember Extraction

The low spatial resolution of imaging spectrometers makes subpixel target detection a particularly challenging problem in hyperspectral image (HSI) processing. In subpixel target detection, the target is smaller than a pixel, rendering its spatial information almost useless, so a detection algorithm must rely on the spectral information of the image. To address this problem, this article proposes a subpixel target detection algorithm for hyperspectral remote sensing imagery based on background endmember extraction. First, we propose a background endmember extraction algorithm based on robust nonnegative dictionary learning to obtain the background endmember spectra of the image. Next, we construct a hyperspectral subpixel target detector based on pixel reconstruction (HSPRD) to perform pixel-by-pixel target detection on the test image using the background endmember spectral matrix and the spectra of known ground targets. Finally, the subpixel target detection results are obtained. The experimental results show that, compared with other existing subpixel target detection methods, the proposed algorithm provides the best target detection results on both synthetic and real-world data sets.
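The abstract does not give the detector's closed form; purely as an illustration of the pixel-reconstruction idea it describes, the following sketch (with hypothetical names) scores a pixel by how much adding the known target spectrum to the background endmember dictionary reduces the reconstruction error:

```python
import numpy as np

def reconstruction_detector(pixel, B, t):
    """Illustrative reconstruction-error score for subpixel detection.
    pixel: (L,) observed spectrum; B: (L, m) background endmembers;
    t: (L,) known target spectrum. A large score suggests the target
    contributes to the pixel; near zero suggests pure background."""
    # least-squares fit using background endmembers only
    a_b = np.linalg.lstsq(B, pixel, rcond=None)[0]
    # least-squares fit using background endmembers plus the target
    Bt = np.column_stack([B, t])
    a_bt = np.linalg.lstsq(Bt, pixel, rcond=None)[0]
    e_b = np.sum((pixel - B @ a_b) ** 2)
    e_bt = np.sum((pixel - Bt @ a_bt) ** 2)
    return e_b - e_bt  # drop in reconstruction error
```

Thresholding this score pixel by pixel yields a detection map; the published HSPRD detector may differ in its reconstruction constraints and normalization.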

Progressive Compressively Sensed Band Processing for Hyperspectral Classification

Compressive sensing (CS) has recently been demonstrated as an enabling technology for hyperspectral sensing on remote and autonomous platforms. The power, on-board storage, and computation requirements associated with the high dimensionality of hyperspectral images (HSIs) are still limiting factors for many applications. A recent work exploited the benefits of CS to perform HSI classification directly in the compressively sensed band domain (CSBD). Since the number of compressively sensed bands (CSBs) needed to achieve full-band performance varies with the complexity of an image scene, this article presents a progressive band processing (PBP) approach, called progressive CSB classification (PCSBC), to adaptively determine an appropriate number of CSBs required to achieve full-band performance, while also providing immediate feedback from the progression of classification predictions produced by PCSBC. By taking advantage of PBP, new progression metrics and stopping criteria are also designed for PCSBC. Four real-world HSIs are used to demonstrate the utility of PCSBC.

Spectral–Spatial Joint Sparse NMF for Hyperspectral Unmixing

Nonnegative matrix factorization (NMF) combined with spatial–spectral contextual information is an important technique for extracting the endmembers and abundances of hyperspectral images (HSIs). Most methods constrain unmixing through the local spatial relationships of pixels, or search for spectral correlation globally by treating each pixel as an independent point in the HSI. Unfortunately, they ignore the complex distribution of substances and the rich contextual information, which makes them effective only in limited cases. In this article, we propose a novel unmixing method that uses two types of self-similarity to constrain sparse NMF. First, we explore the spatially similar patch structure of the data over the whole image to construct spatial global self-similarity groups of pixels. Then, according to the regional continuity of the feature distribution, spectral local self-similarity groups of pixels are created inside each superpixel. Next, based on the sparse representation of pixels in a subspace, we sparsely encode the pixels in the same spatial group and spectral group, respectively. Finally, the abundances of the pixels within each group are forced to be similar to constrain the NMF unmixing framework. Experiments on synthetic and real data fully demonstrate the superiority of our method over other existing methods.
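The group-sparsity constraints are specific to this paper, but the NMF core they regularize is standard. A minimal sketch of unconstrained multiplicative-update NMF for unmixing (Lee-Seung updates; the self-similarity group terms from the article are omitted):

```python
import numpy as np

def nmf_unmix(X, p, iters=200, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF as an unmixing core.
    X: (L, N) nonnegative pixel matrix (L bands, N pixels).
    Returns endmembers E (L, p) and abundances A (p, N) with X ~= E @ A."""
    rng = np.random.default_rng(seed)
    L, N = X.shape
    E = rng.random((L, p)) + eps
    A = rng.random((p, N)) + eps
    for _ in range(iters):
        A *= (E.T @ X) / (E.T @ E @ A + eps)   # abundance update
        E *= (X @ A.T) / (E @ A @ A.T + eps)   # endmember update
    return E, A
```

The updates keep both factors nonnegative by construction, which is why NMF is a natural fit for the linear mixing model with nonnegative abundances.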

Orthogonal Subspace Projection-Based Go-Decomposition Approach to Finding Low-Rank and Sparsity Matrices for Hyperspectral Anomaly Detection

Low-rank and sparsity-matrix decomposition (LRaSMD) has received considerable interest lately. One effective method for LRaSMD is go decomposition (GoDec), which finds low-rank and sparse matrices iteratively subject to a predetermined low-rank matrix order $m$ and sparsity cardinality $k$. This article presents an orthogonal subspace-projection (OSP) version of GoDec, called OSP-GoDec, which implements GoDec as an iterative process in which a sequence of OSPs finds the desired low-rank and sparse matrices. In order to resolve the issues of empirically determining $p = m + j$ and $k$, the well-known virtual dimensionality (VD) is used to estimate $p$, in conjunction with the minimax singular value decomposition (MX-SVD) developed by Kuybeda et al. for the maximum orthogonal complement algorithm (MOCA) to estimate $k$. Consequently, LRaSMD can be realized by implementing OSP-GoDec with $p$ and $k$ determined by VD and MX-SVD, respectively. Its application to anomaly detection demonstrates that the proposed OSP-GoDec coupled with VD and MX-SVD performs very effectively and better than the commonly used LRaSMD-based anomaly detectors.

Hyperspectral Image Classification Based on 3-D Octave Convolution With Spatial–Spectral Attention Network

In recent years, with the development of deep learning (DL), hyperspectral image (HSI) classification methods based on DL have shown superior performance. Although these DL-based methods have achieved great success, there is still room to improve their ability to explore spatial–spectral information. In this article, we propose a 3-D octave convolution with spatial–spectral attention network (3DOC-SSAN) to capture discriminative spatial–spectral features for the classification of HSIs. Specifically, we first extend the octave convolution model to 3-D convolution, yielding a 3-D octave convolution model (3D-OCM), in which four 3-D octave convolution blocks are combined to capture spatial–spectral features from HSIs. Not only can the spatial information be mined deeply from the high- and low-frequency aspects, but the spectral information is also taken into account by our 3D-OCM. Second, we introduce two attention models along the spatial and spectral dimensions to highlight the important spatial areas and the specific spectral bands that contain significant information for the classification task. Finally, in order to integrate spatial and spectral information, we design an information complement model to transmit important information between the spatial and spectral attention features, so that the parts of each that benefit the classification task can be fully utilized. Compared with several popular existing classifiers, our proposed method achieves competitive performance on four benchmark data sets.

Geometry-Aware Deep Recurrent Neural Networks for Hyperspectral Image Classification

Variants of deep networks have been widely used for hyperspectral image (HSI) classification tasks. Among them, recurrent neural networks (RNNs) have attracted considerable attention in the remote sensing community in recent years. However, complex geometries cannot be learned easily by traditional recurrent units [e.g., the long short-term memory (LSTM) and the gated recurrent unit (GRU)]. In this article, we propose a geometry-aware deep recurrent neural network (Geo-DRNN) for HSI classification. We build this network upon two modules: a U-shaped network (U-Net) and RNNs. We first input the original HSI patches to the U-Net, which can be trained with very few images, to obtain a preliminary classification result. We then add RNNs on top of the U-Net so as to mimic the human brain by continuously refining the output classification map. However, instead of using the traditional dot product in each gate of the RNNs, we introduce a Net-Gated GRU that increases the nonlinear representation power. Finally, we use a pretrained ResNet as a regularizer to further improve the ability of the proposed network to describe complex geometries. To this end, we construct a geometry-aware ResNet loss, which leverages the pretrained ResNet’s knowledge about the different structures in the real world. Our experimental results on real HSIs and road topology images demonstrate that our approach outperforms state-of-the-art classification methods and can learn complex geometries.

Adaptive Spectral–Spatial Multiscale Contextual Feature Extraction for Hyperspectral Image Classification

In this article, we propose an end-to-end adaptive spectral–spatial multiscale network to extract multiscale contextual information for hyperspectral image (HSI) classification, which contains spectral feature extraction (FE) and spatial FE subnetworks. For spectral FE, unlike previous methods in which features are obtained at a single scale, which limits the accuracy improvement, we propose two schemes based on a band grouping strategy, and the long short-term memory (LSTM) model is used to perceive spectral multiscale information. In the spatial subnetwork, building on an existing multiscale architecture, the spatial contextual features that are usually ignored in the previous literature are successfully obtained with the aid of the convolutional LSTM (ConvLSTM) model. Besides, a new spatial grouping strategy is proposed to make it easier for the ConvLSTM to extract more discriminative features. Then, a novel adaptive feature combination scheme is proposed that accounts for the different importance of the spectral and spatial parts. Experiments on three public data sets from the HSI community demonstrate that our methods achieve competitive results compared with other state-of-the-art methods.

Block-Gaussian-Mixture Priors for Hyperspectral Denoising and Inpainting

This article proposes a denoiser for hyperspectral (HS) images that considers not only spatial features but also spectral features. The method starts by projecting the noisy (observed) HS data onto a lower dimensional subspace and then learns a Gaussian mixture model (GMM) from 3-D patches, or blocks, extracted from the projected data cube. Afterward, the minimum mean squared error (MMSE) estimates of the blocks are obtained in closed form and returned to their original positions. Experiments show that the proposed algorithm is able to outperform other state-of-the-art methods under Gaussian and Poissonian noise and to reconstruct high-quality images in the presence of stripes.
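The closed-form MMSE step has a standard form: the estimate is a posterior-weighted combination of per-component Wiener filters. A sketch for a single block vector, assuming the GMM and the noise variance are already known (the subspace projection and GMM learning steps of the article are omitted):

```python
import numpy as np

def gmm_mmse(y, pis, mus, covs, sigma2):
    """Closed-form MMSE estimate of one noisy block y = x + n, with
    n ~ N(0, sigma2*I) and x drawn from a known GMM (weights pis,
    means mus, covariances covs)."""
    d = y.size
    log_post, est = [], []
    for pi_k, mu, C in zip(pis, mus, covs):
        Cy = C + sigma2 * np.eye(d)        # covariance of the noisy data
        r = y - mu
        _, logdet = np.linalg.slogdet(Cy)
        # log posterior weight of component k (up to a constant)
        log_post.append(np.log(pi_k) - 0.5 * (
            r @ np.linalg.solve(Cy, r) + logdet + d * np.log(2 * np.pi)))
        # per-component Wiener estimate
        est.append(mu + C @ np.linalg.solve(Cy, r))
    w = np.exp(np.array(log_post) - max(log_post))
    w /= w.sum()
    return sum(wk * ek for wk, ek in zip(w, est))
```

With a single zero-mean component and covariance $cI$, this reduces to the familiar shrinkage $\hat{x} = \frac{c}{c+\sigma^2}\,y$.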

Coupled Convolutional Neural Network With Adaptive Response Function Learning for Unsupervised Hyperspectral Super Resolution

Due to the limitations of hyperspectral imaging systems, hyperspectral imagery (HSI) often suffers from poor spatial resolution, which hampers many of its applications. Hyperspectral super resolution refers to fusing an HSI with a multispectral image (MSI) to generate an image with both high spatial and high spectral resolution. Recently, several new methods have been proposed to solve this fusion problem, and most of them assume that prior information about the point spread function (PSF) and the spectral response function (SRF) is known. However, in practice, this information is often limited or unavailable. In this work, an unsupervised deep learning-based fusion method, HyCoNet, that can solve the HSI–MSI fusion problem without prior PSF and SRF information is proposed. HyCoNet consists of three coupled autoencoder nets in which the HSI and MSI are unmixed into endmembers and abundances based on the linear unmixing model. Two special convolutional layers are designed to act as a bridge coordinating the three autoencoder nets, and the PSF and SRF parameters are learned adaptively in these two convolutional layers during the training process. Furthermore, driven by the joint loss function, the proposed method is straightforward and easily implemented in an end-to-end training manner. The experiments performed in the study demonstrate that the proposed method performs well and produces robust results for different data sets and arbitrary PSFs and SRFs.

A Novel Radiometric Control Set Sample Selection Strategy for Relative Radiometric Normalization of Multitemporal Satellite Images

This article presents a new relative radiometric normalization (RRN) method for multitemporal satellite images based on the automatic selection and multistep optimization of radiometric control set samples (RCSS). A novel image-fusion strategy based on the fast local Laplacian filter is employed to generate a difference index from the complementary information extracted from the change vector analysis and the absolute gradient difference of the bitemporal satellite images. The difference index is then segmented into changed and unchanged pixels using a fast level-set method. A novel local outlier method is then applied to the unchanged pixels of the bitemporal images to identify the initial RCSS, which are scored by a novel unchanged purity index, and the histogram of the scores is used to produce the final RCSS. The RRN between the bitemporal images is achieved by adjusting the subject image to the reference image using orthogonal linear regression on the final RCSS. The proposed method is applied to seven data sets comprising bitemporal images acquired by various satellites, including Landsat TM/ETM+, Sentinel-2B, WorldView-2/3, and ASTER. The experimental results show that the method outperforms state-of-the-art RRN methods, reducing the average root-mean-square error (RMSE) of the best baseline method (IR-MAD) by up to 32% over all data sets.
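The final adjustment step, orthogonal linear regression, can be written compactly: fit the principal axis of the 2-D cloud of (subject, reference) RCSS values for each band. A minimal sketch under that reading:

```python
import numpy as np

def orthogonal_regression(subject, reference):
    """Orthogonal (total least squares) line fit: returns gain a and
    offset b so that a*subject + b matches reference, minimizing
    perpendicular distances. Inputs are per-band RCSS pixel values."""
    x = np.asarray(subject, float)
    y = np.asarray(reference, float)
    mx, my = x.mean(), y.mean()
    # principal axis of the centered 2-D point cloud
    cov = np.cov(x - mx, y - my)
    _, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, -1]   # eigenvector of the largest eigenvalue
    a = vy / vx
    b = my - a * mx
    return a, b
```

Unlike ordinary least squares, this treats both images as noisy, which is the usual rationale for orthogonal regression in RRN.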

RSNet: The Search for Remote Sensing Deep Neural Networks in Recognition Tasks

Deep learning algorithms, especially convolutional neural networks (CNNs), have recently emerged as a dominant paradigm for high spatial resolution remote sensing (HRS) image recognition. A large number of CNNs have already been successfully applied to various HRS recognition tasks, such as land-cover classification and scene classification. However, they are often modifications of existing CNNs derived from natural image processing, whose network architectures are inherited without consideration of the complexity and specificity of HRS images. In this article, the remote sensing deep neural network (RSNet) framework is proposed, which uses an automatic search strategy to find an appropriate network architecture for HRS image recognition tasks. In RSNet, a hierarchical search space is first designed that includes module- and transition-level spaces. The module-level space defines the basic structure block, where a series of lightweight candidate operations, including depthwise separable convolutions, is proposed to ensure efficiency. The transition-level space controls the spatial resolution transformations of the features. In this hierarchical search space, a gradient-based search strategy is used to find an appropriate architecture. In RSNet, the task-driven architecture training process acquires the optimal model parameters of the switchable recognition module for HRS image recognition tasks. The experimental results obtained using four benchmark data sets for land-cover classification and scene classification demonstrate that the searched RSNet achieves satisfactory accuracy with high computational efficiency and, hence, provides an effective option for the processing of HRS imagery.

RSDehazeNet: Dehazing Network With Channel Refinement for Multispectral Remote Sensing Images

Multispectral remote sensing (RS) images are often contaminated by haze, which degrades the quality of RS data and reduces the accuracy of interpretation and classification. Recently, the emerging deep convolutional neural networks (CNNs) have provided new approaches to RS image dehazing. Unfortunately, the power of CNNs is limited by the lack of sufficient hazy-clean pairs of RS imagery, which makes supervised learning impractical. To meet the data hunger of supervised CNNs, we propose a novel haze synthesis method that generates realistic hazy multispectral images by modeling the wavelength-dependent and spatially varying characteristics of haze in RS images. The proposed haze synthesis method not only alleviates the lack of realistic training pairs in multispectral RS image dehazing but also provides a benchmark data set for quantitative evaluation. Furthermore, we propose an end-to-end network, RSDehazeNet, for haze removal. We utilize both local and global residual learning strategies in RSDehazeNet for fast convergence with superior performance. Channel attention modules are incorporated to exploit the strong channel correlation in multispectral RS images. Experimental results show that the proposed network outperforms state-of-the-art methods on synthetic data and real Landsat-8 OLI multispectral RS images.

A Novel Cloud Detection Algorithm Based on Simplified Radiative Transfer Model for Aerosol Retrievals: Preliminary Result on Himawari-8 Over Eastern China

Aerosol particles affect the Earth’s radiative balance and represent one of the largest uncertainties in climate research. The removal of clouds is the first and a critical step in aerosol retrievals; however, cloud detection is still challenging. Here, a novel simplified cloud detection algorithm (SCDA) based on a simplified radiative transfer model (RTM) is proposed to identify cloud and clear-sky pixels over land. Its main advantages are the small number of input bands, dynamic thresholds, and a single parameter to be tuned, which allow it to be applied to different satellite sensors. In this article, we apply the SCDA to Himawari-8 data from 2016 for a preliminary analysis. The detection results are validated against Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) vertical feature mask (VFM) data and National Centers for Environmental Information (NCEI) ground-based observation data. We also compare the results with the Himawari-8 cloud products from the Japan Aerospace Exploration Agency (JAXA). Against the CALIPSO VFM and NCEI ground-based observations, the correct rates of the SCDA cloud detection results are 86.08% and 79.86%, respectively, higher than those of the Himawari-8 cloud products (85.71% and 78.89%). The correct rates of the SCDA clear-sky detection results are 88.33% and 87.85%, close to those of the Himawari-8 clear-sky products (90.54% and 88.63%). The overall performance of the SCDA is comparable to that of the threshold method used for the JAXA Himawari-8 cloud products. Therefore, the SCDA can provide an accurate cloud mask with only one threshold to be tuned and few input parameters.

Sensitivity of Satellite Ocean Color Data to System Vicarious Calibration of the Long Near Infrared Band

Satellite ocean color missions require accurate system vicarious calibration (SVC) to retrieve the relatively small remote-sensing reflectance ($R_{\mathrm{rs}}$, sr$^{-1}$) from the at-sensor radiance. However, the current atmospheric correction and SVC procedures do not include calibration of the “long” near infrared band (NIRL; 869 nm for MODIS), partially because earlier studies, based primarily on simulations, indicate that the accuracy of the retrieved $R_{\mathrm{rs}}$ is insensitive to moderate changes in the NIRL vicarious gain ($g$). However, the sensitivity of ocean color data products to $g$(NIRL) has not been thoroughly examined. Here, we first derive 10 SVC “gain configurations” (vicarious gains for all visible and NIR bands) for MODIS/Aqua using current operational NASA protocols, each time assuming a different $g$(869). From these, we derive a suite of ~1.4E6 unique gain configurations with $g$(869) ranging from 0.85 to 1.2. All MODIS/A data for 25 locations within each of five ocean gyres were then processed using each of these gain configurations. The resultant time series show substantial variability in the dominant $R_{\mathrm{rs}}$(547) patterns in response to changes in $g$(869) (and the associated gain configurations). Overall, mean $R_{\mathrm{rs}}$(547) values generally decrease with increasing $g$(869), while the standard deviations around those means show gyre-specific minima for $0.97 < g(869) < 1.02$. Following these sensitivity analyses, we assess the potential to resolve $g$(869) using such time series, finding that $g$(869) = 1.025 most closely comports with expectations. This approach is broadly applicable to other ocean color sensors and highlights the importance of rigorous cross-sensor calibration of the NIRL bands, with implications for the consistency of merged-sensor data sets.

Automatic Extraction of Sargassum Features From Sentinel-2 MSI Images

Frequent Sargassum beaching in the Caribbean Sea and other regions has caused severe problems for local environments and economies. Although coarse-resolution satellite instruments can provide large-scale Sargassum distributions, their use is problematic in nearshore waters that are directly relevant to local communities. Finer resolution instruments, such as the multispectral instruments (MSIs) on the Sentinel-2 satellites, show potential to fill this gap, yet automatic Sargassum extraction is difficult due to compounding factors. In this article, a new approach is developed to extract Sargassum features automatically from MSI Floating Algae Index (FAI) images. Because of the high spatial resolution, limited signal-to-noise ratio (SNR), and staggered internal instrument configuration, many nonalgae bright targets (including cloud artifacts and wave-induced glints) cause enhanced near-infrared reflectance and elevated FAI values. Based on the spatial patterns of this image noise, a Trainable Nonlinear Reaction Diffusion (TNRD) denoising model is trained to estimate and remove it. The model shows excellent performance when tested over realistic noise patterns derived from MSI measurements. After removing such noise and masking clouds (as well as cloud shadows and glint patterns), the biomass density of each valid pixel is quantified using the FAI-biomass model established from earlier field measurements, from which Sargassum morphology (length/width/biomass) is derived. Overall, the proposed approach achieves over 86% Sargassum extraction accuracy and shows preliminary success on Landsat-8 images. The approach is expected to be incorporated into the existing near real-time Sargassum Watch System for both Landsat-8 and Sentinel-2 observations to monitor Sargassum in nearshore waters.
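The FAI itself (Hu, 2009) is a linear-baseline subtraction in the near infrared: NIR reflectance minus a line interpolated between the red and SWIR bands. A minimal sketch, with Sentinel-2 B4/B8A/B11 band centers as one plausible wavelength choice:

```python
def fai(r_red, r_nir, r_swir, lam=(665.0, 865.0, 1610.0)):
    """Floating Algae Index: NIR reflectance minus a linear baseline
    between the red and SWIR bands. lam holds the red, NIR, and SWIR
    band-center wavelengths in nm (Sentinel-2 B4/B8A/B11 here)."""
    lam_red, lam_nir, lam_swir = lam
    baseline = r_red + (r_swir - r_red) * (lam_nir - lam_red) / (lam_swir - lam_red)
    return r_nir - baseline
```

Floating vegetation has a strong NIR reflectance peak, so it yields positive FAI values, while most water pixels fall near or below zero.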

Deep Unsupervised Embedding for Remotely Sensed Images Based on Spatially Augmented Momentum Contrast

Convolutional neural networks (CNNs) have achieved great success in characterizing remote sensing (RS) images. However, the lack of sufficient annotated data (together with the high complexity of the RS image domain) often limits supervised and transfer learning schemes from an operational perspective. Although unsupervised methods can potentially relieve these limitations, they are frequently unable to effectively exploit relevant prior knowledge about the RS domain, which may eventually constrain their final performance. In order to address these challenges, this article presents a new unsupervised deep metric learning model, called spatially augmented momentum contrast (SauMoCo), which has been specially designed to characterize unlabeled RS scenes. Based on the first law of geography, the proposed approach defines spatial augmentation criteria to uncover semantic relationships among land cover tiles. Then, a queue of deep embeddings is constructed to enhance the semantic variety of RS tiles within the considered contrastive learning process, where an auxiliary CNN model serves as an updating mechanism. Our experimental comparison, including different state-of-the-art techniques and benchmark RS image archives, reveals that the proposed approach obtains remarkable performance gains when characterizing unlabeled scenes, since it is able to substantially enhance the discrimination ability among complex land cover categories. The source code of this article will be made available to the RS community for reproducible research.
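The momentum-contrast bookkeeping that SauMoCo builds on (MoCo) has two simple ingredients: a slowly updated copy of the query encoder and a FIFO queue of key embeddings that serves as the negative set. A sketch of just that bookkeeping, with the encoder networks and SauMoCo's spatial augmentation left out:

```python
import numpy as np
from collections import deque

class MomentumQueue:
    """Key-encoder weights updated by exponential moving average, plus
    a fixed-size FIFO queue of (unit-normalized) key embeddings."""
    def __init__(self, weights, queue_size=4096, m=0.999):
        self.key_weights = [w.copy() for w in weights]  # key encoder copy
        self.queue = deque(maxlen=queue_size)           # negatives
        self.m = m

    def momentum_update(self, query_weights):
        # theta_k <- m * theta_k + (1 - m) * theta_q, done in place
        for wk, wq in zip(self.key_weights, query_weights):
            wk *= self.m
            wk += (1.0 - self.m) * wq

    def enqueue(self, key_embeddings):
        # store unit vectors; the deque drops the oldest when full
        for k in key_embeddings:
            self.queue.append(k / np.linalg.norm(k))
```

The slow momentum update keeps the queued keys consistent with each other, which is what makes a large queue of negatives usable.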

Part-Based Modeling of Pole-Like Objects Using Divergence-Incorporated 3-D Clustering of Mobile Laser Scanning Point Clouds

A 3-D digital city vividly presents a real-world city and is widely needed in many application domains. Numerous pole-like objects (PLOs), including trees, street lamps, and traffic signs, are an indispensable part of the 3-D digital city. Mobile laser scanning (MLS) systems can capture both the geometric shape and the geospatial coordinates of the PLOs in point cloud data while moving along the roads. This article is motivated to accurately extract and efficiently model PLOs from such point cloud data. The main contributions of this article are as follows: 1) a divergence-incorporated clustering algorithm is proposed to accurately extract trunks based on the pole-like 3-D distribution of the point cloud; 2) an adaptive growing strategy that alternately extends and updates 3-D neighbors is proposed to obtain the complete canopy points of various shapes and densities; and 3) part-based modeling is proposed to synthesize the point cloud of PLOs with meaningful 3-D shapes, providing a way to model objects for the 3-D digital city vividly and efficiently. The proposed method is tested on three data sets with different interference, canopy shapes, and point densities. Experimental results demonstrate that the proposed method can extract and model the PLOs effectively and efficiently for the 3-D digital city. On the three data sets, the precision of trunk extraction is 98.45%, 98.08%, and 92.39%; the completeness of canopy extraction is 80.54%, 89.84%, and 89.29%; and the modeling time for a PLO is 0.011, 0.038, and 0.063 s, respectively.

Passive Radar Imaging of Ship Targets With GNSS Signals of Opportunity

This article explores the possibility of exploiting global navigation satellite system (GNSS) signals to obtain radar imagery of ships. This is a new application area for GNSS remote sensing, which adds to a rich line of research on the alternative utilization of navigation satellites for remote sensing purposes, currently including reflectometry, passive radar, and synthetic aperture radar (SAR) systems. In the field of short-range maritime surveillance, GNSS-based passive radar has already proven able to detect and localize ship targets of interest. The possibility of obtaining meaningful radar images of observed vessels would represent an additional benefit, opening the door to a noncooperative ship classification capability with this technology. To this purpose, a proper processing chain is conceived and developed here that achieves well-focused images of ships while maximizing their signal-to-background ratio. Moreover, the scaling factors needed to map the backscattered energy in the range and cross-range domains are analytically derived, enabling estimation of the length of the target. The effectiveness of the proposed approach in obtaining radar images of ship targets and extracting relevant features is confirmed via an experimental campaign comprising multiple Galileo satellites and a commercial ferry undergoing different kinds of motion.

A Forward Model for Data Assimilation of GNSS Ocean Reflectometry Delay-Doppler Maps

Delay-Doppler maps (DDMs) are generally the lowest level of calibrated observables produced from global navigation satellite system reflectometry (GNSS-R). A forward model is presented to relate the DDM, in units of absolute power at the receiver, to the ocean surface wind field. This model and the related Jacobian are designed for use in assimilating DDM observables into weather forecast models. Given that the forward model represents a full set of DDM measurements, direct assimilation of this lower level data product is expected to be more effective than using individual specular-point wind speed retrievals. The forward model is assessed by comparing DDMs computed from Hurricane Weather Research and Forecasting (HWRF) model winds against measured DDMs from the Cyclone Global Navigation Satellite System (CYGNSS) Level 1a data. Quality controls are proposed as a result of observed discrepancies due to the effects of swell, power calibration bias, inaccurate specular point position, and model representativeness error. DDM assimilation is demonstrated using a variational analysis method (VAM) applied to three cases from June 2017, specifically selected due to the large deviation between scatterometer winds and European Centre for Medium-Range Weather Forecasts (ECMWF) predictions. DDM assimilation reduced the root-mean-square error (RMSE) by 15%, 28%, and 48%, respectively, in the three examples.

Q-Factor Estimation by Compensation of Amplitude Spectra in Synchrosqueezed Wavelet Domain

We propose a stable $Q$-estimation approach based on the compensation of amplitude spectra in the time–frequency domain after a synchrosqueezed wavelet transform (SSWT). The SSWT applies a post-processing frequency reallocation to the original continuous wavelet transform (CWT) representation to improve its readability, and it provides a sharper time–frequency representation of a signal than other traditional time–frequency methods such as the CWT or the S-transform. For $Q$-estimation, we first transform a seismic trace into the time–frequency domain using the SSWT. Then, we derive the amplitude compensation in the SSWT domain. Searching over a predetermined range of $Q$ values, we compare each compensated amplitude spectrum with the reference spectrum in the SSWT domain, and the optimized $Q$-factor estimate is the one that minimizes the mean square error. For a robust and fast stabilized form of the amplitude compensation in the SSWT domain that damps noise amplification, the seismic pulse is truncated to a limited length and the time–frequency maps obtained with the SSWT are smoothed. Applications to synthetic vertical seismic profiling data and real stacked seismic data illustrate the effectiveness of the proposed method.
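The compensation follows the standard constant-Q attenuation model $A(f) = \exp(-\pi f t / Q)$ at travel time $t$. A sketch of a stabilized inverse of that model plus the grid search over candidate $Q$ values (the SSWT itself, pulse truncation, and map smoothing are omitted):

```python
import numpy as np

def compensate_spectrum(amp, freqs, t, Q, stab=0.01):
    """Stabilized inverse-Q amplitude compensation at travel time t.
    The stabilization factor caps the gain so high-frequency noise
    is not blown up when the attenuation is severe."""
    att = np.exp(-np.pi * freqs * t / Q)
    gain = att / (att ** 2 + stab ** 2)   # stabilized division
    return amp * gain

def estimate_q(amp_obs, amp_ref, freqs, t, candidates):
    """Grid search over candidate Q values: pick the Q whose
    compensated spectrum best matches the reference spectrum
    in the mean-square-error sense."""
    errs = [np.mean((compensate_spectrum(amp_obs, freqs, t, q) - amp_ref) ** 2)
            for q in candidates]
    return candidates[int(np.argmin(errs))]
```

In the article this comparison is carried out in the SSWT domain rather than on plain Fourier spectra, but the scan-and-compare logic is the same.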
