The Environmental trace gases Monitoring Instrument (EMI) onboard GaoFen-5 was launched in May 2018 and has operated successfully on-orbit for more than a year. EMI contains four grating spectrometers covering the 240-710 nm wavelength range with a spectral resolution of 0.3-0.5 nm, and enables one-day global coverage. For EMI on-orbit calibration, two onboard solar diffusers (SDs), a surface-reflectance aluminum diffuser and a quartz volume diffuser (QVD), are used to measure solar spectra. The solar spectra are used to perform accurate spectral and radiometric calibrations. EMI on-orbit spectral calibration comprises wavelength calibration and determination of the instrument spectral response function (ISRF), both of which are key quantities in trace gas retrievals based on differential optical absorption spectroscopy (DOAS) analysis. The wavelength calibration is performed using the Fraunhofer lines in the solar spectrum, and the ISRF parameters are obtained by fitting the EMI-measured solar spectrum against a high-resolution reference solar spectrum. Based on the known solar irradiation and the characteristics of the SDs, the SD radiance can be calculated and is used for EMI on-orbit radiometric calibration. The radiometric calibration also determines the absolute Earth reflectance spectra that are used as inputs for atmospheric retrieval algorithms. For on-orbit radiometric monitoring, the aluminum diffuser is used as a reference SD to monitor QVD degradation, and an internal white light source is used to detect pixel performance and monitor radiometric throughput.
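As a minimal illustration of Fraunhofer-line wavelength calibration (not EMI's operational code), the sketch below estimates a sub-pixel wavelength shift by cross-correlating a measured spectrum against a reference and refining the integer-lag peak with parabolic interpolation. The Gaussian line shape, 0.01 nm sampling, 0.4 nm FWHM, and 0.07 nm injected shift are illustrative assumptions.

```python
import numpy as np

def gaussian_isrf(wl, center, fwhm):
    # Gaussian slit-function model, parameterized by FWHM
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def estimate_shift(measured, reference, step):
    """Wavelength shift of `measured` relative to `reference` (same grid)."""
    n = len(measured)
    m = measured - measured.mean()
    r = reference - reference.mean()
    corr = np.correlate(m, r, mode="full")
    k = int(np.argmax(corr))
    # parabolic sub-sample refinement around the correlation peak
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return ((k - (n - 1)) + delta) * step  # shift in wavelength units

# synthetic Fraunhofer-like absorption line, shifted by 0.07 nm
wl = np.arange(350.0, 360.0, 0.01)                  # 0.01 nm sampling
line = 1.0 - 0.5 * gaussian_isrf(wl, 355.0, 0.4)
shifted = 1.0 - 0.5 * gaussian_isrf(wl, 355.07, 0.4)
print(round(estimate_shift(shifted, line, 0.01), 3))  # recovers ~0.07 nm
```

In practice the reference would be a high-resolution solar atlas convolved with the fitted ISRF rather than a synthetic line.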
Satellite-based microwave sensors that respond to the vertical distribution of hydrometeors have been continuously employed to investigate the characteristics of precipitation systems. Rain/no-rain classification (RNC) methods are often either applied before retrieving precipitation information in algorithms based on passive microwave measurements or adopted to build precipitation event-based databases. As a simple rain indicator, the polarization-corrected temperature at 89 GHz (PCT89) method using the Global Precipitation Measurement (GPM) Microwave Imager (GMI) has been employed by many researchers because it can estimate the scattering intensity at high resolution while minimizing the effects of surface emissivity. This article presents a reassessment of the PCT89-based RNC method through statistical verification. Precipitating clouds were subdivided into 11 types (including three stratiform and four convective types) by the GPM dual-frequency precipitation radar (DPR) precipitation classification algorithm. A quantitative comparison of verification results was performed in the tropics from January to December 2015, and major sources of uncertainty were analyzed from the perspective of the precipitation mechanism. Results showed a tendency toward false identification of stratiform types, except those located near the convective core, and a susceptibility to missed identification of convective types. Consequently, the method increases the number of two significant stratiform types by 70% and 54% relative to DPR, while the convective types decrease by up to 53%. This article suggests that the precipitation identified by PCT89 is biased toward the stratiform type.
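For orientation, a minimal sketch of the PCT89 rain indicator follows. The 0.818 coefficient is the commonly cited value for the polarization-corrected temperature at 85/89 GHz and the 255 K cutoff is purely illustrative, not the verified threshold from this study.

```python
import numpy as np

BETA = 0.818             # commonly cited PCT coefficient (an assumption here)
PCT_THRESHOLD_K = 255.0  # illustrative rain/no-rain cutoff, not the paper's value

def pct89(tb_v, tb_h, beta=BETA):
    """Polarization-corrected temperature at 89 GHz from V/H brightness temps."""
    return (1.0 + beta) * np.asarray(tb_v) - beta * np.asarray(tb_h)

def rain_flag(tb_v, tb_h, threshold=PCT_THRESHOLD_K):
    # scattering by ice hydrometeors depresses PCT89 below the threshold
    return pct89(tb_v, tb_h) < threshold

print(pct89(270.0, 260.0))       # warm, weakly polarized scene -> 278.18 K
print(bool(rain_flag(230.0, 225.0)))  # strong scattering depression -> True
```

The PCT construction removes most of the surface-emissivity polarization signal, which is why a single threshold can act as a rain indicator over mixed surfaces.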
This article introduces the task of visual question answering for remote sensing data (RSVQA). Remote sensing images contain a wealth of information that can be useful for a wide range of tasks, including land cover classification, object counting, and object detection. However, most of the available methodologies are task-specific, thus inhibiting generic and easy access to the information contained in remote sensing data. As a consequence, accurate remote sensing product generation still requires expert knowledge. With RSVQA, we propose a system to extract information from remote sensing data that is accessible to every user: questions formulated in natural language are used to interact with the images. With this system, images can be queried to obtain high-level information specific to the image content or relational dependencies between objects visible in the images. Using an automatic method introduced in this article, we built two data sets (one of low-resolution and one of high-resolution data) of image/question/answer triplets. The information required to build the questions and answers is queried from OpenStreetMap (OSM). The data sets can be used to train (when using supervised methods) and evaluate models that solve the RSVQA task. We report the results obtained by applying a model based on convolutional neural networks (CNNs) for the visual part and a recurrent neural network (RNN) for the natural language part of the task. The model is trained on the two data sets, yielding promising results in both cases.
Despite the successful applications of probabilistic collaborative representation classification (PCRC) in pattern classification, it still suffers from two challenges when applied to hyperspectral image (HSI) classification: 1) ineffective feature extraction in HSIs under noisy conditions and 2) lack of prior information for HSI classification. To tackle the first problem, we introduce sparse representation into PCRC, i.e., we replace the ℓ2-norm with the ℓ1-norm for effective feature extraction under noisy conditions. To utilize the prior information in HSIs, we first introduce the Euclidean distance (ED) between the training and testing samples into PCRC to improve its performance. Then, we bring the coordinate information (CI) of the HSI into the proposed model, which finally leads to the proposed locality-regularized robust PCRC (LRR-PCRC). Experimental results show that the proposed LRR-PCRC outperforms PCRC and other state-of-the-art pattern recognition and machine learning algorithms.
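The ℓ2-to-ℓ1 swap at the heart of the method can be sketched in a few lines: collaborative (ridge) coding has a closed form, while the ℓ1 variant needs an iterative solver, here ISTA. This is a generic sketch of the coding step only; the locality (ED/CI) weighting and the classification rule of LRR-PCRC are omitted, and all data and parameters are synthetic.

```python
import numpy as np

def crc_codes(X, y, lam):
    # collaborative representation: ridge-regularized least squares (closed form)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def sparse_codes(X, y, lam, n_iter=500):
    # l1-regularized coding via ISTA (iterative soft-thresholding)
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = a - (X.T @ (X @ a - y)) / L    # gradient step on the data term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))                    # dictionary of 20 atoms
y = X[:, 3] * 2.0 + 0.01 * rng.standard_normal(50)   # signal built from atom 3
a1 = sparse_codes(X, y, lam=1.0)
print(np.argmax(np.abs(a1)))  # the l1 code concentrates on atom 3
```

Under noise, the soft-thresholding in the ℓ1 solver zeroes out small spurious coefficients that the ridge solution would keep, which is the intuition behind the robustness claim.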
In this article, terrain classification of polarimetric synthetic aperture radar (PolSAR) images is studied, and a novel semi-supervised method based on improved Tri-training combined with a neighborhood minimum spanning tree (NMST) is proposed. Several strategies are included in the method: 1) a high-dimensional vector of polarimetric features obtained from the coherency matrix and diverse target decompositions is constructed; 2) this vector is divided into three subvectors, each consisting of one-third of the polarimetric features, randomly selected, and the three subvectors are used to separately train the three different base classifiers in the Tri-training algorithm to increase classification diversity; and 3) a help-training sample selection strategy based on the improved NMST, which uses both the coherency matrix and the spatial information, is adopted to select highly reliable unlabeled samples to enlarge the training sets. Thus, the proposed method can effectively take advantage of unlabeled samples to improve the classification. Experimental results show that, with a small number of labeled samples, the proposed method achieves much better performance than existing classification methods.
The surface mass balance (SMB) of West Antarctica is an important glaciological input for understanding polar climate and sea-level rise, but in situ data coverage is historically poor. Previous studies demonstrate the utility of frequency-modulated continuous-wave radar for imaging subsurface layering in ice sheets, providing an additional source of data with which to estimate SMB. Traditional methods, however, require time-intensive manual oversight. Here, we present a probabilistic, fully automated approach to estimate annual SMB and its uncertainties from radar echograms, using successive peak-finding and weighted neighborhood search algorithms with Monte Carlo simulations based on annual-layer likelihood scores. We apply this method to ground-based and airborne radar along a 175-km transect of the West Antarctic interior and compare the results to traditional manual methods and independent estimates from firn cores. The method demonstrates automated estimation of SMB across a range of accumulation rates (100-450 mm water equivalent per year) and layer gradients up to 2 m/km. Based on likelihood-weighted F-scores, automated layer picks have a success rate between 64% and 84.6% compared with manually picked layers at three validation sites dispersed across the region. Comparisons between the automated SMB estimates and independent firn cores show a bias of 24 ± 70 mm water equivalent per year (12% ± 35% of the in situ mean accumulation rate), although individual core site biases differ. This new approach permits the fully automated extraction of annual SMB rates and should be broadly and readily applicable to previously collected and ongoing radar data sets across polar regions.
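The peak-finding stage can be illustrated with a toy numpy sketch: find strict local maxima of return power above a threshold in a single echogram trace. The decaying background, layer depths (40, 95, 160 samples), and threshold are invented for illustration; the actual pipeline adds weighted neighborhood search and Monte Carlo likelihood scoring on top of this step.

```python
import numpy as np

def find_layer_peaks(trace, min_height):
    # indices where return power is a strict local maximum above min_height
    t = np.asarray(trace)
    interior = t[1:-1]
    mask = (interior > t[:-2]) & (interior > t[2:]) & (interior > min_height)
    return np.flatnonzero(mask) + 1

# synthetic depth profile: three bright annual layers on a decaying background
depth = np.arange(200)
echo = np.exp(-depth / 150.0)
for d in (40, 95, 160):                  # assumed layer depths (illustrative)
    echo += 0.5 * np.exp(-0.5 * ((depth - d) / 3.0) ** 2)
print(find_layer_peaks(echo, 0.4))       # picks the three layer depths
```

Successive application of such picks across adjacent traces, constrained by a neighborhood search, is what turns isolated peaks into continuous annual layers.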
Infrared (IR) information is fundamental to global precipitation estimation. Although researchers have developed numerous IR-based retrieval algorithms, there is still considerable scope for improving their accuracy. This article develops a novel deep learning-based algorithm termed infrared precipitation estimation using a convolutional neural network (IPEC). Based on five-channel IR data, the IPEC first identifies precipitation occurrence and then estimates precipitation rates at hourly, 0.04° × 0.04° resolution. The performance of the IPEC is validated using Stage-IV radar-gauge-combined data and compared to the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) in three subregions over the continental United States (CONUS). The results show that the five-channel input is more effective for precipitation estimation than the commonly used one-channel input. The IPEC estimates based on the five-channel input show better statistical performance than the PERSIANN-CCS, with improvements of 34.9% in Pearson's correlation coefficient (CC), 38.0% in relative bias (BIAS), and 45.2% in mean squared error (MSE) during the testing period from June to August 2014 over the central CONUS. Furthermore, the optimized IPEC model, applied to entirely independent periods and regions, still achieves significantly better performance than the PERSIANN-CCS, indicating that the IPEC has a stronger generalization capability. On the whole, this article demonstrates the effectiveness of a convolutional neural network (CNN) combined with physical multichannel inputs for IR precipitation retrieval. This end-to-end deep learning algorithm shows potential to serve as an operational technique applicable globally and provides a new perspective for the future development of satellite precipitation retrievals.
Synchrosqueezing transform (SST) is an effective time-frequency analysis (TFA) approach for processing nonstationary signals. The SST shows a satisfactory ability to localize, in the TF plane, nonlinear signals with a slowly time-varying instantaneous frequency (IF). However, for signals whose ridge curves in the TF domain vary rapidly, or are even almost parallel to the frequency axis, the SST provides a blurred TF representation (TFR). To solve this issue, the transient-extracting transform (TET) was recently put forward. The TET can effectively characterize and extract transient features into a highly concentrated TFR for strongly frequency-modulated (FM) signals, especially impulse-like signals. However, contrary to the SST, it is not suitable for weakly FM modes. In this study, we propose a TFA method called the time-synchroextracting general chirplet transform (TEGCT). The TEGCT can achieve a highly concentrated TFR for strongly FM signals as well as weakly FM ones. Quantitative indicators, the concentration measure and the peak signal-to-noise ratio, are used to compare the performance of the proposed method with that of other methods. The comparisons show that the TEGCT provides better TF localization. The proposed method was then applied to spectrum analysis of seismic data for oil reservoir characterization. Horizontal slices of offshore 3-D seismic data show that the TEGCT delineates more distinct and continuous subsurface channels in a fluvial-delta deposition system. All the results illustrate that the proposed method is a promising tool for seismic processing and interpretation in the geosciences.
Powerline inspection is an important task in electric power management. Corridor mapping, i.e., the task of surveying the surroundings of the line and detecting potentially hazardous vegetation and objects, is performed by aerial light detection and ranging (LiDAR) survey. To this purpose, the main tasks are the automatic extraction of the wires and the measurement of the distance of nearby objects from the line. In this article, we present a new fully automated solution that does not use time-consuming line-fitting methods but is instead based on simple geometrical assumptions, relying on the fact that wire points are isolated, sparse, and widely separated from all other points in the data set. In particular, we detect and classify pylons by a local-maxima strategy. Then, a new reference system, with its origin on the first pylon and its $y$-axis toward the second one, is defined. In this new reference system, transverse sections of the raw point cloud are extracted; by iterating this procedure over all detected pylons, we are able to detect the wire bundle. Obstacles are then automatically detected according to corridor mapping requirements. The algorithm is tested on two relevant data sets.
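A local-maxima pylon detector of the kind described can be sketched as follows: rasterize the highest return per grid cell and keep cells that dominate their 8-neighborhood above a height threshold. The function name, 5 m cell size, and 20 m threshold are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def detect_pylons(points, cell=5.0, min_height=20.0):
    """Local-maxima pylon candidates from an (N, 3) point array (x, y, z)."""
    xy, z = points[:, :2], points[:, 2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / cell).astype(int)
    nx, ny = ij.max(axis=0) + 1
    hmax = np.full((nx, ny), -np.inf)
    np.maximum.at(hmax, (ij[:, 0], ij[:, 1]), z)   # highest return per cell
    pad = np.pad(hmax, 1, constant_values=-np.inf)
    found = []
    for i in range(nx):
        for j in range(ny):
            win = pad[i:i + 3, j:j + 3]            # cell plus 8-neighborhood
            if hmax[i, j] >= min_height and hmax[i, j] == win.max():
                found.append(origin + (np.array([i, j]) + 0.5) * cell)
    return found

# two synthetic pylons standing above low vegetation/ground returns
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 100, 2000),
                          rng.uniform(0, 100, 2000),
                          rng.uniform(0, 2, 2000)])
pylons = np.array([[20.0, 20.0, 35.0], [80.0, 70.0, 32.0]])
pts = np.vstack([ground, pylons])
print(len(detect_pylons(pts)))   # 2
```

Each detected cell center would then seed the pylon-to-pylon reference system in which the transverse sections are cut.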
Longwave infrared (LWIR) spectroscopy is useful for detecting and identifying hazardous clouds by passive remote sensing. Gaseous constituents are usually assumed to be thin plumes in a three-layer model, in which their spectral signatures are linearly superimposed on the brightness temperature spectrum. However, the thin-plume model performs poorly for thick clouds. A modification to this method is made using synthetic references as target spectra, which allows linear models to be used for thick clouds. The prior background, which is unknown in most applications, is reconstructed through a regression method using predefined references. However, large residuals caused by fitting errors may distort the extracted spectral signatures and the identification results if the predefined references are not consistent with the real spectral shapes. A group of references is therefore generated to represent the possible spectral shapes, and the least absolute shrinkage and selection operator (LASSO) method is used to select the most appropriate reference for spectral fitting. Small residuals and adaptive identification are achieved by automatically selecting the reference spectrum. Two experiments are performed to verify the proposed algorithm. Ethylene is adaptively detected during an indoor release process, where the spectral shape varies with the amount released. In addition, ammonia is measured under different humidity conditions, and the background is adaptively removed using the LASSO method. Based on this research, LWIR remote sensing can be applied in various target-detection scenarios, with adaptive identification promoting hazardous cloud detection.
This article proposes an innovative scheme for recovering sparse reflectivity series from uniformly quantized seismic signals. In this scheme, impulses statistically less affected by the quantization error are assigned higher weights than those with larger errors. First, the orthogonal matching pursuit (OMP) algorithm is applied to a quantized seismic trace to obtain a set of conservative estimates of the reflectivity impulses. Second, the quantization error is formulated as a systematic uncertainty within a neighborhood of the conservative estimates obtained from the OMP. Finally, a robust worst-case (RWC) deconvolution method is developed to recover an improved estimate of the reflectivity impulses. The proposed scheme significantly increases the robustness of, and enhances, the reflectivity impulses recovered by the OMP algorithm, as substantiated by experiments on both synthetic and real seismic data. Specifically, falsely or overly estimated impulses are significantly suppressed, and robustness to changes in the quantization interval is enhanced.
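The first stage, OMP on a quantized trace, can be sketched with a hand-rolled implementation: build a convolutional wavelet dictionary, quantize the trace, and greedily recover the support. The wavelet shape, spike positions, and 0.1 quantization step are invented for illustration, and the RWC refinement stage is not shown.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms, refit by least squares."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonalized residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# sparse reflectivity, convolutional wavelet dictionary, uniform quantization
n = 100
t = np.arange(-20, 21)
wavelet = (1.0 - 0.1 * t**2) * np.exp(-0.05 * t**2)   # illustrative wavelet
A = np.array([np.convolve(np.eye(n)[i], wavelet, mode="same") for i in range(n)]).T
refl = np.zeros(n); refl[30], refl[60] = 1.0, -0.7
quantized = np.round((A @ refl) / 0.1) * 0.1          # quantization step 0.1
print(np.flatnonzero(omp(A, quantized, 2)))           # recovers atoms 30 and 60
```

These greedy picks are exactly the "conservative estimates" that the worst-case deconvolution would then refine within quantization-error neighborhoods.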
The angular and spectral kernel-driven (ASK) model distinguishes soil and vegetation spectral features through their component spectra and is a promising model for combining multisensor data in inversion. However, its global application is limited by the availability of component spectra. This article proposes a parameterization of the ASK component spectra of soil and leaf from global spectral libraries such as ANGERS, GOSPEL, LOPEX, and USGS. A statistical ratio (y) of various leaf to soil spectra is used to capture their spectral differences and variations, taken at the mean (m) plus u standard deviations (σ), with u ∈ {0, ±0.5, ±1} [i.e., y(m + uσ)]. Optimization-based inversion is applied to determine the ratio candidate y(m + uσ), allowing more tolerance for spectral uncertainty and relaxing the semiempirical nature of the kernel-driven model. Simulation data analysis demonstrates the feasibility of the approach and its good capture of vegetation-soil spectral differences. The model's bidirectional reflectance factor (BRF) fitting error [root-mean-square error (RMSE)] of 0.0245 is slightly larger than the 0.0178 obtained with the true component spectra, and the albedo RMSE is 0.0116 for black-sky albedo and 0.0182 for white-sky albedo. The results also show good robustness to noise: noise levels up to 20% produce a BRF fitting error of 0.0277 and a negligible influence on albedo. The albedo synergistically retrieved from multisensor satellite data is consistent with in situ measurements, with an RMSE of 0.0171, compared to 0.0131 for retrievals from the true component spectra. The new parameterization sacrifices some accuracy, but it is simple and operational for global retrieval with satisfactory precision.
Existing methods for small target detection in infrared videos are not effective against complex backgrounds. This is mainly caused by: 1) the interference of strong edges and the similarity of the target to other nontarget objects and 2) the lack of context information about both the background and the target in the spatio-temporal domain. Considering these two points, we propose to slide a window in a single frame and form a spatio-temporal cube from the current frame patch and patches of other frames in the spatio-temporal domain. We then establish a spatio-temporal tensor model based on these patches. Given the sparse prior on the target and the local correlation of the background, the separation of the target and the background can be cast as a low-rank and sparse tensor decomposition problem, and the target is obtained from the sparse tensor produced by the decomposition. Experiments show that our method achieves better detection performance on infrared videos with complex backgrounds by making full use of the spatio-temporal context information.
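The low-rank plus sparse separation can be illustrated in its matrix form (the article's model is a tensor generalization of this idea) with a minimal inexact-ALM robust PCA sketch: singular-value thresholding recovers the correlated background, soft-thresholding isolates the sparse target. The rank-1 background, single-pixel target, and standard λ = 1/√(max dim) choice are illustrative assumptions.

```python
import numpy as np

def rpca(D, n_iter=100, rho=1.5):
    """Inexact-ALM robust PCA: D ≈ L (low-rank background) + S (sparse targets)."""
    lam = 1.0 / np.sqrt(max(D.shape))
    mu = 1.25 / np.linalg.norm(D, 2)
    Y = np.zeros_like(D)                       # Lagrange multiplier
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # singular-value thresholding for the low-rank part
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # elementwise soft-thresholding for the sparse part
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (D - L - S)
        mu = min(mu * rho, 1e7)
    return L, S

# rank-1 correlated background plus one bright target pixel
rng = np.random.default_rng(2)
u = rng.uniform(0.5, 1.0, 30)
D = np.outer(u, u)                  # smooth, locally correlated background
D[12, 17] += 5.0                    # small target
L, S = rpca(D)
print(np.unravel_index(np.argmax(np.abs(S)), S.shape))   # the target location
```

In the tensor setting, the SVD step is replaced by a tensor decomposition applied to the stacked spatio-temporal patch cube, but the sparse-prior/low-rank split is the same.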
In this article, an automatic and forward method is developed to establish attributed scattering center models directly from the computer-aided design (CAD) model of a complex target. The main steps include preprocessing of the CAD model, separation of scattering sources, selection of strong scattering sources, and automatic determination of model parameters. With the proposed method, the scattering sources, scattering mechanisms, and model parameters of the scattering centers can be identified and derived, so that complicated manual intervention is completely avoided. Moreover, the method establishes models of distributed scattering centers formed by curved surfaces with large curvature radii; the formation mechanism of the distributed scattering center is thereby extended from typical scattering structures to the general case, so the attributed scattering center model can describe scattering from the real structures of a complex target. The geometric shape of the scattering source is distinguished based on the principal curvature radii, which are calculated using differential geometry, and the frequency dependence parameter is then obtained from its correspondence with the geometric shape. In addition, based on the automatic method, a technique is developed to diagnose and correct the scattering center models of a target whose CAD model is unknown or only partially known. Finally, parametric models of several targets in the Moving and Stationary Target Acquisition and Recognition (MSTAR) program are established and compared with the measured data. The results validate the effectiveness of the proposed method.
Primary productivity (PP) has recently been investigated using remote sensing-based models over quite limited geographical areas of the Red Sea. This work sheds light on how phytoplankton and primary production would react to the effects of global warming in the extreme environment of the Red Sea and, hence, illuminates how similar regions may behave in the context of climate variability. The study uses satellite observations to conduct an intercomparison of three net primary production (NPP) models: the vertically generalized production model (VGPM), the Eppley-VGPM, and the carbon-based production model (CbPM), each produced over the Red Sea domain for the 1998-2018 period. A detailed investigation is conducted using multilinear regression analysis, multivariate visualization, and moving-average correlative analysis to uncover the models' responses to various climate factors. We use the models' eight-day composites and monthly averages, compared with satellite-based variables including chlorophyll-a (Chla), mixed layer depth (MLD), and sea-surface temperature (SST). Seasonal anomalies of NPP are analyzed against different climate indices, namely, the North Pacific Gyre Oscillation (NPGO), the Multivariate ENSO Index (MEI), the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO), and the Dipole Mode Index (DMI). In our study, only the CbPM shows significant correlations with the NPGO, MEI, and PDO, in disagreement with the other two NPP models. This can be attributed to the models' differing connections to oceanographic and atmospheric parameters, as well as to the trends in the southern Red Sea, thus calling for further validation efforts.
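The multilinear regression step can be sketched with ordinary least squares in numpy. The data below are synthetic stand-ins for the NPP anomaly and climate index time series, and the "sensitivity" coefficients are invented for the example.

```python
import numpy as np

# synthetic stand-in: NPP anomaly as a linear mix of three climate indices
rng = np.random.default_rng(5)
n_months = 240
indices = rng.standard_normal((n_months, 3))   # stand-ins for NPGO, MEI, PDO
true_coef = np.array([0.8, -0.3, 0.1])         # invented sensitivities
npp_anom = indices @ true_coef + 0.1 * rng.standard_normal(n_months)

# multilinear regression: intercept plus one coefficient per index
X = np.column_stack([np.ones(n_months), indices])
beta, *_ = np.linalg.lstsq(X, npp_anom, rcond=None)
print(np.round(beta[1:], 2))                   # recovered sensitivities
```

In the real analysis, the regressors would be the observed index records (and, for the eight-day composites, the Chla, MLD, and SST fields), with significance testing applied to each fitted coefficient.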
The spatiotemporal coverage of a Moon-based synthetic aperture radar (SAR) is analyzed based on the imaging geometry, upon which the spatial coverage and image formation rely. The distance from the Earth to the Moon-based SAR and the bounds of the grazing and azimuthal angles jointly determine the coverage area on the Earth's surface. Meanwhile, the ground coverage of the Moon-based SAR is determined by the bounds of the grazing and azimuthal angles and the geographic coordinates of the nadir point at a specified time. Moreover, the temporal variation in the spatial coverage follows the temporally varying nadir point of the Moon-based SAR on the Earth's surface. Furthermore, numerical simulations using lunar ephemeris data are carried out to complement the analysis and illustrate the spatiotemporal coverage. Finally, a guideline for the optimal site selection of a Moon-based SAR is proposed. In conclusion, a Moon-based SAR has the potential to perform long-term, continuous Earth observations on a global scale, enhancing our capability to understand the planet.
In the case of sparse aperture, the coherence between pulses of the radar echo is destroyed, which challenges inverse synthetic aperture radar (ISAR) autofocusing and imaging. Mathematically, reconstructing the ISAR image from sparse aperture radar echoes is a linear underdetermined inverse problem, which, by nature, can be solved by the rapidly developing compressive sensing (CS), or sparse signal recovery, theory. However, CS-based sparse aperture ISAR imaging algorithms are generally computationally heavy, which prevents their application in real-time ISAR imaging systems. In this article, we propose a novel and computationally efficient ISAR autofocusing and imaging algorithm for sparse aperture. We first consider a generalized CS model for ISAR imaging and autofocusing with sparsity and entropy-minimization regularizations, and then use the alternating direction method of multipliers (ADMM) to optimize the model. To improve computational efficiency, the matrix inversion is translated into an elementwise division by using a partial Fourier dictionary, and the 2-D ISAR image is updated as a whole instead of range cell by range cell. To achieve autofocusing for sparse aperture, the phase error is estimated by minimizing the entropy of the ISAR image reconstructed in each iterative loop. Experiments on both simulated and measured data validate that the proposed algorithm achieves well-focused ISAR images within a few seconds, ten times faster than reported sparse aperture ISAR imaging algorithms.
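One way the partial Fourier dictionary removes the matrix inversion can be sketched as follows (a generic identity, not necessarily the article's exact derivation): if A consists of selected rows of a unitary DFT, then AAᴴ = I, and the Woodbury identity reduces the ADMM-style solve (AᴴA + ρI)⁻¹v to two FFTs and scalar division.

```python
import numpy as np

def x_update(v, mask, rho):
    """Apply (A^H A + rho I)^{-1} to v, where A keeps the DFT rows in `mask`.

    Since A A^H = I for rows of a unitary DFT, Woodbury gives
    (A^H A + rho I)^{-1} v = (v - A^H (A v) / (rho + 1)) / rho,
    i.e., FFTs and elementwise scaling instead of an n x n inversion.
    """
    Av = np.fft.fft(v, norm="ortho")[mask]       # A v
    padded = np.zeros(len(v), dtype=complex)
    padded[mask] = Av / (rho + 1.0)
    return (v - np.fft.ifft(padded, norm="ortho")) / rho   # A^H via inverse FFT

# check against a direct dense solve
n, rho = 64, 0.5
rng = np.random.default_rng(3)
mask = np.sort(rng.choice(n, size=40, replace=False))  # observed (sparse) pulses
F = np.fft.fft(np.eye(n), norm="ortho")                # unitary DFT matrix
A = F[mask]
v = rng.standard_normal(n)
direct = np.linalg.solve(A.conj().T @ A + rho * np.eye(n), v)
print(np.allclose(x_update(v, mask, rho), direct))     # True
```

Applied column-by-column (or to the whole 2-D image via 2-D FFTs), this is what lets the image be updated as a whole at FFT cost in each ADMM iteration.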
Spectral unmixing is an important task in hyperspectral image (HSI) analysis and processing. Sparse representation has become a promising semisupervised approach to remotely sensed hyperspectral unmixing, and incorporating spectral or spatial information to improve the unmixing results under a weighted sparse unmixing framework is a recent trend. While most methods focus on analyzing HSIs by exploiting spatial information, hyperspectral data are characterized by a large contiguous set of wavelengths, and this information can naturally be used to improve the representation of pixels in an HSI. To take advantage of the rich spectral information as well as the spatial information for hyperspectral unmixing, in this article, we explore and introduce a multiview data processing approach based on spectral partitioning to benefit from the abundant spectral information in HSIs. Some important findings on the application of multiview data sets in sparse unmixing are discussed. Meanwhile, we develop a new spectral-spatial-weighted multiview collaborative sparse unmixing (MCSU) model to handle such multiview data sets. The MCSU uses a weighted sparse regularizer, which includes both multiview spectral and spatial weighting factors, to further impose sparsity on the fractional abundances. The weights are adaptively updated in association with the abundances, and the proposed MCSU can be solved efficiently by the alternating direction method of multipliers. Experimental results on both simulated and real hyperspectral data sets demonstrate the effectiveness of the proposed MCSU, which significantly improves the abundance estimation results.
High/very-high-resolution (HR/VHR) multitemporal images are important in remote sensing for monitoring the dynamics of the Earth's surface. Unsupervised object-based image analysis provides an effective solution for analyzing such images. Image semantic segmentation assigns pixel labels corresponding to meaningful object groups and has been extensively studied in the context of single-image analysis, but not explored for the multitemporal case. In this article, we extend supervised semantic segmentation to the unsupervised joint semantic segmentation of multitemporal images. We propose a novel method that processes multitemporal images by feeding them separately to a deep network composed of trainable convolutional layers. The training process does not involve any external labels, and segmentation labels are obtained from the argmax classification of the final layer. A novel loss function is used to detect object segments in individual images as well as to establish correspondences between distinct multitemporal segments. Multitemporal semantic labels and the weights of the trainable layers are jointly optimized over iterations. We tested the method on three different HR/VHR data sets from Munich, Paris, and Trento, and the results show the method to be effective. We further extended the proposed joint segmentation method to change detection (CD) and tested it on a VHR multisensor data set from Trento.
The first rotating fan-beam scatterometer, onboard the China-France Oceanography Satellite (CFOSAT), was successfully launched on October 29, 2018. The CFOSAT SCATterometer (CSCAT) is dedicated to monitoring sea surface wind vectors but also provides valuable data for applications over land and polar regions. This article provides an overview of the relevant procedures of CSCAT data processing, including onboard signal processing and operational ground processing. A postlaunch analysis is then carried out to evaluate the first results of the CSCAT L1 and L2 products. It shows that the CSCAT instrument is generally stable in terms of noise measurements and internal calibration, barring important changes in the system configuration. Specifically, the CSCAT backscatter (σ0) precision and wind quality are studied using a set of collocated ancillary data. The σ0 precision degrades as wind speed decreases, and it is relatively low at high incidence angles (e.g., θ > 46°). In particular, the backscatter estimation of the horizontally polarized beam should be further improved by correcting the noise subtraction factor. The retrieved CSCAT winds are in good agreement with the European Centre for Medium-Range Weather Forecasts (ECMWF) winds, the Advanced Scatterometer (ASCAT) winds, and buoy winds. However, due to unresolved calibration and interbeam consistency problems, the wind quality degrades remarkably for the out-swath and nadir-region wind vector cells, implying that the σ0 calibration should be improved in future updates.