Over the past few years, hyperspectral image classification using deep networks, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), has progressed significantly and gained increasing attention. In spite of their success, these networks need an adequate supply of labeled training instances for supervised learning, which are costly to collect. On the other hand, unlabeled data can be accessed in almost arbitrary amounts. Hence, it is of great conceptual interest to explore networks that can exploit labeled and unlabeled data simultaneously for hyperspectral image classification. In this article, we propose a novel graph-based semisupervised network called the nonlocal graph convolutional network (nonlocal GCN). Unlike existing CNNs and RNNs that receive pixels or patches of a hyperspectral image as inputs, this network takes the whole image (including both labeled and unlabeled data) as input. More specifically, a nonlocal graph is first calculated. Given this graph representation, a couple of graph convolutional layers are used to extract features. Finally, the semisupervised learning of the network is done by using a cross-entropy error over all labeled instances. Note that the nonlocal GCN is end-to-end trainable. We demonstrate in extensive experiments that, compared with state-of-the-art spectral classifiers and spectral–spatial classification networks, the nonlocal GCN offers competitive results and high-quality classification maps (with fine boundaries and without noisy scattered points of misclassification).
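A minimal sketch of this semisupervised recipe, assuming a precomputed normalized adjacency matrix over all pixels and a boolean mask marking the labeled ones; layer sizes and names are illustrative, not the authors' implementation:

```python
# Two-layer GCN with a masked cross-entropy loss: the loss touches only
# labeled nodes, but unlabeled nodes still shape the features through
# graph propagation (the semisupervised mechanism described above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # Graph convolution: aggregate neighbor features, then transform.
        return self.linear(a_hat @ x)

class SemiSupervisedGCN(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hidden_dim)
        self.gc2 = GCNLayer(hidden_dim, n_classes)

    def forward(self, x, a_hat):
        h = F.relu(self.gc1(x, a_hat))
        return self.gc2(h, a_hat)

def masked_cross_entropy(logits, labels, labeled_mask):
    # Cross-entropy over labeled instances only.
    return F.cross_entropy(logits[labeled_mask], labels[labeled_mask])
```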
Two leaf optical property models, PROSPECT-D and ABM-B, were compared to determine their respective parameter sensitivities and to correlate their parameters. ABM-B was used to generate 150 leaf spectra with various input parameters, and the inversion of PROSPECT-D was used to estimate leaf parameters from these spectra. Wavelength-specific sensitivities were described, and correlations were developed between the leaf pigment and structure parameters of the two models. Of particular importance was the correlation of PROSPECT-D's structure parameter (N), a generalized parameter integrating several leaf-level and cell-level characteristics. At the leaf level, N correlated with leaf thickness and mesophyll percentage; at the cell level, N was affected by the cell cap aspect ratios defined in ABM-B. The estimated value of N also varied substantially with changes in the angle of incidence specified in ABM-B. All of these correlations were nonlinear, and it is unclear how these parameters combine to affect the final value of N. The correlations developed in this article indicate that additional structural parameters (possibly separated into leaf-level and cell-level) should be considered in future model development that aims to maintain inversion potential while providing more information about the leaf.
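The inversion step can be pictured as a least-squares fit of a forward leaf model to a measured spectrum. A hedged sketch, in which the forward model, its parameter vector, and the bounds are placeholders rather than the actual PROSPECT-D code:

```python
# Generic model inversion by least squares: find the parameter vector
# whose simulated reflectance best matches the measured spectrum.
import numpy as np
from scipy.optimize import least_squares

def invert_leaf_model(measured_refl, wavelengths, forward_model, x0, bounds):
    # forward_model(params, wavelengths) -> simulated reflectance
    # (placeholder for a PROSPECT-style model).
    def residuals(params):
        return forward_model(params, wavelengths) - measured_refl
    fit = least_squares(residuals, x0=x0, bounds=bounds)
    return fit.x  # estimated leaf parameters, e.g., N and pigments
```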
Attenuated backscatter measurements from a Vaisala CL31 ceilometer and a modified form of the well-known slope method are used to derive ceilometer extinction profiles during rain events, restricted to rainfall rates (RRs) below approximately 10 mm/h. RR estimates from a collocated S-band radar and a portable disdrometer are used to derive RR-to-extinction correlation models for the ceilometer-radar and ceilometer-disdrometer combinations. Data were collected during an intensive observation period of the Verification of the Origins of Rotation in Tornadoes Experiment Southeast (VORTEX-SE) conducted in northern Alabama. These models are used to estimate the RR from ceilometer observations in similar situations that lack a collocated radar or disdrometer. Such correlation models are, however, limited by the different temporal and spatial resolutions of the measured variables, the measurement capabilities of the instruments, and the inherent assumption of a homogeneous atmosphere. An empirical method based on extinction and RR uncertainty scoring and covariance fitting is proposed to address, in part, these limitations.
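The classical slope method behind this retrieval fits a line to the log of the attenuated backscatter in a presumed homogeneous layer; the extinction coefficient is minus half the fitted slope. A minimal sketch, with variable names chosen for illustration:

```python
# Slope method: in a homogeneous layer, ln(beta_att(z)) ~ const - 2*sigma*z,
# so the extinction sigma equals -slope/2 of a linear fit over the layer.
import numpy as np

def slope_method_extinction(beta_att, z, z_min, z_max):
    sel = (z >= z_min) & (z <= z_max) & (beta_att > 0)
    slope, _ = np.polyfit(z[sel], np.log(beta_att[sel]), 1)
    return -0.5 * slope  # extinction in 1/m if z is in meters
```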
In this article, we generate a regional mapping of spaceborne carbon dioxide (CO2) concentration through a data fusion approach that includes emission estimates and land use and land cover (LULC) information. NASA's Orbiting Carbon Observatory-2 (OCO-2) satellite measures the column-averaged CO2 dry air mole fraction (XCO2) as contiguous parallelogram footprints. A major hindrance of this data set, specifically its Level-2 observations, is missing footprints at certain time instants and the sparse sampling density in time. This article aims to generate Level-3 XCO2 maps on a regional scale for different locations worldwide through spatial interpolation of the OCO-2 retrievals. To deal with the sparse OCO-2 sampling, cokriging-based spatial interpolation methods are suitable, as they model auxiliary densely sampled variables to predict the primary variable. Here, a cokriging-based approach is applied using auxiliary emission data sets and the principles of the semantic kriging (SemK) method. Two global high-resolution emission data sets, the Open-source Data Inventory for Anthropogenic CO2 (ODIAC) and the Emissions Database for Global Atmospheric Research (EDGAR), are used. The ontology-based semantic analysis of the SemK method quantifies the interrelationships of LULC classes for analyzing the local XCO2 pattern. Validations have been carried out in different regions worldwide where the OCO-2 and Total Carbon Column Observing Network (TCCON) measurements coexist. It is observed that modeling auxiliary emission data sets enhances the prediction accuracy of XCO2. This article is one of the initial attempts to generate Level-3 XCO2 mapping of OCO-2 through a data fusion approach using emission data sets.
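As a rough illustration of the auxiliary-variable idea (not the SemK method itself), regression kriging, a simpler cousin of cokriging, regresses the sparse primary variable on a densely sampled auxiliary field and kriges the residuals. A sketch assuming pykrige is available; grids and names are placeholders:

```python
# Regression kriging: linear trend from the auxiliary emission field,
# plus ordinary kriging of the trend residuals.
import numpy as np
from pykrige.ok import OrdinaryKriging

def regression_krige(lon, lat, xco2, emission_at_obs,
                     grid_lon, grid_lat, emission_grid):
    # Fit the XCO2-vs-emission trend at observation points.
    a, b = np.polyfit(emission_at_obs, xco2, 1)
    residuals = xco2 - (a * emission_at_obs + b)
    # Krige the residuals onto the target grid.
    ok = OrdinaryKriging(lon, lat, residuals, variogram_model="spherical")
    res_grid, _ = ok.execute("grid", grid_lon, grid_lat)
    # Recombine trend and kriged residuals (emission_grid matches the grid).
    return a * emission_grid + b + res_grid
```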
Semantic segmentation is one of the fundamental tasks in understanding and applying urban scene point clouds. Recently, deep learning has been introduced to the field of point cloud processing. However, compared with images, which have a regular data structure, a point cloud is a set of unordered points, which makes semantic segmentation challenging. Consequently, existing deep learning methods for semantic segmentation of point clouds achieve less success than those applied to images. In this article, we propose a novel method for urban scene point cloud semantic segmentation using deep learning. First, we use homogeneous supervoxels to reorganize raw point clouds, which effectively reduces the computational complexity and mitigates the nonuniform point distribution. Then, we use supervoxels as basic processing units, which further expands receptive fields to obtain more descriptive contexts. Next, a sparse autoencoder (SAE) is presented for feature embedding representations of the supervoxels. Subsequently, we propose a regional relation feature reasoning module (RRFRM) inspired by relation reasoning networks and design a multiscale regional relation feature segmentation network (MS-RRFSegNet) based on the RRFRM to semantically label supervoxels. Finally, the supervoxel-level inferences are transformed into point-level fine-grained predictions. The proposed framework is evaluated on two open benchmarks (Paris-Lille-3D and Semantic3D). The evaluation results show that the proposed method achieves competitive overall performance and outperforms other related approaches in several object categories. An implementation of our method is available at https://github.com/HiphonL/MS_RRFSegNet.
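A minimal sketch of a sparse autoencoder of the kind used for the supervoxel embeddings, with an L1 sparsity penalty on the hidden code; dimensions and the penalty form are assumptions, not the paper's configuration:

```python
# SAE: reconstruct the input supervoxel feature through a narrow,
# sparsity-penalized code that serves as the embedding.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def sae_loss(x, recon, code, sparsity_weight=1e-3):
    # Reconstruction error plus an L1 sparsity term on the embedding.
    return nn.functional.mse_loss(recon, x) + sparsity_weight * code.abs().mean()
```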
The objective of this article is a systematic investigation of the sensitivity of C- and X-band emissions to leaf shape and orientation for various growth stages of corn. To simulate these effects, we used the model developed at Tor Vergata University (TOV model), which is based on a matrix doubling algorithm that accounts for multiple scattering. Corn leaves have specific properties of shape, curvature, and orientation. We compared different approaches, including segmented elliptical disks oriented along the leaf curvature, a single elliptical disk per leaf, and segmented circular disks whose size is determined by the shorter leaf dimension and which follow the leaf curvature. Moreover, widespread leaf inclination angle distribution functions, combined with in situ measurements of leaf inclination angle, are adopted. The scatterers' phase matrix calculations are based on the physical optics approximation. Simulations are conducted with ground-measured soil and vegetation properties as inputs and evaluated against the corresponding ground-based, multifrequency radiometer observations carried out in four different years over Chinese sites. The investigations show that, in most cases, the segmented circular disk assumption corresponds best to the measurements over intermediate growth stages, when vegetation heights lie between 50 and 200 cm, whereas the single elliptical disk model achieves the best correspondence for later growth stages, when vegetation heights exceed 200 cm with a prefer-erectophile distribution of leaf orientation. The use of in situ leaf inclination angle measurements can improve the model accuracy by up to 25 K for tall vegetation compared with a random distribution assumption.
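To make the leaf inclination angle distribution functions concrete, here is the classical erectophile form (most leaves near vertical), shown only as an example of how an assumed distribution enters a canopy scattering model; the choice of this particular function is illustrative:

```python
# Erectophile LIDF: g(theta) = (2/pi) * (1 - cos(2*theta)) on [0, pi/2].
# Normalization and the mean leaf angle are checked numerically.
import numpy as np

def erectophile_lidf(theta):
    return (2.0 / np.pi) * (1.0 - np.cos(2.0 * theta))

theta = np.linspace(0.0, np.pi / 2.0, 1000)
pdf = erectophile_lidf(theta)
print("integral ~ 1:", np.trapz(pdf, theta))
print("mean leaf angle [deg]:", np.degrees(np.trapz(theta * pdf, theta)))
```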
Wintertime Arctic surface emissivities are retrieved from Advanced Technology Microwave Sounder (ATMS) passive microwave measurements at 88.2, 165.5, and 183.31 GHz. Surface emitting layer temperatures are simultaneously retrieved at 183.31 GHz. Random errors in emissivities are estimated to be 2.0%, 2.0%, and 3.5% at 88.2, 165.5, and 183.31 GHz, respectively, and the random errors in surface emitting layer temperatures are 4.3 K. A series of tests on the retrieved products reveals that land and sea ice are Lambertian reflectors, whereas ocean is a specular reflector. The retrieved emissivities show broad agreement with products from published databases, with differences partly due to the uncertainties in surface emitting layer temperatures. The geographical distribution of 165.5/183.31 GHz surface reflectance ratios over land and sea ice, which is important for the retrieval of the microwave satellite water vapor column (WVC), is presented. Neglecting the geographical variations leads to random errors in retrieved wintertime Arctic WVCs of approximately 1.8% and 25% in the mid (1.5–9 kg/m2) and extended (8–15 kg/m2) slant column retrieval regimes, respectively. Choosing specular instead of Lambertian reflection in the surface emissivity retrievals over land and sea ice causes systematic WVC retrieval errors of up to -4.1%.
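The core of a window-channel emissivity retrieval can be sketched by inverting a single-layer radiative transfer equation, TB = TB_up + t·(e·Ts + (1 - e)·TB_down), here under a specular assumption; all numerical inputs are invented for illustration and are not from the paper:

```python
# Invert TB = TB_up + t * (e*Ts + (1 - e)*TB_down) for the emissivity e,
# given the upwelling/downwelling terms, transmittance, and surface
# emitting layer temperature Ts.
def retrieve_emissivity(tb, tb_up, tb_down, transmittance, ts):
    return (tb - tb_up - transmittance * tb_down) / (transmittance * (ts - tb_down))

# Example with plausible (invented) 88.2 GHz values:
print(retrieve_emissivity(tb=230.0, tb_up=15.0, tb_down=20.0,
                          transmittance=0.85, ts=260.0))  # ~0.97
```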
By analyzing the motion characteristics and the radar observation model of triaxially stabilized space targets, a new 3-D geometry reconstruction method is proposed based on the energy accumulation of an inverse synthetic aperture radar (ISAR) image sequence. According to the radar line of sight (LOS), we first construct the projection vectors of the 3-D geometry of a space target on the imaging planes. Then, by projecting the 3-D scatterer candidates onto each imaging plane, we accumulate the scattering energy at the corresponding 2-D projection position in each image. The 3-D scatterer candidates with the largest accumulated energy are retained as the real scatterers. To improve efficiency, the real 3-D scatterers are searched one by one using the particle swarm optimization (PSO) algorithm. Compared with traditional 3-D geometry reconstruction methods, the proposed one requires neither 2-D scatterer extraction nor trajectory association, both of which remain challenges in ISAR image processing. Experimental results based on a simulated point target and electromagnetic data are presented to validate the effectiveness and robustness of the proposed method.
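The energy-accumulation score for one candidate can be sketched as follows: project the 3-D candidate onto each imaging plane and sum the image intensity at the projected pixels (PSO would then maximize this score). Projection matrices and the image stack are placeholders; interpolation is omitted for brevity:

```python
# Accumulate scattering energy of a 3-D candidate across an ISAR
# image sequence via its 2-D projections.
import numpy as np

def accumulated_energy(candidate_xyz, projections, images):
    # projections: list of 2x3 matrices mapping 3-D points to (row, col);
    # images: list of 2-D ISAR magnitude images.
    energy = 0.0
    for P, img in zip(projections, images):
        r, c = (P @ candidate_xyz).round().astype(int)
        if 0 <= r < img.shape[0] and 0 <= c < img.shape[1]:
            energy += abs(img[r, c])
    return energy
```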
Remote sensing image (RSI) classification is one of the most important tasks in RSI processing. RSIs are notoriously complicated owing to their varied contents, so it is difficult to distinguish scene categories with similar visual appearance, such as desert and bare land. To address such hard negative categories, an attribute-cooperated convolutional neural network (ACCNN) is proposed to exploit attributes as additional guiding information. First, the classification branch extracts a convolutional neural network feature, which is then used to recognize the RSI scene category. Second, an attribute branch is proposed to help the network distinguish scene categories efficiently; it shares feature-extraction layers with the classification branch and makes the classification branch aware of extra attribute information. Finally, a relationship branch constrains the relationship between the classification branch and the attribute branch. To exploit the attribute information, three attribute-classification data sets are generated (AC-AID, AC-UCM, and AC-Sydney). Experimental results show that the proposed method is competitive with state-of-the-art methods. The data sets are available at https://github.com/CrazyStoneonRoad/Attribute-Cooperated-Classification-Datasets.
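A minimal sketch of the shared-backbone, two-branch idea (classification head plus attribute head); layer sizes, the loss weighting, and the omission of the relationship branch are simplifications, not the ACCNN architecture:

```python
# Two heads on one feature extractor: scene class (cross-entropy) and
# multi-label attributes (binary cross-entropy).
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    def __init__(self, feat_dim, n_classes, n_attributes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.class_head = nn.Linear(feat_dim, n_classes)
        self.attr_head = nn.Linear(feat_dim, n_attributes)

    def forward(self, x):
        f = self.backbone(x)          # shared features
        return self.class_head(f), self.attr_head(f)

def joint_loss(class_logits, attr_logits, labels, attrs, w=0.5):
    ce = nn.functional.cross_entropy(class_logits, labels)
    bce = nn.functional.binary_cross_entropy_with_logits(attr_logits, attrs)
    return ce + w * bce
```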
Recently, super-resolution (SR) of satellite videos has received increasing attention, as it can overcome the limitation of spatial resolution in applications of satellite videos to dynamic analysis. The low quality of satellite videos poses major challenges to spatial SR techniques, e.g., accurate motion estimation and motion compensation for multiframe SR. Therefore, reasonable image priors in a maximum a posteriori (MAP) framework, where motion information among adjacent frames is involved, are needed to regularize the solution space and generate the corresponding high-resolution frames. In this article, an effective satellite video SR framework based on locally spatiotemporal neighbors and nonlocal similarity modeling is proposed. First, local prior knowledge is represented by adaptively exploiting spatiotemporal neighbors; in this way, local motion information can be captured implicitly, without explicit motion estimation. Second, nonlocal spatial similarity is integrated into the proposed SR framework to enhance texture details. Finally, the locally spatiotemporal regularization and nonlocal similarity modeling lead to a complex optimization problem, which is solved via iterated reweighted least squares. Videos from the Jilin-1 and OVS-1A satellites are used to evaluate the proposed method. Experimental results show that it preserves edges and texture details better than state-of-the-art video SR methods.
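Iterated reweighted least squares (IRLS), the solver family named above, alternates between a weighted least-squares solve and recomputing weights from the current residuals. A minimal sketch for a robust linear problem, illustrating the solver rather than the paper's exact formulation:

```python
# IRLS for min_x sum rho(a_i^T x - b_i): L1-style reweighting turns the
# robust problem into a sequence of weighted least-squares solves.
import numpy as np

def irls(A, b, n_iter=20, eps=1e-6):
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares start
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)   # downweight large residuals
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return x
```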
Hyperspectral image (HSI) super-resolution refers to enhancing the spatial resolution of a 3-D image with many spectral bands (slices). It is a seriously ill-posed problem when the low-resolution (LR) HSI is the only input, and is better solved by fusing the LR HSI with a high-resolution (HR) multispectral image (MSI) to obtain a 3-D image with both high spectral and spatial resolution. In this article, we propose a novel nonnegative and nonlocal 4-D tensor dictionary learning-based HSI super-resolution model using group-block sparsity. By grouping similar 3-D image cubes into clusters and then conducting super-resolution cluster by cluster using a 4-D tensor structure, we not only preserve the structure but also achieve sparsity within each cluster, owing to the collection of similar cubes. We use 4-D tensor Tucker decomposition and impose nonnegativity constraints on the dictionaries, together with group-block sparsity. Numerous experiments demonstrate that the proposed model outperforms many state-of-the-art HSI super-resolution methods.
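A hedged sketch of the building block named above, a nonnegative Tucker decomposition of a 4-D stack of similar cubes (cube index × height × width × band), using tensorly; the ranks and data are placeholders, and this shows only the decomposition, not the full super-resolution algorithm:

```python
# Nonnegative Tucker decomposition of a cluster of similar image cubes.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

cubes = np.abs(np.random.rand(20, 8, 8, 31))   # 20 similar 8x8x31 cubes
core, factors = non_negative_tucker(tl.tensor(cubes), rank=[5, 4, 4, 10])
approx = tl.tucker_to_tensor((core, factors))
print("relative error:", np.linalg.norm(approx - cubes) / np.linalg.norm(cubes))
```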
Accurate ranging and wideband tracking are treated as two independent and separate processes in traditional radar systems. As a result, limited by the low data rate of nonsequential processing, accurate ranging is usually inefficient in practical applications. Similarly, without accurate ranging, the data after thresholding and clustering are used in wideband tracking, leading to a significant decrease in tracking accuracy. In this article, an integrated Kalman filter for accurate ranging and tracking in wideband radar is proposed, using methods of phase-derived ranging and Bayesian inference. Besides the motion state, this integrated Kalman filter also introduces the complex-valued high-resolution range profile (HRRP), obtained by coherent integration in a sliding window, as a reference signal that incorporates the target's scattering distribution and phase characteristics. Corresponding kinetic equations are derived to predict the motion state and the reference signal at the next instant. A ranging process based on the received signal and the predicted reference signal estimates the innovation using phase-derived ranging and Bayesian inference, and a sequential update of the motion state is accomplished with the Kalman filter. In every recursion, the complex-valued reference signal is also updated by coherently integrating the latest pulses. The integrated Kalman filter makes full use of the high range resolution and phase information, improving both efficiency and precision compared with conventional approaches to ranging and wideband tracking. Implemented in a sequential manner, the integrated Kalman filter can be applied in real time, realizing simultaneous high-precision ranging and wideband tracking. Finally, simulated and real-measured experiments confirm its performance.
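The recursion the integrated filter extends is the standard Kalman predict/update step, sketched here for a constant-velocity range model; the paper's state additionally carries the complex HRRP reference signal, which is omitted in this illustration:

```python
# One Kalman filter step for a [range, range-rate] state with a
# scalar range measurement.
import numpy as np

def kalman_step(x, P, z, dt, q=1e-3, r=1e-2):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])              # range-only observation
    Q, R = q * np.eye(2), np.array([[r]])
    # Predict.
    x, P = F @ x, F @ P @ F.T + Q
    # Update with measurement z (the innovation is z - H x).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.atleast_1d(z) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```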
Land surface temperature (LST) is a key parameter for many fields of study. Currently, LST retrieved from satellite thermal infrared (TIR) measurements is attainable with an accuracy of about 1 K for most natural flat surfaces. However, over urban areas, TIR measurements are influenced by 3-D structures and their radiation, which can degrade the performance of existing LST retrieval algorithms. Therefore, quantitative models are needed to investigate this impact. Current 3-D radiative transfer models are generally based on time-consuming numerical integrations whose solutions are not analytical and are therefore difficult to exploit in physical LST retrieval methods for urban areas. This article proposes an analytical TIR radiative transfer model over urban (ATIMOU) areas that accounts for the impact of 3-D structures and their radiation. The magnitude of this impact on TIR measurements is investigated in detail, using ATIMOU, under various conditions. Simulations show that ignoring this impact can introduce a 1.87-K bias in the ground brightness temperature for a street canyon whose wall-height-to-road-width ratio is 2, with wall and road temperatures of 300 K, wall emissivity of 0.906, and road emissivity of 0.950. This bias reaches 4.60 K if the road emissivity decreases to 0.921 and the road temperature decreases to 260 K. ATIMOU is also compared to the discrete anisotropic radiative transfer (DART) model. A small mean absolute error of 0.10 K was found between the two models for the simulated ground brightness temperatures, indicating that ATIMOU is in good agreement with DART.
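The physical origin of the canyon bias can be illustrated with a toy single-bounce model: the road-leaving radiance gains a reflected wall-emission term proportional to the road-to-wall view factor. The view-factor value and the single-bounce truncation are simplifying assumptions for illustration, not the ATIMOU formulation:

```python
# Road-leaving radiance = emitted + (single-bounce) reflected wall emission.
import numpy as np

def planck(T, wavelength=10.5e-6):
    # Planck spectral radiance at the given wavelength [W m^-3 sr^-1].
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wavelength**5) / (np.exp(h * c / (wavelength * k * T)) - 1)

def road_leaving_radiance(eps_road, T_road, eps_wall, T_wall, f_road_wall):
    emitted = eps_road * planck(T_road)
    reflected = (1 - eps_road) * f_road_wall * eps_wall * planck(T_wall)
    return emitted + reflected

# Emissivities from the abstract; the view factor 0.6 is an assumption.
print(road_leaving_radiance(0.950, 300.0, 0.906, 300.0, f_road_wall=0.6))
```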
The ground-based microwave radiometer (MWR) retrieves temperature and humidity profiles with high temporal resolution up to a height of 10 km. Such profiles are critical for understanding the evolution of climate systems. To improve the accuracy of profile retrieval from MWR measurements, we developed a deep learning approach called the batch normalization and robust neural network (BRNN). In contrast to the traditional backpropagation neural network (BPNN), which has previously been applied to MWR profile retrieval, BRNN reduces overfitting and has a greater capacity to describe nonlinear relationships between MWR measurements and atmospheric structure information. Validation of BRNN against radiosonde data demonstrates good retrieval capability, with a root-mean-square error of 1.70 K for temperature, 11.72% for relative humidity (RH), and 0.256 g/m3 for water vapor density. A detailed comparison with various inversion methods (BPNN, extreme gradient boosting, support vector machine, ridge regression, and random forest) has also been conducted using the same training and test data sets. The comparison shows that BRNN significantly improves retrieval accuracy, particularly for temperature and RH near the surface.
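A minimal sketch of the network family described above, an MLP with batch normalization regressing a profile from brightness temperatures; the layer widths, channel/level counts, and the choice of a Huber loss as the "robust" component are assumptions, not the BRNN configuration:

```python
# Batch-normalized MLP mapping MWR brightness temperatures to a
# temperature profile, trained with a robust (Huber) loss.
import torch
import torch.nn as nn

def make_retrieval_net(n_channels=22, n_levels=58):
    return nn.Sequential(
        nn.Linear(n_channels, 128), nn.BatchNorm1d(128), nn.ReLU(),
        nn.Linear(128, 128), nn.BatchNorm1d(128), nn.ReLU(),
        nn.Linear(128, n_levels),   # one value per retrieval height
    )

net = make_retrieval_net()
loss_fn = nn.HuberLoss()            # less sensitive to outliers than MSE
tb = torch.randn(16, 22)            # a batch of (synthetic) measurements
profile = net(tb)
print(profile.shape)                # torch.Size([16, 58])
```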
Monitoring airports using remote sensing imagery requires us to first detect the airports and then perform airplane detection. Detecting airports and airplanes in large-scale remote sensing imagery is a significant and challenging task in the field of remote sensing. Although many detection algorithms have been developed for detecting airports and airplanes in remote sensing imagery, their processing efficiency does not meet the needs of real applications to large-scale remote sensing imagery. In recent years, deep learning techniques, such as deep convolutional neural networks (DCNNs), have achieved great progress in image recognition. However, training a DCNN needs a large number of training examples to accurately fit the data distribution, and annotating training examples in large-scale remote sensing imagery is time-consuming, which makes the pipeline inefficient. In this article, to overcome these two weaknesses, we propose a novel cycling data-driven framework for efficient and robust airport localization and airplane detection. The proposed method consists of three modules: cycling by example refinement, offline learning (OL), and online representation (OR), together referred to as COLOR. The OR module is a coarse-to-fine cascaded convolutional neural network, which is used to detect airports and airplanes. The example refinement (ER) module implements the cycling: it makes use of unlabeled remote sensing images and the corresponding predictions obtained by the OR module to generate training examples. The OL module uses the training examples from the ER module to update the OR module, further improving the performance. The COLOR framework was used to detect airplanes and airports in 512 large-scale Gaofen-2 (GF-2) remote sensing images with 29 200 × 27 620 pixels. The results showed that the proposed method obtained a mean average precision (mAP) of 88.32% for airplane detection. In addition, owing to the coarse-to-fine cascaded OR module, the proposed method is much faster than traditional approaches in real-world applications.
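The cycling loop can be sketched schematically: the current detector pseudo-labels unlabeled images, confident predictions become new training examples (ER), and the detector is retrained offline (OL). The detector interface, function names, and confidence threshold below are placeholders, not the COLOR implementation:

```python
# Schematic self-training cycle in the spirit of ER -> OL -> OR.
def color_cycle(detector, labeled_set, unlabeled_images,
                rounds=3, conf_thresh=0.9):
    train_set = list(labeled_set)
    for _ in range(rounds):
        # Example refinement (ER): harvest confident pseudo-labels.
        for img in unlabeled_images:
            boxes = [b for b in detector.detect(img) if b.score >= conf_thresh]
            if boxes:
                train_set.append((img, boxes))
        # Offline learning (OL): update the online representation (OR).
        detector = detector.retrain(train_set)
    return detector
```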