TOC Alert for Publication #36
Mon, 02/01/2021 - 00:00
Calibration of satellite-borne radiometers is a key issue for quantitative remote sensing, and its accuracy depends on the stability of the calibration source. Because it has no atmosphere and no biological activity, the lunar surface remains stable over the long term and may be a good candidate for thermal calibration. The microwave humidity sounder (MHS) onboard NOAA-18 made measurements of the disk-integrated brightness temperature (TB) of the Moon for phase angles between -80° and 50°. These NOAA-18 measurements have been studied to validate the TB model of the lunar surface. In this article, the near side of the Moon is divided into 900 subregions with a span of 6° x 6° in longitude and latitude. By solving the 1-D heat conduction equation with thermophysical parameters validated by the Diviner data of the Lunar Reconnaissance Orbiter (LRO), the temperature profiles of the regolith media in all 900 subregions are obtained. The loss tangents are inverted from the Chang'e-2 (CE-2) 37-GHz microwave TB data at noontime. Employing the fluctuation-dissipation theorem and the Wentzel-Kramers-Brillouin (WKB) approach, the microwave and millimeter-wave TBs of each subregion are simulated. Then, the weighted-average TB is disk-integrated from the 900 subregion TBs as a function of phase angle. These simulations well demonstrate the diurnal TB variation and its dependence on the frequency channels. It is found that the disk-integrated TB of the Moon in the MHS channels is sensitive to the full-width at half-maximum (FWHM) of the deep space view (DSV), which is corrected in our simulation, where the Moon is now taken as an extended target instead of a point-like object. Simulated disk-integrated TBs are compared with the corrected MHS TB data at 89, 157, and 183 GHz. The simulated TB agrees well with the MHS TB data at 89 and 183 GHz at various phase angles, but the maximum TB of the MHS data at 157 GHz is anomalously lower than that at 89 GHz. The influence of the loss tangent, emissivity, and pointing error is analyzed. Observations of the lunar TB require more careful design, and the technical parameters, especially the FWHM, should be well determined. Our model and numerical simulation provide a tool for TB calibration and validation.
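The disk-integration step lends itself to a short sketch. The snippet below is a minimal illustration, not the article's code: it assumes each 6° x 6° subregion contributes in proportion to its projected area toward the observer, and the grid layout and weighting are assumptions made for illustration only.

```python
import numpy as np

def disk_integrated_tb(tb_grid, lon_centers_deg, lat_centers_deg, sub_obs_lon_deg=0.0):
    """Disk-integrate subregion brightness temperatures (TBs).

    Assumed weighting: each subregion contributes in proportion to its
    projected area toward the observer (cosine of the emission angle times
    the cell's true area, which scales with cos(latitude))."""
    lon = np.radians(lon_centers_deg)          # shape (n_lon,)
    lat = np.radians(lat_centers_deg)          # shape (n_lat,)
    lon2d, lat2d = np.meshgrid(lon, lat)       # grid of subregion centers

    # Cosine of the emission angle toward the sub-observer point (visible side only).
    mu = np.cos(lat2d) * np.cos(lon2d - np.radians(sub_obs_lon_deg))
    mu = np.clip(mu, 0.0, None)

    # Cell area on the sphere scales with cos(latitude).
    weights = mu * np.cos(lat2d)
    return np.sum(weights * tb_grid) / np.sum(weights)

# Example: a uniform 30 x 30 grid of 250 K disk-integrates to about 250 K.
lons = np.arange(-87.0, 90.0, 6.0)   # 30 subregion centers in longitude
lats = np.arange(-87.0, 90.0, 6.0)   # 30 subregion centers in latitude
tb = np.full((30, 30), 250.0)
print(disk_integrated_tb(tb, lons, lats))
```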
Mon, 02/01/2021 - 00:00
Satellite-based passive microwave (PMW) remote sensing is an essential technique for clarifying long-term and large-scale distribution patterns of cloud water content (CWC). However, most CWC estimation methods are not implemented over land because of the high heterogeneity of land radiation, and the detailed characteristics of microwave (MW) radiative transfer between the land and the atmosphere, including clouds, have not been elucidated. This study aims to elucidate these characteristics and to reveal the accuracy of land emissivity representation necessary for adequate CWC estimation over land from satellite-based PMW under various CWC conditions. First, the important parameters related to MW radiative transfer between land and atmosphere at CWC-relevant frequencies in the presence of clouds are determined theoretically. Next, the relationship between errors in these parameters and the brightness temperatures used for CWC estimation is clarified through considerations based on radiative transfer equations. Then, ground-based PMW observations and numerical simulation are used to reveal the actual values of these important parameters and the size of the errors. Finally, the results show that for any cloud liquid water path (LWP) value greater than 1.6 kg/m² at 89 GHz and 5 kg/m² at 36 GHz, the heterogeneity of emissivity and radiation from the land surface can reasonably be neglected for CWC estimation. However, when LWP values are below these thresholds, the error in the representation of land emissivity should be kept below 0.015 for both 89- and 36-GHz data, and the volumetric soil moisture content should have an error below 5%-6% for both frequencies.
Mon, 02/01/2021 - 00:00
Generally, ground stress accumulates during earthquake (EQ) preparation. The trend of the microwave brightness temperature (TB) of rock with changing stress is an important factor for understanding the microwave anomalies associated with EQs. However, it is not yet clear whether a downtrend of rock TB is associated with increased stress. To confirm this, in this article, the instantaneous field of view of the microwave radiometer was identified first. Then, microwave observation experiments were conducted on granite samples at 6.6 GHz under cyclic loading and outdoor conditions with weak background radiation. It was found that, besides the uptrend and fluctuation, the downtrend of the granite samples' TB also correlates with stress, with an occurrence probability of 47.62% and a maximum rate of -0.038 K/MPa. The variation trends of TB with stress are not uniform across different areas of the same sample. To reveal the cause of this phenomenon, the permittivity of single-crystal quartz, one of the main mineral constituents of granite, was measured under compression loading applied perpendicular or parallel to the optical axis. For quartz, the real part of the permittivity rises (falls) when the loading is perpendicular (parallel) to the optical axis, causing the TB to fall (rise). The optic axes of minerals are randomly distributed in granite samples, which makes the stress-induced variation in the permittivity of the minerals also random, thereby resulting in the nonuniformity of stress-induced TB variation in granite samples. Finally, the implications of these results were discussed.
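The reported rate of TB change with stress (K/MPa) is, in essence, a slope. The sketch below uses made-up numbers purely to illustrate how such a rate could be estimated for one observation area; it is not the article's data or processing chain.

```python
import numpy as np

# Hypothetical illustration: estimate the TB-stress trend (K/MPa) for one
# observation area by a least-squares slope of TB against applied stress.
stress_mpa = np.array([10, 30, 50, 70, 90, 110], dtype=float)
tb_kelvin = np.array([290.10, 289.50, 288.80, 288.10, 287.30, 286.40])

slope, intercept = np.polyfit(stress_mpa, tb_kelvin, deg=1)
print(f"TB trend: {slope:.3f} K/MPa")   # negative slope indicates a downtrend with stress
```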
Mon, 02/01/2021 - 00:00
The radiance received by satellite sensors viewing the ocean is a mixed signal of the atmosphere and ocean. Accurate decomposition of the radiance components is crucial because any inclusion of atmospheric signal in the water-leaving radiance leads to an incorrect estimation of the oceanic parameters. This is especially true over turbid coastal waters, where the estimation of the radiance components is difficult. A layer removal scheme for atmospheric correction (LRSAC) has been developed that treats the atmospheric and oceanic components as a layer structure according to the path of sunlight through the Sun-Earth-satellite system. Compared with the normal coupled atmospheric column, the layer structure of Rayleigh and aerosols has a relatively small uncertainty, with a mean relative error (MRE) of 0.063%. Because the aerosol layer is placed between the Rayleigh layer and the ocean, a new Rayleigh lookup table (LUT) was regenerated using 6SV (Second Simulation of a Satellite Signal in the Solar Spectrum, Vector, version 3.2) based on zero reflectance at the ground, producing the pure Rayleigh reflectance without the Rayleigh-ocean interaction. The accuracy of the LRSAC was validated against in situ water-leaving reflectance, obtaining an MRE of 6.3%, a root-mean-square error (RMSE) of 0.0028, and a mean correlation coefficient of 0.86 based on 430 matchup pairs over the East China Sea. Results show that the LRSAC can be used to decompose the reflectance at the top of each layer for atmospheric correction over turbid coastal waters.
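The validation statistics quoted above (MRE, RMSE, correlation) can be computed from matchup pairs as sketched below; the exact definitions used in the article may differ slightly, so treat this as an assumption-laden illustration.

```python
import numpy as np

def validation_stats(retrieved, in_situ):
    """Matchup statistics for validating water-leaving reflectance:
    mean relative error (MRE, %), root-mean-square error (RMSE), and the
    Pearson correlation coefficient."""
    retrieved = np.asarray(retrieved, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    mre = np.mean(np.abs(retrieved - in_situ) / in_situ) * 100.0
    rmse = np.sqrt(np.mean((retrieved - in_situ) ** 2))
    r = np.corrcoef(retrieved, in_situ)[0, 1]
    return mre, rmse, r

# Toy matchup example (values are illustrative only).
mre, rmse, r = validation_stats([0.012, 0.020, 0.031], [0.011, 0.022, 0.030])
print(round(mre, 1), round(rmse, 4), round(r, 3))
```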
Mon, 02/01/2021 - 00:00
Retrieving surface properties from airborne hyperspectral imagery requires the use of an atmospheric correction model to compensate for atmospheric scattering and absorption. In this study, a solar spectral irradiance monitor (SSIM) from the University of Colorado Boulder was flown on a Twin Otter aircraft with the National Ecological Observatory Network's (NEON) imaging spectrometer. Upwelling and downwelling irradiance observations from the SSIM were used as boundary conditions for the radiative transfer model used to atmospherically correct NEON imaging spectrometer data. Using simultaneous irradiance observations as boundary conditions removed the need to model the entire atmospheric column so that atmospheric correction required modeling only the atmosphere below the aircraft. For overcast conditions, incorporating SSIM observations into the atmospheric correction process reduced root-mean-square (rms) error in retrieved surface reflectance by up to 57% compared with a standard approach. In addition, upwelling irradiance measurements were used to produce an observation-based estimate of the adjacency effect. Under cloud-free conditions, this correction reduced the rms error of surface reflectance retrievals by up to 27% compared with retrievals that ignored adjacency effects.
Mon, 02/01/2021 - 00:00
The high dimensionality of hyperspectral images increases computational cost, which challenges image processing. Deep learning models have achieved extraordinary success in various image processing domains and are effective in improving classification performance. However, considerable challenges remain in fully extracting the abundant spectral information, such as the combination of spatial and spectral information. In this article, a novel unsupervised hyperspectral feature extraction architecture based on a spatial revising variational autoencoder (AE), UHfeSRVAE, is proposed. The core concept of this method is to extract spatial features via purpose-designed networks from multiple aspects for the revision of the obtained spectral features. A multilayer encoder extracts spectral features, and latent space vectors are then generated from the obtained means and standard deviations. Spatial features based on local sensing and sequential sensing are extracted using multilayer convolutional neural networks and long short-term memory networks, respectively, which revise the obtained mean vectors. In addition, the proposed loss function guarantees the consistency of the probability distributions of the various latent spatial features obtained from the same neighboring region. Several experiments are conducted on three publicly available hyperspectral data sets, and the experimental results show that UHfeSRVAE achieves better classification results than the comparison methods. The combination of spatial feature extraction models and deep AE models is designed based on the unique characteristics of hyperspectral images, which contributes to the performance of this method.
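A rough sketch of the underlying idea follows: a variational encoder produces a mean and log-variance, a spatial branch revises the mean, and a latent vector is sampled via the reparameterization trick. The additive form of the revision and all dimensions are assumptions for illustration, not the article's architecture.

```python
import numpy as np

def reparameterize(mean, log_var, rng=None):
    """Reparameterization trick used by variational autoencoders:
    z = mean + std * epsilon, with epsilon ~ N(0, I)."""
    rng = np.random.default_rng(0) if rng is None else rng
    std = np.exp(0.5 * log_var)
    return mean + std * rng.standard_normal(mean.shape)

# Sketch of the spatial-revision idea (the additive form and sizes are
# assumptions): spatial features extracted from a pixel's neighborhood
# revise the spectral mean before the latent code is sampled.
spectral_mean = np.zeros(16)            # from the spectral (multilayer) encoder
spectral_log_var = np.zeros(16)
spatial_revision = 0.1 * np.ones(16)    # from the CNN / LSTM spatial branches

revised_mean = spectral_mean + spatial_revision
z = reparameterize(revised_mean, spectral_log_var)
print(z.shape)                          # (16,) latent feature used for classification
```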
Mon, 02/01/2021 - 00:00
Deep learning has shown huge potential in the field of hyperspectral image (HSI) classification. However, most deep learning models depend heavily on the quantity of available training samples. In this article, we propose a multitask generative adversarial network (MTGAN) to alleviate this issue by taking advantage of the rich information in unlabeled samples. Specifically, we design a generator network to simultaneously undertake two tasks: a reconstruction task and a classification task. The former aims at reconstructing an input hyperspectral cube, including both labeled and unlabeled ones, whereas the latter attempts to recognize the category of the cube. Meanwhile, we construct a discriminator network to discriminate whether an input sample comes from the real distribution or is a reconstruction. Through adversarial learning, the generator network produces real-like cubes, thus indirectly improving the discrimination and generalization ability of the classification task. More importantly, in order to fully exploit the useful information in shallow layers, we adopt skip-layer connections in both the reconstruction and classification tasks. The proposed MTGAN model is implemented on three standard HSIs, and the experimental results show that it achieves higher performance than other state-of-the-art deep learning models.
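One plausible way to combine the two generator tasks with an adversarial term is sketched below; the loss weights and exact formulation are assumptions rather than the article's implementation.

```python
import numpy as np

def mtgan_generator_loss(x, x_rec, logits, label, d_fake,
                         w_rec=1.0, w_cls=1.0, w_adv=0.1):
    """Sketch of a multitask generator objective (weights are assumptions):
    reconstruct the input cube, classify labeled cubes, and add an adversarial
    term that rewards reconstructions the discriminator scores as real
    (d_fake in (0, 1))."""
    rec_loss = np.mean((x - x_rec) ** 2)                 # reconstruction task
    probs = np.exp(logits - np.max(logits))
    probs = probs / probs.sum()                          # softmax over classes
    cls_loss = -np.log(probs[label] + 1e-12) if label is not None else 0.0
    adv_loss = -np.log(d_fake + 1e-12)                   # fool the discriminator
    return w_rec * rec_loss + w_cls * cls_loss + w_adv * adv_loss

# Unlabeled cubes contribute only the reconstruction and adversarial terms.
x = np.random.rand(9, 9, 100)                            # a toy hyperspectral cube
loss = mtgan_generator_loss(x, x + 0.01, np.array([1.0, 2.0, 0.5]), None, 0.7)
print(loss)
```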
Mon, 02/01/2021 - 00:00
The rapid increase in the number of remote sensing sensors makes it possible to develop multisource feature extraction and fusion techniques to improve the classification accuracy of surface materials. It has been reported that light detection and ranging (LiDAR) data can contribute complementary information to hyperspectral images (HSIs). In this article, a multiple feature-based superpixel-level decision fusion (MFSuDF) method is proposed for HSI and LiDAR data classification. Specifically, superpixel-guided kernel principal component analysis (KPCA) is first designed and applied to the HSIs to both reduce the dimensionality and suppress the noise. Next, 2-D and 3-D Gabor filters are, respectively, employed on the KPCA-reduced HSIs and the LiDAR data to obtain discriminative Gabor features, with both the magnitude and phase information taken into account. Three different modules are thus obtained: the raw data-based feature cube (concatenated KPCA-reduced HSIs and LiDAR data), the Gabor magnitude feature cube, and the Gabor phase feature cube (concatenation of the corresponding Gabor features extracted from the KPCA-reduced HSIs and LiDAR data). After that, a random forest (RF) classifier and quadrant bit coding (QBC) are introduced to separately accomplish the classification task on the three extracted feature cubes. Meanwhile, two superpixel maps are generated by applying the multichannel simple noniterative clustering (SNIC) and entropy rate superpixel segmentation (ERS) algorithms to the combined HSI and LiDAR data, which are then used to regularize the three classification maps. Finally, a weighted majority voting-based decision fusion strategy is incorporated to effectively enhance the joint use of the multisource data. A series of experiments are conducted on three real-world data sets to demonstrate the effectiveness of the proposed MFSuDF approach. The experimental results show that MFSuDF can achieve overall accuracies of 73.64%, 93.88%, and 74.11% on the Houston, Trento, and Missouri University and University of Florida (MUUFL) Gulfport data sets, respectively, when there are only three training samples per class.
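The final fusion step can be illustrated with a small sketch of weighted majority voting over several classification maps; the weighting scheme here is an assumption (e.g., each classifier's estimated accuracy), and the superpixel regularization step is omitted.

```python
import numpy as np

def weighted_majority_vote(label_maps, weights, n_classes):
    """Fuse several classification maps by weighted majority voting.
    label_maps: list of (H, W) integer label arrays; weights: one weight per
    map. Returns the fused (H, W) label map."""
    h, w = label_maps[0].shape
    votes = np.zeros((n_classes, h, w))
    for labels, wt in zip(label_maps, weights):
        for c in range(n_classes):
            votes[c] += wt * (labels == c)
    return np.argmax(votes, axis=0)

# Example with three module-wise maps, as in the abstract.
maps = [np.random.randint(0, 5, (10, 10)) for _ in range(3)]
fused = weighted_majority_vote(maps, weights=[0.9, 0.8, 0.7], n_classes=5)
print(fused.shape)   # (10, 10)
```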
Mon, 02/01/2021 - 00:00
Hyperspectral unmixing (HU) is a crucial technique for exploiting remotely sensed hyperspectral data; it aims at estimating a set of spectral signatures, called endmembers, and their corresponding proportions, called abundances. The performance of HU is often seriously degraded by the various kinds of noise present in hyperspectral images (HSIs). Most existing robust HU methods are based on the assumption that noise or outliers exist in only one form, e.g., band noise or pixel noise. However, in real-world applications, HSIs are unavoidably corrupted by noisy bands and noisy pixels simultaneously, which requires robust HU in both the spatial and spectral dimensions. Meanwhile, the sparsity of abundances is an inherent property of HSIs, and different regions in an HSI may possess different sparsity levels. This article proposes a correntropy-based spatial-spectral robust sparsity-regularized unmixing model that simultaneously achieves 2-D robustness and an adaptive weighted sparsity constraint on the abundances. The update rules of the proposed model are efficient to implement and are carried out by a half-quadratic technique. Experimental results on both synthetic and real hyperspectral data demonstrate the superiority of the proposed method over state-of-the-art methods.
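As a rough sketch of the robustness mechanism, a half-quadratic treatment of a correntropy loss reduces to weighted least squares with weights that shrink for large residuals; the kernel width and the details below are assumptions, not the article's exact update rules.

```python
import numpy as np

def correntropy_weights(residuals, sigma):
    """Half-quadratic surrogate for a correntropy loss: each band or pixel
    residual gets a weight exp(-r^2 / (2 sigma^2)), so heavily corrupted
    atoms are down-weighted in the next weighted least-squares unmixing step."""
    return np.exp(-np.asarray(residuals, dtype=float) ** 2 / (2.0 * sigma ** 2))

# Example: a large residual (outlier) receives a near-zero weight.
print(correntropy_weights([0.01, 0.05, 1.5], sigma=0.1))
```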
Mon, 02/01/2021 - 00:00
Anomaly detection in hyperspectral imagery has been an active topic in remote sensing applications. It aims at identifying anomalous targets whose spectra differ from those of the surrounding background. Therefore, an effective detector should be able to distinguish anomalies, especially weak ones, from the background and noise. In this article, we propose a novel method for hyperspectral anomaly detection based on a total variation (TV) and sparsity regularized decomposition model. This model decomposes the hyperspectral imagery into three components: background, anomaly, and noise. In order to distinguish these components effectively, a union dictionary consisting of both background and potential anomalous atoms is utilized to represent the background and anomalies, respectively. Moreover, TV and sparsity-inducing regularizations are incorporated to facilitate the separation. In addition, we present a new strategy for constructing the union dictionary with density peak-based clustering. The proposed detector is evaluated on both simulated and real hyperspectral data sets, and the experimental results demonstrate its superiority over several traditional and state-of-the-art anomaly detectors.
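A generic building block of such sparsity-regularized decompositions is the soft-thresholding operator, sketched below; it illustrates only the sparsity update, not the full TV-regularized solver of the article.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding, the proximal operator of the l1 (sparsity) penalty;
    a common step for isolating the sparse anomaly component in
    decomposition models of this kind."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

print(soft_threshold(np.array([-0.3, 0.05, 0.8]), lam=0.1))   # [-0.2, 0.0, 0.7]
```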
Mon, 02/01/2021 - 00:00
In this article, a single-spectrum-driven binary-class sparse representation target detector (SSBSTD) via target and background dictionary construction (BDC) is proposed. The SSBSTD builds on the binary-class sparse representation (BSR) model. Because a background spectrum usually lies in the low-dimensional subspace spanned by background samples and a target spectrum likewise lies in the low-dimensional subspace spanned by target samples, only background samples should be used to sparsely represent the test pixel under the target-absent hypothesis, and only samples from the target dictionary under the target-present hypothesis. To alleviate the problem of insufficient target samples for the sparse representation model, this article proposes a predetection method to construct the target dictionary from the given target spectrum. With regard to the BDC, we propose a classification-based approach to generate a global overcomplete background dictionary. The detection output is the residual difference between the two sparse representations. Extensive experiments were performed on four benchmark hyperspectral images, and the experimental results indicate that our SSBSTD algorithm achieves superior detection performance.
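The detection rule can be sketched as the residual difference between representations over the background and target dictionaries. In the snippet below, ridge regression stands in for the sparse coding step, so this is an illustration of the principle rather than the SSBSTD algorithm itself.

```python
import numpy as np

def bsr_detector(pixel, d_bg, d_tg, lam=1e-3):
    """Represent the test pixel with the background dictionary d_bg and with
    the target dictionary d_tg, and output the residual difference; a large
    value suggests the pixel is better explained by target atoms."""
    def residual(d):
        coef = np.linalg.solve(d.T @ d + lam * np.eye(d.shape[1]), d.T @ pixel)
        return np.linalg.norm(pixel - d @ coef)
    return residual(d_bg) - residual(d_tg)

# Toy example with random dictionaries (for illustration only).
rng = np.random.default_rng(0)
d_bg = rng.random((100, 30))      # background dictionary (bands x atoms)
d_tg = rng.random((100, 5))       # target dictionary from predetection
pixel = d_tg @ rng.random(5)      # a pixel well explained by target atoms
print(bsr_detector(pixel, d_bg, d_tg))   # positive values indicate a likely target
```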
Mon, 02/01/2021 - 00:00
The presence of mixed pixels in hyperspectral data makes unmixing a key step for many applications. Unsupervised unmixing needs to estimate the number of endmembers, their spectral signatures, and their abundances at each pixel. Since both the endmember and abundance matrices are unknown, unsupervised unmixing can be considered a blind source separation problem and can be solved by nonnegative matrix factorization (NMF). However, most existing NMF unmixing methods use a least-squares objective function that is sensitive to noise and outliers. To deal with different types of noise in hyperspectral data, such as noise in different bands (band noise), noise in different pixels (pixel noise), and noise in different elements of the hyperspectral data matrix (element noise), we propose three self-paced learning-based NMF (SpNMF) unmixing models in this article. The SpNMF models replace the least-squares loss in the standard NMF model with weighted least-squares losses and adopt a self-paced learning (SPL) strategy to learn the weights adaptively. In each iteration of SPL, atoms (bands, pixels, or elements) with weight zero are considered complex atoms and are excluded, while atoms with nonzero weights are considered easy atoms and are included in the current unmixing model. By gradually enlarging the size of the current model set, SpNMF selects atoms from easy to complex. Usually, noisy or outlying atoms are complex atoms that are excluded from the unmixing model; thus, SpNMF models are robust to noise and outliers. Experimental results on simulated and two real hyperspectral data sets demonstrate that our proposed SpNMF methods are more accurate and robust than existing NMF methods, especially in the case of heavy noise.
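The hard variant of self-paced weighting is simple enough to sketch: atoms whose current loss is below an "age" threshold get weight 1, the rest get weight 0, and the threshold grows between iterations. The SpNMF models may use different (e.g., soft) weighting schemes, so the following is only an assumed illustration.

```python
import numpy as np

def spl_weights(losses, age):
    """Hard self-paced weighting: atoms (bands, pixels, or elements) whose
    current reconstruction loss is below the 'age' threshold are treated as
    easy (weight 1); the rest are excluded (weight 0). Increasing 'age'
    between iterations gradually admits more complex atoms."""
    return (np.asarray(losses) <= age).astype(float)

# Example: as the age grows, more atoms enter the unmixing model.
losses = np.array([0.2, 0.5, 1.1, 3.0])
for age in (0.6, 1.5, 3.5):
    print(age, spl_weights(losses, age))
```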
Mon, 02/01/2021 - 00:00
Random Gaussian noise and striping artifacts are common phenomena in hyperspectral images (HSIs). In this article, an effective restoration method is proposed to simultaneously remove Gaussian noise and stripes by merging a denoising and a destriping submodel. The denoising submodel performs multiband denoising, i.e., Gaussian noise removal, that accounts for variations in Gaussian noise between bands in order to restore the striped HSI from the corrupted image, where the striped HSI is constrained by a weighted nuclear norm. For the destriping submodel, we propose an adaptive anisotropic total variation method to adaptively smooth the striped HSI, and we apply, for the first time, the truncated nuclear norm to constrain the rank of the stripes to 1. After merging the two submodels, an overall image restoration model is obtained for both denoising and destriping. To solve the resulting optimization problem, the alternating direction method of multipliers (ADMM) is carefully schemed to perform an alternating and mutually constrained execution of denoising and destriping. Experiments on both synthetic and real data demonstrate the effectiveness and superiority of the proposed approach.
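One building block of such low-rank models is (weighted) singular-value thresholding, the proximal operator of a weighted nuclear norm that typically appears inside ADMM schemes of this kind; the sketch below assumes nonincreasing weights and is not the article's full solver.

```python
import numpy as np

def weighted_svt(matrix, weights):
    """Weighted singular-value thresholding: shrink each singular value by
    its weight and rebuild the matrix, which promotes a low-rank estimate."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (u * s_shrunk) @ vt

# Toy example: a noisy rank-1 stripe pattern is driven back toward rank 1.
rng = np.random.default_rng(0)
stripes = np.outer(np.ones(50), rng.random(60))
low_rank = weighted_svt(stripes + 0.01 * rng.standard_normal((50, 60)), weights=0.5)
print(np.linalg.matrix_rank(low_rank, tol=1e-6))   # close to 1
```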
Mon, 02/01/2021 - 00:00
Oblique photogrammetry with multiple cameras onboard an unmanned aerial vehicle (UAV) has been widely applied in the construction of photorealistic three-dimensional (3-D) urban models, but obtaining the optimal building facade texture images (BFTIs) from the abundant oblique images remains a challenging problem. This article presents an optimization method for selecting BFTIs from the image flows acquired by five oblique cameras onboard a UAV. The proposed method uses multiobjective functions, consisting of the smallest occlusion of the BFTI and the largest facade texture area, to select the optimal BFTIs. Geometric correction, color equalization, and texture repair are also considered to correct BFTI distortions, uneven color, and occlusion by other objects such as trees. Visual C++ and OpenGL under the Windows operating system are used to implement the proposed methods and algorithms. The proposed method is verified using 49 800 oblique images collected by five cameras onboard a Matrice 600 Pro (M600 Pro) UAV system over Dongguan Street in the City of Ji'nan, Shandong, China. To restore the partially occluded textures, different thresholds and window sizes were tested, and a template window of 200 x 200 pixels is recommended. With the proposed method, 2740 BFTIs are extracted from the 49 800 oblique images. Compared with the Pix4Dmapper and Smart 3-D methods, it can be concluded that the optimal texture can be selected from the image flow acquired by multiple cameras onboard a UAV, and the memory occupied by the original BFTIs is reduced by approximately 95%.
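The multiobjective selection can be sketched as a simple scoring rule over candidate BFTIs; the weights, field names, and the way occlusion and facade area are normalized below are assumptions, not the article's objective functions.

```python
def select_best_bfti(candidates, w_occlusion=0.5, w_area=0.5):
    """Pick the candidate facade texture with the best trade-off between
    small occlusion and large facade texture area. Each candidate is a dict
    with 'occlusion_ratio' in [0, 1] and 'facade_area' in pixels; these
    field names are illustrative assumptions."""
    max_area = max(c["facade_area"] for c in candidates) or 1
    def score(c):
        return (w_occlusion * (1.0 - c["occlusion_ratio"])
                + w_area * c["facade_area"] / max_area)
    return max(candidates, key=score)

# Example: the second image wins (much less occluded, nearly as large).
images = [{"occlusion_ratio": 0.30, "facade_area": 90000},
          {"occlusion_ratio": 0.05, "facade_area": 85000}]
print(select_best_bfti(images))
```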
Mon, 02/01/2021 - 00:00
Ship detection in remote sensing plays an important role in civil and military fields. Owing to the complex background and uncertain orientation of ships, ship detection is challenging for commonly used object-detection methods. In this article, a new framework for detecting ships of arbitrary orientation is proposed based on an improvement of the Faster region-based convolutional network (R-CNN), in which the shape of the bounding box is described by three sides, namely, the vertical side, the horizontal side, and the short side. The inclination of the ship is obtained by calculating the arc-tangent of the ratio of the vertical side to the horizontal side. First, the better-performing ResNet-101 is adopted to extract features over the entire image, which are shared by the region proposal network (RPN) and the head network. Then, multidirection proposal regions that may contain ships are generated by the RPN. Next, the global and local features of the proposal regions are combined into whole-region features by a multiregion feature-fusion (MFF) module, which provides more detailed information about the regions. Finally, the head network uses the whole-region features for bounding-box recognition through multitask learning, including classification, regression, and incline direction prediction (left or right). The proposed method is tested and compared with other state-of-the-art ship-detection methods on two open remote-sensing data sets and on several large-scale real images. The experimental results validate that the proposed approach achieves better performance.
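The inclination computation described above is essentially a one-liner; the sketch below assumes a sign convention for the left/right incline, which the abstract does not specify.

```python
import math

def ship_inclination_deg(vertical_side, horizontal_side, incline_left=False):
    """Inclination of the ship's bounding box from the arc-tangent of the
    vertical side to the horizontal side; the sign convention for a
    left/right incline is an assumption."""
    angle = math.degrees(math.atan2(vertical_side, horizontal_side))
    return -angle if incline_left else angle

print(ship_inclination_deg(30.0, 40.0))   # about 36.87 degrees
```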
Mon, 02/01/2021 - 00:00
Image stitching aims to generate a natural, seamless, high-resolution panoramic image free of distortions or artifacts as quickly as possible. In this article, we propose a new seam cutting strategy based on superpixels for unmanned aerial vehicle (UAV) image stitching. Specifically, we decompose the problem into three steps: image registration, seam cutting, and image blending. First, we employ adaptive as-natural-as-possible (AANAP) warps for registration, obtaining two aligned images in the same coordinate system. Then, we propose a novel superpixel-based energy function that integrates color difference, gradient difference, and texture complexity information to search for a perceptually optimal seam located in continuous areas of high similarity. We apply the graph cut algorithm to solve this problem and thereby conceal artifacts in the overlapping area. Finally, we utilize a superpixel-based color blending approach to eliminate visible seams and achieve natural color transitions. Experimental results demonstrate that our method can effectively and efficiently realize seamless stitching and is superior to several state-of-the-art methods in UAV image stitching.
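The seam energy can be sketched per pixel as a weighted combination of color difference, gradient difference, and texture complexity over the overlap; the weights and the particular difference measures below are assumptions, and the superpixel grouping and graph cut steps are left to off-the-shelf solvers.

```python
import numpy as np

def seam_energy(img_a, img_b, w_color=1.0, w_grad=1.0, w_tex=0.5):
    """Per-pixel energy over the overlap of two aligned images (H, W, 3),
    combining color difference, gradient difference, and a simple local
    texture proxy; a sketch of the idea, not the article's exact
    superpixel-based formulation."""
    color = np.linalg.norm(img_a - img_b, axis=2)
    def grad_mag(img):
        gray = img.mean(axis=2)
        gy, gx = np.gradient(gray)
        return np.hypot(gx, gy)
    grad = np.abs(grad_mag(img_a) - grad_mag(img_b))
    tex = 0.5 * (grad_mag(img_a) + grad_mag(img_b))   # crude texture-complexity proxy
    return w_color * color + w_grad * grad + w_tex * tex
```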
Mon, 02/01/2021 - 00:00
As a fundamental and critical task in feature-based remote sensing image registration, feature matching refers to establishing reliable point correspondences between two images of the same scene. In this article, we propose a simple yet efficient method termed linear adaptive filtering (LAF) for both rigid and nonrigid feature matching of remote sensing images and apply it to the image registration task. Our algorithm starts by establishing putative feature correspondences based on local descriptors and then focuses on removing outliers using a geometric consistency prior together with filtering and denoising theory. Specifically, we first grid the correspondence space into several nonoverlapping cells and calculate a typical motion vector for each one. Subsequently, we remove false matches by checking the consistency between each putative match and the typical motion vector in the corresponding cell, which is achieved by a Gaussian kernel convolution operation. By refining the typical motion vector in an iterative manner, we further introduce a progressive, coarse-to-fine strategy to gradually improve the matching accuracy. In addition, an adaptive parameter setting strategy and posterior probability estimation based on the expectation-maximization algorithm enhance the robustness of our method to different data. Most importantly, our method is quite efficient, as the gridding strategy enables it to achieve linear time complexity. Consequently, sparse point-based tasks achieved by deep learning techniques may draw inspiration from our method. Extensive feature matching and image registration experiments on several remote sensing data sets demonstrate the superiority of our approach over the state of the art.
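The gridding-and-consistency idea can be sketched as below; the grid size, kernel width, and threshold are assumptions, and the iterative coarse-to-fine and expectation-maximization refinements of the article are omitted.

```python
import numpy as np

def filter_matches(pts_src, pts_dst, grid=8, sigma=20.0, tau=0.5):
    """Grid-based motion-consistency filtering: assign putative matches to
    grid cells, take each cell's mean motion vector as the typical motion,
    and keep matches whose Gaussian-kernel similarity to that typical motion
    exceeds tau."""
    motion = pts_dst - pts_src
    span = pts_src.max(axis=0) - pts_src.min(axis=0) + 1e-9
    cells = np.floor((pts_src - pts_src.min(axis=0)) / span * (grid - 1e-9)).astype(int)
    cell_id = cells[:, 0] * grid + cells[:, 1]

    keep = np.zeros(len(pts_src), dtype=bool)
    for cid in np.unique(cell_id):
        idx = np.where(cell_id == cid)[0]
        typical = motion[idx].mean(axis=0)
        dist2 = np.sum((motion[idx] - typical) ** 2, axis=1)
        keep[idx] = np.exp(-dist2 / (2.0 * sigma ** 2)) > tau
    return keep

# Toy example: a mostly consistent translation with small noise is retained.
rng = np.random.default_rng(0)
src = rng.uniform(0, 500, (200, 2))
dst = src + np.array([12.0, -7.0]) + rng.normal(0, 1, (200, 2))
print(filter_matches(src, dst).mean())   # fraction of matches kept
```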
Mon, 02/01/2021 - 00:00
By studying the spectral reflectance features of different land cover types and leveraging primarily the “BLUE” band along with the “RED” and “NIR” bands, this article introduces a new built-up index, the powered B1 built-up index (PB1BI). The proposed index, while being conceptually simple and computationally inexpensive, can efficiently extract built-up areas from Landsat7 satellite images. For Landsat7 imagery, the classification performance of the proposed index, along with a support vector machine (SVM), an artificial neural network (ANN), and three existing built-up indices, has been examined for three study sites of 1° latitude x 1° longitude (approximately 12,100 sq. km) from three diverse geographical regions in India. The computed M-statistic for PB1BI is consistently greater than 1.80, indicating better spectral separability between the built-up and nonbuilt-up classes. In order to improve the performance of the built-up indices, this article also suggests a bootstrapping method for threshold estimation in addition to the existing Otsu's method. It has been found that using the bootstrapping method instead of Otsu's method for threshold estimation improves the classification performance of built-up indices by up to 17.75% and 40.49% in terms of overall accuracy and kappa coefficient, respectively. For the validation set, the average overall accuracy (97.45%) and kappa coefficient (0.907) of PB1BI for the considered study sites are not only significantly higher than those of the existing indices but also comparable with those of SVM (99.10% and 0.942) and ANN (87.24% and 0.450). This article also shows that the proposed index provides stable performance for multitemporal analysis of the study sites and is able to capture the growth of built-up regions over time. The classification performance of PB1BI has also been verified for Landsat8 imagery across 11 study sites on different continents around the globe, and the results show overall accuracy and kappa consistently above 90% and 0.75, respectively. For the considered study sites, the reported average accuracy and kappa of PB1BI for built-up classification using Landsat8 satellite data are 95.7151% and 0.8843, respectively.
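Threshold estimation for a built-up index can be sketched as below: a standard Otsu threshold on the index histogram, plus a simple bootstrap that averages Otsu thresholds over resampled index values. The abstract does not spell out the article's bootstrapping procedure, so the bootstrap here is an assumed illustration.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes between-class
    variance of the index histogram."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = w[:k].sum(), w[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (w[:k] * centers[:k]).sum() / w0
        m1 = (w[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

def bootstrap_threshold(values, n_boot=100, rng=None):
    """Assumed bootstrap: resample the index values with replacement and
    average the per-sample Otsu thresholds."""
    rng = np.random.default_rng(0) if rng is None else rng
    values = np.asarray(values).ravel()
    thresholds = [otsu_threshold(rng.choice(values, size=values.size))
                  for _ in range(n_boot)]
    return float(np.mean(thresholds))

# Toy bimodal index distribution (illustrative only).
index_img = np.concatenate([np.random.normal(0.2, 0.05, 5000),
                            np.random.normal(0.7, 0.05, 5000)])
print(otsu_threshold(index_img), bootstrap_threshold(index_img, n_boot=20))
```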
Mon, 02/01/2021 - 00:00
Deep neural networks, which can learn representative and discriminative features from data in a hierarchical manner, have achieved state-of-the-art performance in remote sensing scene classification. Despite the great success of deep learning algorithms, their vulnerability to adversarial examples deserves special attention. In this article, we systematically analyze the threat of adversarial examples to deep neural networks for remote sensing scene classification. Both targeted and untargeted attacks are performed to generate subtle adversarial perturbations that are imperceptible to a human observer but can easily fool deep learning models. By simply adding these perturbations to the original high-resolution remote sensing (HRRS) images, adversarial examples can be generated, with only slight differences between the adversarial examples and the originals. An intriguing discovery in our study is that most of these adversarial examples are misclassified into the wrong category by state-of-the-art deep neural networks with very high confidence. This phenomenon may limit the practical deployment of these deep learning models in the safety-critical remote sensing field. To address this problem, the adversarial training strategy is further investigated in this article; it significantly increases the resistance of deep models to adversarial examples. Extensive experiments on three benchmark HRRS image data sets demonstrate that while most well-known deep neural networks are sensitive to adversarial perturbations, the adversarial training strategy helps alleviate their vulnerability to adversarial examples.
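A common recipe for crafting such perturbations is the fast gradient sign method, sketched below; the abstract does not state which attacks were used, so this is only one representative example, and grad_wrt_input is assumed to come from the classifier's backward pass.

```python
import numpy as np

def fgsm_example(image, grad_wrt_input, epsilon=2.0 / 255.0):
    """One common way to craft an untargeted adversarial example (the fast
    gradient sign method). grad_wrt_input is the gradient of the
    classification loss with respect to the input image."""
    perturbed = image + epsilon * np.sign(grad_wrt_input)
    return np.clip(perturbed, 0.0, 1.0)   # keep pixels in the valid range

# Toy usage with a stand-in gradient (illustration only).
img = np.random.rand(64, 64, 3)
grad = np.random.randn(64, 64, 3)
adv = fgsm_example(img, grad)
print(np.abs(adv - img).max() <= 2.0 / 255.0 + 1e-9)   # True
```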
Mon, 02/01/2021 - 00:00
Super-resolution (SR) techniques play a crucial role in increasing the spatial resolution of remote sensing data and overcoming the physical limitations of spaceborne imaging systems. Although convolutional neural network (CNN)-based methods have obtained good performance, they show limited capacity when coping with large-scale super-resolving tasks. The more complicated spatial distribution of remote sensing data further increases the difficulty of reconstruction. This article develops a dense-sampling super-resolution network (DSSR) to explore large-scale SR reconstruction of remote sensing imagery. Specifically, a dense-sampling mechanism, which reuses an upscaler to upsample multiple low-dimension features, is presented so that the network jointly considers multilevel priors when performing reconstruction. A wide feature attention block (WAB), which incorporates wide activation and an attention mechanism, is introduced to enhance the representation ability of the network. In addition, a chain training strategy is proposed to further optimize the performance of large-scale models by borrowing knowledge from pretrained small-scale models. Extensive experiments demonstrate the effectiveness of the proposed methods and show that the DSSR outperforms state-of-the-art models in both quantitative evaluation and visual quality.
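The dense-sampling idea, reusing a single upscaler across several feature maps and fusing the results, can be sketched crudely as below; the nearest-neighbor upscaler and the averaging fusion are stand-in assumptions, not the learned modules of the DSSR.

```python
import numpy as np

def upscale2x(feat):
    """Stand-in upscaler (nearest-neighbor); in the DSSR this would be a
    learned upsampling module shared across features."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def dense_sampling_fuse(features):
    """Apply the same upscaler to several low-dimension feature maps and
    fuse the upsampled results (here by simple averaging, an assumption) so
    the reconstruction draws on multilevel priors."""
    return np.mean([upscale2x(f) for f in features], axis=0)

# Toy usage: three 16x16 feature maps fused into one 32x32 map.
feats = [np.random.rand(16, 16) for _ in range(3)]
print(dense_sampling_fuse(feats).shape)   # (32, 32)
```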