Feed aggregator

DML-GANR: Deep Metric Learning With Generative Adversarial Network Regularization for High Spatial Resolution Remote Sensing Image Retrieval

Training with only a small number of labeled samples can save considerable manpower and material resources, especially as the volume of high spatial resolution remote sensing images (HSR-RSIs) grows rapidly. However, many deep models overfit when only a few labeled samples are available, which can degrade HSR-RSI retrieval accuracy. To obtain more accurate HSR-RSI retrieval with small training sets, we develop a deep metric learning approach with generative adversarial network regularization (DML-GANR) for HSR-RSI retrieval. DML-GANR starts from a high-level feature extraction (HFE) module composed of convolutional layers and fully connected (FC) layers. Each FC layer is trained with deep metric learning (DML) to maximize interclass variation and minimize intraclass variation. A generative adversarial network (GAN) is adopted to mitigate overfitting and to validate the quality of the extracted high-level features. DML-GANR is optimized through a customized procedure that yields the optimal parameters. The experimental results on three data sets demonstrate the superior performance of DML-GANR over state-of-the-art techniques in HSR-RSI retrieval.
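The abstract does not spell out the exact DML objective, but a minimal sketch of a margin-based metric-learning loss of this kind (penalizing intraclass distances while pushing interclass distances beyond a margin) is shown below; the embedding size, margin, and loss form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def metric_learning_loss(embeddings, labels, margin=1.0):
    """Illustrative DML loss: pull same-class embeddings together and push
    different-class embeddings at least `margin` apart."""
    dists = torch.cdist(embeddings, embeddings, p=2)           # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)          # same-class mask
    self_mask = torch.eye(len(labels), dtype=torch.bool)
    intra = dists[same & ~self_mask]                           # same class, excluding self-pairs
    inter = dists[~same]                                       # different-class pairs
    zero = embeddings.new_zeros(())
    intra_term = intra.pow(2).mean() if intra.numel() else zero               # minimize intraclass variation
    inter_term = F.relu(margin - inter).pow(2).mean() if inter.numel() else zero  # enforce interclass margin
    return intra_term + inter_term

# Dummy FC-layer embeddings (batch of 8, 128-D) with 3 classes.
emb = torch.randn(8, 128, requires_grad=True)
lab = torch.randint(0, 3, (8,))
loss = metric_learning_loss(emb, lab)
loss.backward()
```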

Deep Metric Learning Based on Scalable Neighborhood Components for Remote Sensing Scene Characterization

With the development of convolutional neural networks (CNNs), the semantic understanding of remote sensing (RS) scenes has been significantly improved thanks to their prominent feature encoding capabilities. While many existing deep-learning models focus on designing different architectures, only a few works in the RS field have investigated the performance of the learned feature embeddings and the associated metric space. In particular, two main loss functions have been exploited: the contrastive loss and the triplet loss. However, the straightforward application of these techniques to RS images may not be optimal for capturing their neighborhood structures in the metric space, owing to the insufficient sampling of image pairs or triplets during training and to the inherent semantic complexity of remotely sensed data. To address these problems, we propose a new deep metric learning approach that overcomes the limitations on class discrimination by means of two components: 1) scalable neighborhood component analysis (SNCA), which aims at discovering the neighborhood structure in the metric space, and 2) the cross-entropy loss, which aims at preserving the class discrimination capability based on the learned class prototypes. Moreover, to preserve feature consistency across minibatches during training, a novel optimization mechanism based on a momentum update is introduced for minimizing the proposed loss. An extensive experimental comparison (using several state-of-the-art models and two benchmark data sets) has been conducted to validate the effectiveness of the proposed method from different perspectives, including 1) classification, 2) clustering, and 3) image retrieval. The related codes of this article will be made publicly available for reproducible research by the community.
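As a rough illustration of how the two loss components might be combined, the sketch below adds an NCA-style neighborhood term over L2-normalized embeddings to a cross-entropy term over class prototypes. The temperature, weighting, and the memory-bank/momentum machinery of the actual method are omitted; all names and values are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def snca_plus_ce_loss(embeddings, labels, prototypes, tau=0.05, alpha=0.5):
    """Illustrative combination of an NCA-style neighborhood loss and a
    cross-entropy loss over learned class prototypes."""
    z = F.normalize(embeddings, dim=1)                 # L2-normalize embeddings
    sim = z @ z.t() / tau                              # scaled cosine similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float('-inf'))    # a sample cannot pick itself
    p = F.softmax(sim, dim=1)                          # neighbor-selection probabilities
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    p_same = (p * same).sum(dim=1).clamp_min(1e-12)
    nca_loss = -p_same.log().mean()                    # encourage same-class neighbors
    logits = z @ F.normalize(prototypes, dim=1).t()    # similarity to each class prototype
    ce_loss = F.cross_entropy(logits / tau, labels)
    return alpha * nca_loss + (1 - alpha) * ce_loss

emb = torch.randn(16, 64, requires_grad=True)
lab = torch.randint(0, 4, (16,))
proto = torch.randn(4, 64, requires_grad=True)         # one prototype per class
loss = snca_plus_ce_loss(emb, lab, proto)
loss.backward()
```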

Simultaneous Road Surface and Centerline Extraction From Large-Scale Remote Sensing Images Using CNN-Based Segmentation and Tracing

Accurate and up-to-date road maps are of great importance in a wide range of applications. Unfortunately, automatic road extraction from high-resolution remote sensing images remains challenging due to the occlusion of trees and buildings, the limited discriminability of roads, and complex backgrounds. To address these problems, especially road connectivity and completeness, in this article we introduce a novel deep learning-based multistage framework to accurately extract the road surface and road centerline simultaneously. Our framework consists of three steps: boosting segmentation, multiple-starting-point tracing, and fusion. The initial road surface segmentation is achieved with a fully convolutional network (FCN), after which another, lighter FCN is applied several times to boost the accuracy and connectivity of the initial segmentation. In the tracing step, starting points are automatically generated by extracting the road intersections of the segmentation results and are then used to track consecutive and complete road networks through an iterative search strategy embedded in a convolutional neural network (CNN). The fusion step aggregates the semantic and topological information of road networks by combining the segmentation and tracing results to produce the final, refined road segmentation and centerline maps. We evaluated our method on three data sets covering various road situations in more than 40 cities around the world. The results demonstrate the superior performance of our proposed framework: our method exceeded the others by 7% on the connectivity indicator for road surface segmentation and by 40% on the completeness indicator for centerline extraction.
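The boosting stage described above repeatedly applies a lighter FCN to refine the initial mask. A minimal sketch of such a refinement loop follows; the placeholder network, channel layout, and number of rounds are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

# Placeholder "lighter FCN": a tiny conv net standing in for the refinement network.
light_fcn = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def boost_segmentation(image, initial_mask, model, n_rounds=3):
    """Refine the road mask by repeatedly re-predicting it from the image
    concatenated with the current prediction."""
    mask = initial_mask
    for _ in range(n_rounds):
        x = torch.cat([image, mask], dim=1)   # (B, 3+1, H, W): image tile + current mask
        mask = torch.sigmoid(model(x))        # refined road-probability map
    return mask

img = torch.randn(1, 3, 64, 64)               # image tile
init = torch.rand(1, 1, 64, 64)               # initial FCN road segmentation
refined = boost_segmentation(img, init, light_fcn)
```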

Extracting Dispersion Curves From Ambient Noise Correlations Using Deep Learning

We present a machine learning approach to classify the phases of surface wave dispersion curves. Standard frequency-time analysis (FTAN) of seismograms observed on an array of receivers is converted into an image, in which each pixel is classified as fundamental mode, first overtone, or noise. We use a convolutional neural network (U-Net) architecture with a supervised learning objective and incorporate transfer learning. The training is initially performed with synthetic data to learn coarse structure, followed by fine-tuning of the network using approximately 10% of the real data labeled by human analysts. The results show that the machine classification is nearly identical to the human-picked phases. Expanding the method to process multiple images at once did not improve the performance. The developed technique will facilitate the automated processing of large dispersion curve data sets.
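The two-stage training schedule (synthetic pretraining, then fine-tuning on roughly 10% of the human-labeled real data) can be sketched as below, with a toy per-pixel classifier standing in for the U-Net; the network, learning rates, and dummy data are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy per-pixel classifier standing in for the U-Net; 3 output classes:
# fundamental mode, first overtone, noise.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
criterion = nn.CrossEntropyLoss()

def train(model, loader, lr, epochs):
    """One training stage over (FTAN image, per-pixel label) batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for ftan_image, pixel_labels in loader:       # (B,1,H,W), (B,H,W) int64
            opt.zero_grad()
            loss = criterion(model(ftan_image), pixel_labels)
            loss.backward()
            opt.step()

# Dummy batches standing in for the synthetic and real labeled FTAN datasets.
dummy = [(torch.randn(2, 1, 64, 64), torch.randint(0, 3, (2, 64, 64)))]
train(net, dummy, lr=1e-3, epochs=1)   # stage 1: pretrain on synthetic data
train(net, dummy, lr=1e-4, epochs=1)   # stage 2: fine-tune on ~10% human-labeled real data
```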

Object Saliency-Aware Dual Regularized Correlation Filter for Real-Time Aerial Tracking

Spatial regularization has proved to be an effective method for alleviating the boundary effect and boosting the performance of a discriminative correlation filter (DCF) in aerial visual object tracking. However, existing spatial regularization methods usually treat the regularizer as a supplementary term apart from the main regression and neglect to regularize the filter involved in the correlation operation. To address this issue, this article introduces a novel object saliency-aware dual regularized correlation filter, i.e., DRCF. Specifically, the proposed DRCF tracker adopts a dual regularization strategy to directly regularize the filter involved in the correlation operation inside the core of the filter-generating ridge regression. This allows the DRCF tracker to suppress the boundary effect and consequently enhance tracking performance. Furthermore, an efficient method based on a saliency detection algorithm is employed to generate the dual regularizers dynamically, giving them online adjusting ability. The generated dynamic regularizers can thus automatically discern the object from the background and actively regularize the filter to accentuate the object during its unpredictable appearance changes. By the merits of the dual regularization strategy and the saliency-aware dynamic regularizers, the proposed DRCF tracker performs favorably in terms of suppressing the boundary effect, penalizing irrelevant background noise coefficients, and boosting the overall performance of the tracker. Exhaustive evaluations on 193 challenging video sequences from multiple well-known aerial object tracking benchmarks validate the accuracy and robustness of the proposed DRCF tracker against 27 other state-of-the-art methods. Meanwhile, the proposed tracker runs at 38.4 frames/s on a single CPU, sufficient for real-time aerial tracking applications.
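For context, the baseline that such dual regularization modifies is the spatially regularized correlation-filter ridge regression, written here in its generic SRDCF-style form (not the exact DRCF formulation):

$$\min_{\mathbf{f}}\;\Big\|\sum_{d=1}^{D}\mathbf{x}^{d}\ast\mathbf{f}^{d}-\mathbf{y}\Big\|_{2}^{2}+\lambda\sum_{d=1}^{D}\big\|\mathbf{w}\odot\mathbf{f}^{d}\big\|_{2}^{2},$$

where $\mathbf{x}^{d}$ are the feature channels of the training patch, $\mathbf{f}^{d}$ the per-channel filters, $\mathbf{y}$ the desired Gaussian response, and $\mathbf{w}$ a spatial penalty that grows toward the patch boundary. The DRCF idea described above is to additionally regularize the filter inside the correlation (regression) term itself, with both regularizers generated dynamically from a saliency map.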

Simultaneously Counting and Extracting Endmembers in a Hyperspectral Image Based on Divergent Subsets

Most existing endmember extraction techniques require prior knowledge about the number of endmembers in a hyperspectral image. The number of endmembers is normally estimated by a separate procedure, whose accuracy has a large influence on the endmember extraction performance. In order to bridge these two seemingly independent but, in fact, highly correlated procedures, we develop a new endmember estimation strategy that simultaneously counts and extracts endmembers. We consider a hyperspectral image as a hyperspectral pixel set and define the subset of pixels that are most different from one another as the divergent subset (DS) of the hyperspectral pixel set. The DS is characterized by the condition that any additional pixel would increase the likeness within the DS and, thus, reduce its divergent degree. We use the DS as the endmember set, with the number of endmembers being the subset cardinality. To render a practical computation scheme for identifying the DS, we reformulate it as a quadratic optimization problem with a numerical solution. In addition to operating as an endmember estimation algorithm by itself, the DS method can also cooperate with existing endmember extraction techniques by transforming them into novel and more effective schemes. Experimental results validate the effectiveness of the DS methodology in simultaneously counting and extracting endmembers, not only as an individual algorithm but also as a foundation for improving existing methods. Our full code is released for public evaluation:

https://github.com/xuanwentao/DivergentSubset
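The paper casts DS identification as a quadratic optimization; as a loose illustration of the underlying idea (not the authors' solver), the greedy sketch below keeps adding the pixel least similar to the current subset and stops once any further pixel would increase the average within-subset similarity. The similarity measure and stopping rule are illustrative assumptions.

```python
import numpy as np

def divergent_subset(pixels):
    """Greedy sketch: grow a subset of mutually dissimilar spectra until any
    additional pixel would increase the average within-subset similarity.
    `pixels` is an (N, B) array of N spectra with B bands."""
    z = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    sim = z @ z.T                                  # cosine similarity between spectra
    i, j = np.unravel_index(np.argmin(sim), sim.shape)
    subset = [int(i), int(j)]                      # start from the two least similar pixels
    while True:
        rest = [k for k in range(len(pixels)) if k not in subset]
        if not rest:
            break
        cand = min(rest, key=lambda k: sim[k, subset].mean())   # least "alike" candidate
        current = sim[np.ix_(subset, subset)].mean()
        grown = sim[np.ix_(subset + [cand], subset + [cand])].mean()
        if grown > current:                        # any further pixel raises the likeness
            break
        subset.append(cand)
    return pixels[subset], len(subset)             # endmember spectra and their count

endmembers, n_endmembers = divergent_subset(np.random.rand(500, 50))
```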

Characterization of MSS Channel Reflectance and Derived Spectral Indices for Building Consistent Landsat 1–5 Data Record

The Landsat 1-5 multispectral scanner system (MSS) collected records of the land surface mainly during 1972-1992. Investigations on MSS have been relatively limited compared with the numerous investigations on its successors, such as the Thematic Mapper (TM) and Enhanced TM Plus (ETM+). The benefits of the Landsat program cannot be fully realized without the inclusion of the MSS archives. Investigations of the Landsat 1-5 MSS channel reflectance characteristics were performed, followed by analyses of derived vegetation spectral indices and the Tasseled Cap (TC) transformed features, mainly using a collection of synthesized records. On average, the Landsat 4 MSS is generally comparable to the Landsat 5 MSS. The Landsat 1-3 MSSs show disagreement in channel reflectance compared with the Landsat 5 MSS, especially for the red channel (600-700 nm) and the near-infrared channel (700-800 nm). Meanwhile, the relative differences in vegetation spectral indices for the Landsat 3 MSS are mainly from -16% to -5%, with a median of about -11.5%, while those for the Landsat 2 MSS are mainly from -15% to -7%. Cross-validation tests and two case applications suggested that between-sensor consistency was generally improved through transformation models generated by ordinary least-squares regression. To improve the consistency of the vegetation indices and the TC greenness, a direct strategy employing the respective transformation models was more effective than calculations based on the transformed channel reflectance. Considering the limitations of the Landsat MSS archives, further efforts are needed to improve their comparability with observations from the successive Landsat sensors.
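The between-sensor transformation amounts to fitting ordinary least-squares models that map, for example, Landsat 2/3 MSS reflectance or indices to their Landsat 5 MSS equivalents. A minimal sketch follows; the synthetic arrays and band choice are placeholders, not the study's data.

```python
import numpy as np

def fit_ols_transform(source, target):
    """Fit target ~ gain * source + offset by ordinary least squares."""
    gain, offset = np.polyfit(source, target, deg=1)
    return gain, offset

# Placeholder arrays: coincident red-channel reflectance from Landsat 3 MSS and
# the Landsat 5 MSS reference (standing in for the synthesized record collection).
l3_red = np.random.rand(1000)
l5_red = 0.9 * l3_red + 0.02 + 0.01 * np.random.randn(1000)

gain, offset = fit_ols_transform(l3_red, l5_red)
l3_red_adjusted = gain * l3_red + offset           # harmonized to the Landsat 5 reference

# The "direct strategy" fits a separate model for each derived index (e.g., NDVI or
# TC greenness) rather than recomputing the index from transformed channel reflectance.
```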


How to design continents for maximum tides

GeoSpace: Earth & Space Science - Wed, 07/01/2020 - 17:29
A new study simulates ocean tides on imaginary Earth-like worlds, revealing the limits of topography’s influence on tidal energy

By Liza Lester

The shape and size of continents control the size of ocean tides on Earth-like planets, according to a new study that simulated the effects of random continental configurations on the energy of tides. The results have implications for Earth’s early history as well as the search for habitable planets beyond the solar system.

Modern day Earth’s arrangement of continents creates large tides at the extreme end of what is possible for Earth-like planets, according to the researchers.

“Earth’s current tides are the biggest we’ve found in 750 million years. I certainly think the tides now may be among the biggest in Earth’s history,” said Mattias Green, an oceanographer at Bangor University in Wales, the United Kingdom, and an author of the new study in AGU’s journal Geophysical Research Letters.

The width of an ocean basin controls the magnitude of the tides contained in it. The current Atlantic Ocean happens to be the perfect size and shape to produce large tides.

“The Atlantic is an almost perfectly tuned organ pipe for the tide. It resonates,” Green said, amplifying the tidal energy and making tides higher. Although the Pacific Ocean is larger than the Atlantic, its tides are smaller, because, Green said, “the Pacific is poorly tuned.”

Tides influence life on Earth by stirring the oceans, moving nutrients and distributing heat. On a long timescale, tides slow the speed of a planet’s rotation. Eventually, planets become tidally locked to their stars, with the same face always in sunlight.

Because tectonic activity constantly remodels Earth’s surface, the size of its tides has varied widely over repeated cycles of supercontinent formation and break-up.

Testing tidal limits

The new study investigated the upper and lower limits of tides on Earth-like planets by simulating 123 different topographies, from waterworlds to present day Earth to planets with tiny oceans covering only 10% of their surfaces (about the size of the Arctic Ocean).

The range in energy conveyed by tides was larger than the researchers expected, Green said, extending over three orders of magnitude due to continental complexity alone. Tides on Earth today are 1,000 times more energetic than on an ocean world of the same size, according to the new study.

“If you’re just one big ocean it’s difficult to have a big tide. Adding one New Zealand-sized continent doesn’t make much difference, but add a couple New Zealands and you get tides 100 times more energetic,” Green said.

Tides on Earth are generated, primarily, by the pull of the Moon’s gravity. If the seabed were perfectly frictionless, and there were no continents to get in the way, Earth would spin smoothly under the bulge of water, which would always align with the Moon.

“The key thing is that there is friction between the ocean and land. If we didn’t have that, the tidal bulge would point directly at the moon,” Green said. “We don’t have high tide when the moon is directly overhead, and that lag is what slows Earth’s spin and pushes the Moon away.”

Tides don’t peak when the moon is directly overhead because the viscosity of the water and friction against solid ground resist the relative motion of the water. Friction causes the release of tidal energy. The bulge of water lags behind the Moon, and this lag creates drag on Earth’s rotation, which has been slowing throughout its 4-billion-year history. Near the end of the time of dinosaurs, 70 million years ago, Earth’s day was only 23.5 hours long.

Modeling exoplanets

Day length is important to scientists studying exoplanets because it has huge consequences for climate and habitability. Planets that rotate very slowly, like Venus, have deep temperature contrasts between their sunward and spaceward facing hemispheres. This could be good or bad news for the possibility of life on the planet, depending on the proximity of its sun.

But the rotation of distant planets is difficult to observe directly. Astronomers have proposed estimates based on size, age and water content. Green said the new study sets useful bounds for such models when considering how fast tides can slow spin.

“Planets may spin down a lot quicker than we think,” he said.

—Liza Lester is a senior media relations specialist at AGU.


Vulnerable carbon stores twice as high where permafrost subsidence is factored in, new research finds

GeoSpace: Earth & Space Science - Wed, 06/17/2020 - 17:40

By Kate Peterson

New research suggests that subsidence, the gradual sinking of terrain caused by the loss of ice and soil mass in permafrost, is causing deeper thaw than previously thought and making twice as much carbon vulnerable as suggested by estimates that don’t account for this shifting ground. These findings, published this week in AGU’s Journal of Geophysical Research: Biogeosciences, suggest traditional methods of permafrost thaw measurement underestimate the amount of previously frozen carbon unlocked from warming permafrost by over 100 percent.

“Though we’ve known for a long time that subsidence happens across the permafrost zone, this phenomenon hasn’t been systematically accounted for when we talk about thaw and carbon vulnerability,” said Heidi Rodenhizer, a researcher at the Center for Ecosystem Science and Society at Northern Arizona University and lead author of the study, which was co-authored by a team from NAU, Woods Hole Research Center, Instituto de Ciencias Agrarias and Yale University. “We saw that in both warming and control environments, slight temperature increases drove significant thaw and unlocked more carbon than we saw when we weren’t looking at subsidence.”

A time series shows ground-ice ‘atlases’ in permafrost struggling to support the active layer as soil temperatures warm and accelerate thaw. As ice is lost, we see a significant shift in the soil surface over time, and the need to account for subsidence in measurements. Credit: Victor Leshyk, Center for Ecosystem Science and Society

Traditionally, permafrost thaw has been calculated by measuring active layer thickness. To do that, scientists insert a metal rod into the ground until it hits permafrost and measure from that depth to the soil surface. However, subsidence can mask actual thaw by lowering the soil surface and changing the frame of reference; for instance, some long-term experiments that rely on measuring active layer thickness have not recorded significant changes in thaw depth from year to year, despite rapid temperature warming.

So Rodenhizer and her team combined subsidence with active layer measurements to discover how much the ground was sinking, and how much unlocked carbon was being missed. At their warming site near Healy, Alaska, the team used high-accuracy GPS to measure the elevation of experimental plots at six time points over nine years. At each plot, Rodenhizer and her team found that permafrost thawed deeper than the active layer thickness indicated: 19 percent in the control plots, and 49 percent in the warming plots. The amount of newly-thawed carbon within the active layer was between 37 percent and 113 percent greater.
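To make the correction concrete: when the surface subsides, the rod-based active layer thickness is measured from a lower datum, so the thaw depth relative to the original surface is the probed thickness plus the accumulated subsidence. The numbers below are purely illustrative, not the study's measurements.

```python
# Illustrative numbers only, not the study's measurements.
active_layer_cm = 60.0    # depth to permafrost probed from today's (subsided) surface
subsidence_cm = 12.0      # GPS-measured drop of the surface since the baseline survey

thaw_depth_cm = active_layer_cm + subsidence_cm    # thaw relative to the original surface
extra = subsidence_cm / active_layer_cm
print(f"Thaw depth: {thaw_depth_cm:.0f} cm "
      f"({extra:.0%} deeper than the uncorrected active layer thickness)")
```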

As the Arctic warms twice as fast as the rest of the planet, these findings have potentially vast implications for global carbon fluxes. Due to the widespread nature of subsidence—about 20 percent of the permafrost zone is visibly subsided, and contains approximately 50 percent of all carbon stored in permafrost—failing to account for subsidence could lead to significant underestimates of future carbon release in global climate change projections. Rodenhizer’s team hopes that this study will convince more Arctic researchers across the permafrost monitoring network to apply this method and help change that.

“We know that these vast carbon stores in permafrost are at risk, and we have the tools to account for subsidence and track where the carbon is going,” said permafrost researcher and senior author Ted Schuur. “We should be using everything in our toolbox to make the most accurate estimates, because so much depends on what happens to Arctic carbon.”

This post was originally published online by NAU.


Utah’s arches continue to whisper their secrets

GeoSpace: Earth & Space Science - Wed, 06/17/2020 - 14:42

By Paul Gabrielsen, University of Utah

Two new studies show what can be learned from a short seismic checkup of natural rock arches and how erosion sculpts some arches—like the iconic Delicate Arch—into shapes that lend added strength.

A study published in the AGU journal Geophysical Research Letters begins with thorough measurements of vibrations at an arch in Utah, and applies those measurements to glean insights from 17 other arches with minimal scientific equipment required. The second study, published in Geomorphology, compares the strength of arch shapes, specifically beam-like shapes versus inverted catenary shapes like the famous Delicate Arch or Rainbow Bridge.

A seismological stethoscope

The Geohazards Research Group at the University of Utah measures small vibrations in rock structures, which come from earthquakes, wind and other sources both natural and man-made, to construct 3-D models of how the structures resonate. Find the group’s 3-D models here and watch how Moonshine Arch near Vernal, Utah, moves here. Part of the reason for these measurements is to assess the structural health of the rock feature.

In studying 17 natural arches, doctoral candidate Paul Geimer and colleagues set seismometers on the arches for a few hours to a few days. The data from those measurements, coupled with the 3-D models, gave important information about the modes, or major movement directions, of the arches as well as the frequencies for those modes of vibration.

“This is all possible using noninvasive methods,” Geimer says, “that form the first step in improving our ability to detect and identify damage within arches and similar features.” The noninvasive nature of the tests—with the seismometers sitting on the arch’s surface without damaging the rock—is important as many of Utah’s rock arches are culturally significant.

Nate Richman, field assistant, sets up a nodal seismometer atop a natural stone arch. Photo by Paul Geimer.

But the studies of the 17 arches used just one or two seismometers each, so with permission from the National Park Service, the researchers went to Musselman Arch in Canyonlands National Park to verify their earlier measurements. The arch is flat across the top and easily accessible, so they dotted it with 30 seismometers and listened.

“This added wealth of information helped us to confirm our assumptions that arch resonant modes closely follow simple predictive models, and surrounding bedrock acts as rigid support,” Geimer says. “To my knowledge, it was the first measurement of its kind for a natural span, after decades of similar efforts at man-made bridges.”

All of the arches studied exhibited low damping, Geimer says, which means that they continue to vibrate long after the excitation (a gust of wind, for example, or a seismic wave from a far-off earthquake) has passed. The results also help researchers infer the mechanical properties of rocks without having to drill into the rock to take a sample. For example, the stiffness of the Navajo Sandstone, widespread in Southern Utah, seems to be related to the amount of iron in the rock.

Sculpted for stability

Natural arches come in a range of shapes, including beam-like spans that stretch between two rock masses and classic freestanding or partly freestanding inverted catenary arches. A catenary is the arc formed by a hanging chain or rope—so flip it upside down and you’ve got an inverted catenary.
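For reference, a uniform hanging chain traces the catenary

$$y(x) = a\cosh\!\left(\frac{x}{a}\right),$$

and flipping that curve upside down gives the inverted catenary, $y(x) = c - a\cosh(x/a)$, the shape in which an arch of uniform weight is loaded purely in compression; the constant $a$ sets how broad or steep the arc is.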

“In its ideal form, the inverted catenary eliminates all tensile stresses,” Geimer says, creating a stable curved span supported solely by compression, which the host sandstone can resist most strongly. The idea that inverted catenary arches are sculpted by erosion into strong shapes is not new. But the Utah team’s approach to analyzing them is. Returning to their 3-D models of arches and analysis of their vibration modes, the researchers simulated the gravitational stresses in detail on each arch and calculated a number, called the mean principal stress ratio, or MSR, that classifies whether the arch is more like a beam or more like an inverted catenary.

The top of a natural stone arch with seismometers. Credit: Paul Geimer

The structure of the rock in which the arch is carved can also influence its shape. Inverted catenary arches are more likely to form in thick massive rock formations. “This allows gravitational stresses to be the dominant sculpting agent,” Geimer says, “leaving behind a smooth arc of rock held in compression.” Beam-like arches typically form in rock formations with multiple layers with varying strengths. “Weaker layers are removed by erosion more quickly,” he adds, “leaving behind a layer of stronger material too thin to form a catenary curve.”

While the inverted catenary shape can lend an arch stability in its current form, Geimer and associate professor Jeff Moore are quick to point out that the arch is still vulnerable to other means of eventual collapse.

“At Delicate Arch,” Moore says, “the arch rests on a very thin, easily eroded clayey layer, which provides a weak connection to the ground, while Rainbow Bridge is restrained from falling over by being slightly connected to an adjoining rock knoll.”

Still, the MSR metric can help researchers and public lands managers evaluate the stability an arch derives from its shape. The Geohazards Research Group is continuing to study other factors that can influence rock features’ stability, including how cracks grow in rock and how arches have collapsed in the past.

This post was originally published on the University of Utah website.


Correcting the impact of the isotope composition on the mixing ratio dependency of water vapour isotope measurements with cavity ring-down spectrometers

Atmos.Meas.Tech. discussions - Tue, 06/16/2020 - 13:39
Correcting the impact of the isotope composition on the mixing ratio dependency of water vapour isotope measurements with cavity ring-down spectrometers
Yongbiao Weng, Alexandra Touzeau, and Harald Sodemann
Atmos. Meas. Tech., 13, 3167–3190, https://doi.org/10.5194/amt-13-3167-2020, 2020
We find that the known mixing ratio dependence of laser spectrometers for water vapour isotope measurements varies with isotope composition. We have developed a scheme to correct for this isotope-composition-dependent bias. The correction is most substantial at low mixing ratios. Stability tests indicate that the first-order dependency is a constant instrument characteristic. Water vapour isotope measurements at low mixing ratios can now be corrected by following our proposed procedure.

An extended radar relative calibration adjustment (eRCA) technique for higher-frequency radars and range–height indicator (RHI) scans

Atmos.Meas.Tech. discussions - Mon, 06/15/2020 - 13:39
An extended radar relative calibration adjustment (eRCA) technique for higher-frequency radars and range–height indicator (RHI) scans
Alexis Hunzinger, Joseph C. Hardin, Nitin Bharadwaj, Adam Varble, and Alyssa Matthews
Atmos. Meas. Tech., 13, 3147–3166, https://doi.org/10.5194/amt-13-3147-2020, 2020
The calibration of weather radars is one of the dominant sources of error hindering their use. This work takes a technique for tracking changes in radar calibration using ground clutter and extends it to higher-frequency research radars. It demonstrates that, after modifications, the technique is successful, but that special care needs to be taken when applying it at high frequencies. The technique is verified using data from multiple DOE ARM field campaigns.
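The relative calibration adjustment idea can be illustrated in a few lines: track a high percentile of the ground-clutter reflectivity within a fixed clutter map over time, and interpret shifts relative to a baseline period as calibration drift. The percentile, baseline window, and arrays below are illustrative assumptions, not the eRCA implementation.

```python
import numpy as np

def relative_calibration_adjustment(clutter_dbz_by_day, baseline_days=slice(0, 30),
                                    percentile=95):
    """For each day, compare the chosen percentile of ground-clutter reflectivity
    with its baseline value; the difference (dB) tracks relative calibration drift."""
    daily = np.array([np.nanpercentile(day, percentile) for day in clutter_dbz_by_day])
    baseline = daily[baseline_days].mean()
    return baseline - daily        # positive values: the radar now reads low

# Placeholder data: per-day arrays of clutter-pixel reflectivity (dBZ) with a slow drift.
rng = np.random.default_rng(0)
days = [55 + rng.normal(0, 1, 2000) - 0.02 * d for d in range(120)]
rca_db = relative_calibration_adjustment(days)
```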

Validation of TROPOMI Surface UV Radiation Product

Atmos.Meas.Tech. discussions - Mon, 06/15/2020 - 13:39
Validation of TROPOMI Surface UV Radiation Product
Kaisa Lakkala, Jukka Kujanpää, Colette Brogniez, Nicolas Henriot, Antti Arola, Margit Aun, Frédérique Auriol, Alkiviadis F. Bais, Germar Bernhard, Veerle De Bock, Maxime Catalfamo, Christine Deroo, Henri Diémoz, Luca Egli, Jean-Baptiste Forestier, Ilias Fountoulakis, Rosa Delia Garcia, Julian Gröbner, Seppo Hassinen, Anu Heikkilä, Stuart Henderson, Gregor Hülsen, Bjørn Johnsen, Niilo Kalakoski, Angelos Karanikolas, Tomi Karppinen, Kevin Lamy, Sergio F. León-Luis, Anders V. Lindfors, Jean-Marc Metzger, Fanny Minvielle, Harel B. Muskatel, Thierry Portafaix, Alberto Redondas, Ricardo Sanchez, Anna Maria Siani, Tove Svendby, and Johanna Tamminen
Atmos. Meas. Tech. Discuss., https://doi.org/10.5194/amt-2020-121, 2020
Preprint under review for AMT (discussion: open, 0 comments)
The TROPOspheric Monitoring Instrument (TROPOMI) onboard the Sentinel-5 Precursor (S5P) satellite was launched on 13 October 2017 to provide measurements of atmospheric composition for atmosphere and climate research. Ground-based data from 25 sites located in arctic, subarctic, temperate, equatorial and antarctic areas were used for validation of the TROPOMI Surface Ultraviolet (UV) Radiation Product. For most sites, 60–80 % of TROPOMI data were within ±20 % of the ground-based data.
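The headline statistic, the share of satellite retrievals falling within ±20 % of the ground-based reference, can be computed from paired time series as in the sketch below; the arrays are placeholders, not the validation data.

```python
import numpy as np

# Placeholder paired samples: TROPOMI surface UV vs. coincident ground-based UV.
rng = np.random.default_rng(1)
ground_uv = rng.uniform(0.5, 5.0, 500)
tropomi_uv = ground_uv * (1 + rng.normal(0, 0.15, 500))

rel_diff = (tropomi_uv - ground_uv) / ground_uv
within_20 = np.mean(np.abs(rel_diff) <= 0.20)
print(f"{within_20:.0%} of TROPOMI data within ±20 % of the ground-based data")
```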

The Importance of Size Ranges in Aerosol Instrument Intercomparisons: A Case Study for the ATom Mission

Atmos.Meas.Tech. discussions - Mon, 06/15/2020 - 13:39
The Importance of Size Ranges in Aerosol Instrument Intercomparisons: A Case Study for the ATom Mission
Hongyu Guo, Pedro Campuzano-Jost, Benjamin A. Nault, Douglas A. Day, Jason C. Schroder, Jack E. Dibb, Maximilian Dollner, Bernadett Weinzierl, and Jose L. Jimenez
Atmos. Meas. Tech. Discuss., https://doi.org/10.5194/amt-2020-224, 2020
Preprint under review for AMT (discussion: open, 0 comments)
We utilize a set of high-quality datasets collected during the NASA ATom aircraft mission to investigate how differences in the particle size ranges observable by different aerosol instruments affect aerosol measurement comparisons. Very good agreement was found between chemically and physically derived submicron aerosol volume. The results support a lack of significant unknown biases in the response of the Aerodyne Aerosol Mass Spectrometer (AMS) when sampling remote aerosols across the globe.
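A sketch of the size-range issue: comparisons are only meaningful over the diameter range both instruments can observe, so one can integrate a (here synthetic) number size distribution up to each instrument's cutoff and compare the resulting volumes. The cutoffs and distribution parameters below are illustrative, not ATom values.

```python
import numpy as np

# Synthetic lognormal number size distribution dN/dlnD (cm^-3); purely illustrative.
d = np.logspace(-2, 1, 500)                 # particle diameter, micrometers
dlnd = np.gradient(np.log(d))               # log-diameter bin widths
dndlnd = 3000 * np.exp(-0.5 * ((np.log(d) - np.log(0.2)) / 0.6) ** 2)

def volume_below(cutoff_um):
    """Aerosol volume (um^3 cm^-3) integrated up to a diameter cutoff."""
    m = d <= cutoff_um
    return np.sum((np.pi / 6) * d[m] ** 3 * dndlnd[m] * dlnd[m])

v_a = volume_below(0.7)    # e.g., an AMS-like upper size cutoff (illustrative)
v_b = volume_below(1.0)    # e.g., an optical sizer's submicron cutoff (illustrative)
print(f"Volume ratio when cutoffs differ: {v_a / v_b:.2f}")
```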
