Feed aggregator

Evaluation of UV Aerosol Retrievals from an Ozone Lidar

Evaluation of UV Aerosol Retrievals from an Ozone Lidar
Shi Kuang, Bo Wang, Michael J. Newchurch, Paula Tucker, Edwin W. Eloranta, Joseph P. Garcia, Ilya Razenkov, John T. Sullivan, Timothy A. Berkoff, Guillaume Gronoff, Liqiao Lei, Christoph J. Senff, Andrew O. Langford, Thierry Leblanc, and Vijay Natraj
Atmos. Meas. Tech. Discuss., https://doi.org/10.5194/amt-2020-40, 2020
Preprint under review for AMT (discussion: open, 0 comments)
Ozone lidar is a state-of-the-art remote sensing instrument for measuring atmospheric ozone concentration with high spatio-temporal resolution. In this study, we show that an ozone lidar can also provide reliable, high-resolution aerosol measurements by using collocated data taken by the ozone lidar and an aerosol lidar. This means that ozone lidars are capable of providing simultaneous ozone and aerosol measurements.
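The evaluation rests on comparing retrievals from the two collocated instruments. As a minimal illustration only (not the retrieval algorithm itself), the sketch below interpolates two hypothetical backscatter profiles onto a common altitude grid and computes simple agreement statistics; all names and grids are assumptions.

```python
import numpy as np

def compare_collocated_profiles(z_o3, beta_o3, z_aer, beta_aer):
    """Interpolate the aerosol-lidar profile onto the ozone-lidar altitude grid
    (z_aer must be increasing) and return correlation, mean bias and RMS difference."""
    beta_ref = np.interp(z_o3, z_aer, beta_aer)            # common altitude grid
    valid = np.isfinite(beta_o3) & np.isfinite(beta_ref)   # drop missing bins
    x, y = beta_o3[valid], beta_ref[valid]
    corr = np.corrcoef(x, y)[0, 1]
    bias = np.mean(x - y)
    rms = np.sqrt(np.mean((x - y) ** 2))
    return corr, bias, rms
```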

Simultaneous leaf-level measurement of trace gas emissions and photosynthesis with a portable photosynthesis system

Atmos.Meas.Tech. discussions - Wed, 03/25/2020 - 20:05
Simultaneous leaf-level measurement of trace gas emissions and photosynthesis with a portable photosynthesis system
Mj Riches, Daniel Lee, and Delphine K. Farmer
Atmos. Meas. Tech. Discuss., https://doi.org/10.5194/amt-2020-45, 2020
Preprint under review for AMT (discussion: open, 0 comments)
This manuscript presents a thorough characterization of a leaf-emission sampling technique that couples a portable photosynthesis system with different trace gas analyzers. We provide several case studies using both online and offline gas analyzers to measure different types of leaf emissions, and we highlight both the capabilities and pitfalls of the method.

Darkness, not cold, likely responsible for dinosaur-killing extinction

GeoSpace: Earth & Space Science - Thu, 03/19/2020 - 13:17

Roughly 66 million years ago, an asteroid slammed into the Yucatán Peninsula. New research shows darkness, not cold, likely drove a mass extinction after the impact.
Credit: NASA.

By Lauren Lipuma

New research finds soot from global fires ignited by an asteroid impact could have blocked sunlight long enough to drive the mass extinction that killed most life on Earth, including the dinosaurs, 66 million years ago.

The Cretaceous–Paleogene extinction event wiped out about 75 percent of all species on Earth. An asteroid impact at the tip of Mexico’s Yucatán Peninsula caused a period of prolonged cold and darkness, called an impact winter, that likely fueled a large part of the mass extinction. But scientists have had a hard time teasing out the details of the impact winter and what the exact mechanism was that killed life on Earth.

A new study in AGU’s journal Geophysical Research Letters simulates the contributions of the impact’s sulfur, dust, and soot emissions to the extreme darkness and cold of the impact winter. The results show the cold would have been severe but likely not devastating enough to drive a mass extinction. However, soot emissions from global forest fires darkened the sky enough to kill off photosynthesizers at the base of the food web for well over a year, according to the study.

“This low light seems to be a really big signal that would potentially be devastating to life,” said Clay Tabor, a geoscientist at the University of Connecticut and lead author of the new study. “It seems like these low light conditions are a probable explanation for a large part of the extinction.”

The results help scientists better understand this intriguing mass extinction that ultimately paved the way for humans and other mammals to evolve. But the study also provides insight into what might happen in a nuclear winter scenario, according to Tabor.

“The main driver of a nuclear winter is actually from soot in a similar type situation,” Tabor said. “What it really highlights is just how potentially impactful soot can be on the climate system.”

The impact and extinction

The Chicxulub asteroid impact spewed clouds of ejecta into the upper atmosphere that then rained back down to Earth. The returning particles would have had enough energy to broil Earth’s surface and ignite global forest fires. Soot from the fires, along with sulfur compounds and dust, blocked out sunlight, causing an impact winter lasting several years. Previous research estimates average global temperatures plummeted by at least 26 degrees Celsius (47 degrees Fahrenheit).

Scientists know the extreme darkness and cold were devastating to life on Earth but are still teasing apart which component was more harmful to life and whether the soot, sulfate, or dust particles were most disruptive to the climate.

In the new study, Tabor and his colleagues used a sophisticated climate model to simulate the climatic effects of soot, sulfates, and dust from the impact.

Their results suggest soot emissions from global fires absorbed the most sunlight for the longest amount of time. The model showed soot particles were so good at absorbing sunlight that photosynthesis levels dropped to below one percent of normal for well over a year.

“Based on the properties of soot and its ability to effectively absorb incoming sunlight, it did a very good job at blocking sunlight from reaching the surface,” Tabor said. “In comparison to the dust, which didn’t stay in the atmosphere for nearly as long, and the sulfur, which didn’t block as much light, the soot could actually block almost all light from reaching the surface for at least a year.”

A refuge for life

The darkness would have been devastating to photosynthesizers and could explain the mass extinction through a collapse of the food web, according to the researchers. All life on Earth depends on photosynthesizers like plants and algae that harvest energy from sunlight.

Interestingly, the temperature drop likely wasn’t as disturbing to life as the darkness, according to the study.

“It’s interesting that in their model, soot doesn’t necessarily cause a much larger cooling when compared to other types of aerosol particles produced by the impact, but soot does cause surface sunlight to decline a lot more,” said Manoj Joshi, a climate dynamics professor at the University of East Anglia in the United Kingdom who was not connected to the new study.

In regions like the high latitudes, the results suggest the oceans didn’t cool significantly more than they do during a normal seasonal cycle.

“Even though the ocean cools by a decent amount, it doesn’t cool by that much everywhere, particularly in the higher latitude regions,” Tabor said. “In comparison to the almost two years without photosynthetic activity from soot, it seems to be of secondary importance.”

As a result, high latitude coastal regions may have been refuges for life in the months after the impact. Plants and animals living in the Arctic or Antarctic are already used to large temperature swings, extreme cold, and low light, so they may have had a better chance of surviving the impact winter, according to the researchers. 

Lauren Lipuma is a science writer at AGU. 

Wavelet tight frame and prior image-based image reconstruction from limited-angle projection data

Inverse Problems and Imaging - Fri, 12/01/2017 - 20:00
In some practical applications of computed tomography (CT), only limited-angle projection data of an object can be obtained because of restrictions on the scanning conditions. Since the projection data are then incomplete, limited-angle artifacts appear near the edges of images reconstructed with classical algorithms such as filtered backprojection (FBP). The reconstructed image can be well approximated by sparse coefficients under a proper wavelet tight frame, and its quality can be further improved by an available prior image. To deal with the limited-angle CT reconstruction problem, we propose a minimization model based on a wavelet tight frame and a prior image, and we solve this minimization problem efficiently by minimizing over the variables alternately. Moreover, we show that each bounded sequence generated by our method converges to a critical or stationary point. The experimental results indicate that our algorithm can efficiently suppress artifacts and noise while preserving the edges of the reconstructed image; moreover, introducing the prior image does not cause the reconstruction to lose important information that is absent from the prior image.
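As a rough sketch of this kind of frame-based alternating scheme (an assumed generic form, not the authors' exact model or algorithm), the snippet below applies proximal-gradient steps to min_u 0.5*||A u - p||^2 + lam*||W u||_1 + mu*||W (u - u_prior)||_1, where A, At, W, Wt are placeholder callables for the projection operator, its adjoint, and a tight-frame analysis/synthesis pair.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximity operator of t*||.||_1, applied to frame coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reconstruct(p, A, At, W, Wt, u_prior, lam=1e-2, mu=1e-2, step=1e-3, n_iter=200):
    """Proximal-gradient sketch: gradient step on the data term, then the two
    l1 proximity operators applied one after the other (a heuristic splitting).
    For a tight frame (Wt W = I) each prox is Wt(soft_threshold(W(.), t))."""
    u = np.zeros_like(u_prior)
    for _ in range(n_iter):
        u = u - step * At(A(u) - p)                       # data-fidelity gradient step
        u = Wt(soft_threshold(W(u), step * lam))          # sparsity of u in the frame
        d = Wt(soft_threshold(W(u - u_prior), step * mu)) # sparsity of u - u_prior
        u = u_prior + d
    return u
```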

Multiplicative noise removal with a sparsity-aware optimization model

Inverse Problems and Imaging - Fri, 12/01/2017 - 20:00
Restoration of images contaminated by multiplicative noise (also known as speckle noise) is a key issue in coherent image processing. The images under consideration are often highly compressible in certain suitably chosen transform domains. By exploiting this intrinsic feature of images, this paper introduces a variational restoration model for multiplicative noise reduction that consists of a term reflecting the observed image and the multiplicative noise, a quadratic term measuring the closeness of the underlying image, in a transform domain, to a sparse vector, and a sparse regularizer for removing the multiplicative noise. Unlike popular existing models, which focus on pursuing convexity, the proposed sparsity-aware model may be nonconvex, depending on the parameters of the model, in order to achieve optimal denoising performance. An algorithm for finding a critical point of the objective function is developed based on coupled fixed-point equations expressed in terms of the proximity operators of the functions that appear in the objective. Convergence analysis of the algorithm is provided. Experimental results demonstrate that the proposed iterative algorithm is sensitive to the initialization, and we observe that, with SAR-BM3D-filtered images as initial estimates, the proposed method can remarkably outperform several state-of-the-art methods in terms of the quality of the restored images.
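The paper's coupled fixed-point equations are specific to its objective; purely as an illustration of the building blocks (a sparsifying transform plus a proximity operator), the sketch below treats speckle-like noise in the log domain, where it becomes additive, and runs a standard forward-backward iteration with an l1 prox on DCT coefficients. The quadratic log-domain fidelity and the DCT are assumptions for illustration, not the model in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn   # orthonormal transform as a stand-in

def prox_l1(x, t):
    """Proximity operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_multiplicative(f, lam=0.1, gamma=1.0, n_iter=100):
    """Toy forward-backward iteration for speckle-like noise: work on v = log(f)
    so the noise is additive, penalize 0.5*||v - log f||^2 plus lam*||DCT(v)||_1."""
    logf = np.log(np.maximum(f, 1e-6))
    v = logf.copy()
    for _ in range(n_iter):
        grad = v - logf                                    # gradient of the fidelity
        c = dctn(v - gamma * grad, norm='ortho')           # forward step, then analysis
        v = idctn(prox_l1(c, gamma * lam), norm='ortho')   # prox step, then synthesis
    return np.exp(v)                                       # back to the image domain
```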

A numerical study of a mean curvature denoising model using a novel augmented Lagrangian method

Inverse Problems and Imaging - Fri, 12/01/2017 - 20:00
In this paper, we propose a new augmented Lagrangian method for the mean-curvature-based image denoising model [33]. Unlike the previous works in [21,35], this new method involves only two Lagrange multipliers, which significantly reduces the effort of choosing appropriate penalization parameters to ensure the convergence of the iterative process of finding the associated saddle points. With this new algorithm, we demonstrate the features of the model numerically, including the preservation of image contrasts and object corners, as well as its capability of generating smooth patches of image graphs. The data selection property and the role of the spatial mesh size in the model performance are also discussed.

Analysis of a variational model for motion compensated inpainting

Inverse Problems and Imaging - Fri, 12/01/2017 - 20:00
We study a variational problem for simultaneous video inpainting and motion estimation. We consider a functional proposed by Lauze and Nielsen [25] and study, by means of the relaxation method of the calculus of variations, a slightly modified version of this functional. The domain of the relaxed functional consists of functions of bounded variation, and we compute a representation formula for the relaxed functional. The representation formula shows the role of the discontinuities of the various functions involved in the variational model. The present study clarifies the variational properties of the functional proposed in [25] for motion-compensated video inpainting.

Some remarks on the small electromagnetic inhomogeneities reconstruction problem

Inverse Problems and Imaging - Fri, 12/01/2017 - 20:00
This work considers the problem of recovering small electromagnetic inhomogeneities in a bounded domain $\Omega \subset \mathbb{R}^3$ from a single set of Cauchy data at a fixed frequency. This problem has been considered by several authors, in particular in [4]. In this paper, we revisit this work with the objective of providing another identification method and establishing stability results from a single set of Cauchy data at a fixed frequency. Our approach is based on the asymptotic expansion of the boundary condition derived in [4] and the extension of the direct algebraic algorithm proposed in [1].

Accelerated Bregman operator splitting with backtracking

Inverse Problems and Imaging - Fri, 12/01/2017 - 20:00
This paper develops two accelerated Bregman Operator Splitting (BOS) algorithms with backtracking for solving regularized large-scale linear inverse problems in which the regularization term may be nonsmooth. The first algorithm improves the rate of convergence of BOSVS [5] with respect to the smooth component of the objective function by incorporating Nesterov's multi-step acceleration scheme, under the assumption that the feasible set is bounded. The second algorithm can handle the case where the feasible set is unbounded. Moreover, it allows a more aggressive stepsize than the first scheme by properly selecting the penalty parameter and jointly updating the acceleration parameter and stepsize. Both algorithms exhibit better practical performance than BOSVS and AADMM [21], while preserving the same accelerated rate of convergence as AADMM. Numerical results on total-variation-based image reconstruction problems indicate the effectiveness of the proposed algorithms.
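The abstract names two ingredients: Nesterov's multi-step acceleration and a backtracking stepsize rule. The sketch below is not the authors' BOS/BOSVS scheme; it is a generic FISTA-style accelerated proximal-gradient loop with backtracking, included only to make those two ingredients concrete. f, grad_f, and prox_g are user-supplied callables for real-valued arrays, with prox_g(v, t) = prox_{t g}(v).

```python
import numpy as np

def accel_prox_gradient(f, grad_f, prox_g, x0, L0=1.0, eta=2.0, n_iter=100):
    """Nesterov-accelerated proximal gradient with backtracking for min f(x) + g(x),
    f smooth, g nonsmooth with an easy proximity operator."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(n_iter):
        gy, fy = grad_f(y), f(y)
        while True:                                # backtracking on the smooth part:
            x_new = prox_g(y - gy / L, 1.0 / L)    # increase L until the quadratic
            d = x_new - y                          # upper bound at y holds
            if f(x_new) <= fy + np.vdot(gy, d) + 0.5 * L * np.vdot(d, d):
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))   # multi-step acceleration
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```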

Inversion of weighted divergent beam and cone transforms

Inverse Problems and Imaging - Fri, 12/01/2017 - 20:00
In this paper, we investigate the relations between the Radon and weighted divergent beam and cone transforms. Novel inversion formulas are derived for the latter two. The weighted cone transform arises, for instance, in image reconstruction from the data obtained by Compton cameras, which have promising applications in various fields, including biomedical and homeland security imaging and gamma ray astronomy. The inversion formulas are applicable for a wide variety of detector geometries in any dimension. The results of numerical implementation of some of the formulas in dimensions two and three are also provided.

Near-field imaging of sound-soft obstacles in periodic waveguides

Inverse Problems and Imaging - Fri, 12/01/2017 - 20:00
In this paper, we introduce a direct method for inverse scattering problems in a periodic waveguide from near-field scattered data. The direct scattering problem is to simulate point sources scattered by a sound-soft obstacle embedded in the periodic waveguide, and the aim of the inverse problem is to reconstruct the obstacle from near-field data measured on line segments outside the obstacle. First, we approximate the scattered field by solutions of a series of Dirichlet exterior problems, and then the shape of the obstacle can be deduced directly from the Dirichlet boundary condition. We also show that this approximation procedure is justified, since the solutions of the Dirichlet exterior problems are dense in the set of scattered fields. Finally, we give several examples to show that this method works well for different periodic waveguides.

The generalized linear sampling and factorization methods only depend on the sign of the contrast on the boundary

Inverse Problems and Imaging - Fri, 12/01/2017 - 20:00
We extend the applicability of the Generalized Linear Sampling Method (GLSM) [2] and the Factorization Method (FM) [16] to the case of inhomogeneities where the contrast changes sign. Both methods give an exact characterization of the target shapes in terms of the far-field operator (at a fixed frequency) using the coercivity property of a special solution operator. We prove this property assuming that the contrast has a fixed sign in a neighborhood of the boundary of the inhomogeneities. We treat both isotropic and anisotropic scatterers, with possibly different supports for the isotropic and anisotropic parts. We finally validate the methods through some numerical tests in two dimensions.

Subdivision connectivity remeshing via Teichmüller extremal map

Inverse Problems and Imaging - Sun, 10/01/2017 - 20:00
Curvilinear surfaces in 3D Euclidean space are commonly represented by triangular meshes. The structure of the triangulation is important, since it affects the accuracy and efficiency of the numerical computation on the mesh. Remeshing refers to the process of transforming an unstructured mesh into one with desirable structures, such as subdivision connectivity. This is commonly achieved by parameterizing the surface onto a simple parameter domain, on which a structured mesh is built. The 2D structured mesh is then projected onto the surface via the parameterization. Two major tasks are involved. First, an effective algorithm for parameterizing surface meshes, usually conformally, is necessary. However, for a highly irregular mesh with skinny triangles, computing a folding-free conformal parameterization is difficult. The second task is to build a structured mesh on the parameter domain that is adaptive to the area distortion of the parameterization while maintaining good shapes of triangles. This paper presents an algorithm to remesh a highly irregular mesh to a structured one with subdivision connectivity and good triangle quality. We propose an effective algorithm to obtain a conformal parameterization of a highly irregular mesh, using quasi-conformal Teichmüller theories. Conformality distortion of an initial parameterization is adjusted by a quasi-conformal map, resulting in a folding-free conformal parameterization. Next, we propose an algorithm to obtain a regular mesh with subdivision connectivity and good triangle quality on the conformal parameter domain, which is adaptive to the area distortion, through the landmark-matching Teichmüller map. A remeshed surface can then be obtained through the parameterization. Experiments have been carried out to remesh surface meshes representing real 3D geometric objects using the proposed algorithm. Results show the efficacy of the algorithm in optimizing the regularity of an irregular triangulation.

Well-posed Bayesian inverse problems and heavy-tailed stable quasi-Banach space priors

Inverse Problems and Imaging - Sun, 10/01/2017 - 20:00
This article extends the framework of Bayesian inverse problems in infinite-dimensional parameter spaces, as advocated by Stuart (Acta Numer. 19:451--559, 2010) and others, to the case of a heavy-tailed prior measure in the family of stable distributions, such as an infinite-dimensional Cauchy distribution, for which polynomial moments are infinite or undefined. It is shown that analogues of the Karhunen--Loève expansion for square-integrable random variables can be used to sample such measures on quasi-Banach spaces. Furthermore, under weaker regularity assumptions than those used to date, the Bayesian posterior measure is shown to depend Lipschitz continuously in the Hellinger metric upon perturbations of the misfit function and observed data.
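As a rough illustration of the sampling idea (an analogue of the Karhunen–Loève expansion with heavy-tailed coefficients), the sketch below draws a random function on [0, 1] with i.i.d. standard Cauchy coefficients against a sine basis with polynomially decaying weights; the basis, decay rate, and scaling are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def sample_cauchy_prior(n_modes=200, n_grid=512, decay=2.0, rng=None):
    """Draw one sample of a heavy-tailed 'KL-type' random function on [0, 1]:
    u(x) = sum_k sigma_k * xi_k * phi_k(x), with xi_k i.i.d. standard Cauchy,
    sigma_k = k^(-decay), and phi_k a sine basis."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(0.0, 1.0, n_grid)
    k = np.arange(1, n_modes + 1)
    xi = rng.standard_cauchy(n_modes)                     # heavy-tailed coefficients
    sigma = k.astype(float) ** (-decay)                   # summable weights
    phi = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, x))   # (n_modes, n_grid) basis
    return x, (sigma * xi) @ phi

# Example: one draw; its pointwise values have no finite moments.
x, u = sample_cauchy_prior()
```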

An undetermined time-dependent coefficient in a fractional diffusion equation

Inverse Problems and Imaging - Sun, 10/01/2017 - 20:00
In this work, we consider a fractional diffusion equation (FDE) $${}^C D^\alpha_t u(x,t)-a(t)\mathcal{L}u(x,t)=F(x,t)$$ with a time-dependent diffusion coefficient $a(t)$. This extends [13], which deals with this FDE in one-dimensional space. For the direct problem, given $a(t)$, we establish existence, uniqueness and some regularity properties for a more general domain $\Omega$ and right-hand side $F(x,t)$. For the inverse problem of recovering $a(t)$, we introduce an operator $K$, one of whose fixed points is $a(t)$, and show its monotonicity and the uniqueness and existence of its fixed points. With these properties, a reconstruction algorithm for $a(t)$ is created, and some numerical results are provided to illustrate the theory.
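A minimal sketch of the fixed-point reconstruction idea described above, assuming the operator K is available as a callable (in the paper, evaluating K involves solving the direct fractional diffusion problem); the stopping rule and names are illustrative.

```python
import numpy as np

def fixed_point_reconstruct(K, a0, tol=1e-8, max_iter=200):
    """Iterate a_{n+1} = K(a_n) until successive iterates stop changing.
    K : callable mapping a discretized coefficient a(t) to an updated one.
    a0: initial guess for a(t) on a time grid (1D numpy array)."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        a_next = K(a)
        if np.max(np.abs(a_next - a)) < tol:   # converged to a fixed point
            return a_next
        a = a_next
    return a
```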

A direct imaging method for the half-space inverse scattering problem with phaseless data

Inverse Problems and Imaging - Sun, 10/01/2017 - 20:00
We propose a direct imaging method based on the reverse time migration method for finding extended obstacles with phaseless total field data in the half space. We prove that the imaging resolution of the method is essentially the same as the imaging results using the scattering data with full phase information when the obstacle is far away from the surface of the half-space where the measurement is taken. Numerical experiments are included to illustrate the powerful imaging quality.

A wavelet frame approach for removal of mixed gaussian and impulse noise on surfaces

Inverse Problems and Imaging - Sun, 10/01/2017 - 20:00
Surface denoising is a fundamental problem in geometry processing and computer graphics. In this paper, we propose a wavelet frame based variational model to restore surfaces which are corrupted by mixed Gaussian and impulse noise, under the assumption that the region corrupted by impulse noise is unknown. The model contains a universal $\ell_1 + \ell_2$ fidelity term and an $\ell_1$-regularized term which makes additional use of the wavelet frame transform on surfaces in order to preserve key features such as sharp edges and corners. We then apply the augmented Lagrangian and accelerated proximal gradient methods to solve this model. In the end, we demonstrate the efficacy of our approach with numerical experiments both on surfaces and functions defined on surfaces. The experimental results show that our method is competitive relative to some existing denoising methods.

Data driven recovery of local volatility surfaces

Inverse Problems and Imaging - Sun, 10/01/2017 - 20:00
This paper examines issues of data completion and location uncertainty, common to many practical PDE-based inverse problems, in the context of option calibration via recovery of local volatility surfaces. While real data is usually more accessible for this application than for many others, the data is often given only at a restricted set of locations. We show that attempts to “complete missing data” by approximation or interpolation, proposed and applied in the literature, may produce results that are inferior to treating the data as scarce. Furthermore, model uncertainties may arise which translate to uncertainty in data locations, and we show how a model-based adjustment of the asset price may prove advantageous in such situations. We further compare a carefully calibrated Tikhonov-type regularization approach against a similarly adapted EnKF method, in an attempt to fine-tune the data assimilation process. The EnKF method offers reassurance as a different method for assessing the solution in a problem where information about the true solution is difficult to come by. However, the additional advantage of the latter approach turns out to be limited in our context.
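As context for the EnKF comparison, the sketch below is a generic ensemble Kalman analysis step, not the paper's calibrated setup; here X would hold ensemble members of the discretized local volatility surface, Y the corresponding model option prices, and d the observed prices. Shapes and names are assumptions.

```python
import numpy as np

def enkf_analysis(X, Y, d, R, rng=None):
    """One generic EnKF analysis step.
    X: (n_par, n_ens) ensemble of parameter vectors,
    Y: (n_obs, n_ens) model-predicted observations per member,
    d: (n_obs,) observed data, R: (n_obs, n_obs) observation-noise covariance."""
    rng = np.random.default_rng() if rng is None else rng
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # parameter anomalies
    B = Y - Y.mean(axis=1, keepdims=True)        # observation anomalies
    C_xy = A @ B.T / (n_ens - 1)
    C_yy = B @ B.T / (n_ens - 1)
    K = C_xy @ np.linalg.inv(C_yy + R)           # Kalman gain
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, size=n_ens).T
    return X + K @ (D - Y)                       # updated ensemble
```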
