Latest articles for selected journal
Fri, 12/01/2017 - 20:00
In some practical applications of computed tomography (CT), only limited-angle projection data of an object can be obtained because of restrictions on the scanning conditions.
In these situations the projection data are incomplete, and classical reconstruction algorithms, such as filtered backprojection (FBP), produce limited-angle artifacts near the edges of the reconstructed image.
The reconstructed image can be well approximated by sparse coefficients under a suitable wavelet tight frame, and its quality can be further improved when a prior image is available. To deal with the limited-angle CT reconstruction problem, we propose a minimization model based on a wavelet tight frame and a prior image, and solve this minimization problem efficiently by minimizing over the variables alternately. Moreover, we show that every bounded sequence generated by our method converges to a critical (stationary) point. Experimental results indicate that our algorithm efficiently suppresses artifacts and noise while preserving the edges of the reconstructed image; moreover, the reconstruction does not miss important information that is absent from the prior image.
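A minimal sketch of an alternating scheme of this general type (the forward projector A and its adjoint At, the tight frame analysis/synthesis operators W and Wt with Wt(W(x)) = x, and the particular splitting below are illustrative assumptions, not the authors' exact model):

import numpy as np

def soft_threshold(v, t):
    """Componentwise soft-thresholding, the proximity operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def limited_angle_reconstruction(A, At, W, Wt, b, x_prior, lam, mu, step, n_iter=200):
    """Illustrative alternating minimization for
         min_{x,c}  0.5*||A x - b||^2 + 0.5*mu*||x - x_prior||^2
                    + 0.5*||W x - c||^2 + lam*||c||_1,
       where A/At are the forward projector and its adjoint and W/Wt are the tight frame
       analysis/synthesis operators (assumed to satisfy Wt(W(x)) = x)."""
    x = x_prior.copy()
    for _ in range(n_iter):
        c = soft_threshold(W(x), lam)                             # exact minimization over c
        grad = At(A(x) - b) + mu * (x - x_prior) + (x - Wt(c))    # gradient in x with c fixed
        x = x - step * grad                                       # one descent step on x
    return x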
Fri, 12/01/2017 - 20:00
Restoration of images contaminated by multiplicative noise (also known as speckle noise) is a key issue in coherent image processing. The images under consideration are often highly compressible in suitably chosen transform domains. By exploiting this intrinsic feature, this paper introduces a variational restoration model for multiplicative noise reduction that consists of a term reflecting the observed image and the multiplicative noise, a quadratic term measuring the closeness of the underlying image, in a transform domain, to a sparse vector, and a sparse regularizer for removing multiplicative noise. Unlike popular existing models, which pursue convexity, the proposed sparsity-aware model may be nonconvex, depending on the parameter choices required for optimal denoising performance. An algorithm for finding a critical point of the objective function is developed from coupled fixed-point equations expressed in terms of the proximity operators of the functions appearing in the objective, and a convergence analysis of the algorithm is provided.
Experimental results demonstrate that the best restoration results of the proposed iterative algorithm depend on the choice of initialization. We observe that the proposed method, initialized with SAR-BM3D filtered images, can remarkably outperform several state-of-the-art methods in terms of the quality of the restored images.
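To illustrate the kind of proximity-operator fixed-point iteration referred to above (a generic single-operator sketch, not the authors' coupled fixed-point system or their specific nonconvex objective):

import numpy as np

def prox_l1(v, t):
    """Proximity operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_fixed_point(grad_f, prox_g, x0, step, n_iter=500):
    """Generic fixed-point iteration built from a proximity operator,
         x <- prox_{step*g}( x - step * grad_f(x) ),
       whose fixed points are critical points of f + g (also for suitable nonconvex g)."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x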
Fri, 12/01/2017 - 20:00
In this paper, we propose a new augmented Lagrangian method for the mean curvature based image denoising model [33]. Unlike the previous works [21,35], this new method involves only two Lagrange multipliers, which significantly reduces the effort of choosing appropriate penalization parameters to ensure
the convergence of the iterative process of finding the associated saddle points. With this new algorithm, we demonstrate the features of the model numerically, including the preservation of image contrasts and object corners, as well as its capability of generating smooth patches of image graphs. The data selection property and the role of the spatial mesh size in the model performance are also discussed.
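To illustrate the saddle-point structure behind augmented Lagrangian methods of this kind, here is a short, self-contained example for a much simpler first-order model, 1D total-variation denoising with a single constraint and multiplier (the mean curvature model of the paper is higher order and uses two multipliers):

import numpy as np

def admm_tv1d(f, lam, rho=1.0, n_iter=200):
    """ADMM / augmented-Lagrangian iteration for 1D total-variation denoising,
         min_u 0.5*||u - f||^2 + lam*||D u||_1  with the splitting d = D u.
       Shown only to illustrate the alternation of primal subproblems with a
       Lagrange multiplier update toward a saddle point."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)                   # forward-difference matrix, (n-1, n)
    u = f.copy()
    d = D @ u
    b = np.zeros(n - 1)                              # Lagrange multiplier (scaled form)
    lhs = np.eye(n) + rho * D.T @ D
    for _ in range(n_iter):
        u = np.linalg.solve(lhs, f + rho * D.T @ (d - b))        # u-subproblem (quadratic)
        v = D @ u + b
        d = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # d-subproblem (shrinkage)
        b = b + D @ u - d                                        # multiplier update
    return u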
Fri, 12/01/2017 - 20:00
We study a variational problem for simultaneous video inpainting and motion estimation.
We consider a functional proposed by Lauze and Nielsen [25] and we study, by means
of the relaxation method of the Calculus of Variations, a slightly modified version of this functional.
The domain of the relaxed functional consists of functions of bounded variation, and
we compute a representation formula for the relaxed functional.
The representation formula shows the role of discontinuities of the various functions involved
in the variational model. The present study clarifies the variational properties of the
functional proposed in [25] for motion compensated video inpainting.
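For orientation only, joint functionals of this type typically couple a regularizer of the image sequence $u$ with an optical-flow-type transport term in the motion field $v$; a schematic example (an assumed generic form, not the exact functional of Lauze and Nielsen [25]) is
$$E(u,v)=\int |\nabla u|\,dx\,dt \;+\; \lambda \int |\partial_t u + v\cdot\nabla u|\,dx\,dt \;+\; \mu \int |\nabla v|\,dx\,dt,$$
where the first term inpaints the sequence, the second penalizes deviations from brightness constancy along the motion, and the third regularizes the motion field.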
Fri, 12/01/2017 - 20:00
This work considers the problem of recovering small electromagnetic
inhomogeneities in a bounded domain $\Omega \subset \mathbb{R}^3$
from a single set of Cauchy data at a fixed frequency. This problem has
been considered by several authors, in particular in
[4]. In this paper, we revisit this work with the
objective of providing another identification method and
establishing stability results from a single set of Cauchy data at a
fixed frequency. Our approach is based on the asymptotic expansion
of the boundary condition derived in [4] and on an
extension of the direct algebraic algorithm proposed in
[1].
Fri, 12/01/2017 - 20:00
This paper develops two accelerated Bregman Operator Splitting (BOS) algorithms with backtracking for solving regularized large-scale linear inverse problems, where the regularization term may not be smooth. The first algorithm improves the convergence rate of BOSVS [5] with respect to the smooth component of the objective function by incorporating Nesterov's multi-step acceleration scheme, under the assumption that the feasible set is bounded. The second algorithm can handle the case where the feasible set is unbounded; moreover, it allows a more aggressive stepsize than the first scheme by properly selecting the penalty parameter and jointly updating the acceleration parameter and stepsize. Both algorithms exhibit better practical performance than BOSVS and AADMM [21], while preserving the same accelerated convergence rate as AADMM. Numerical results on total-variation based image reconstruction problems indicate the effectiveness of the proposed algorithms.
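A minimal sketch of the two ingredients named above, Nesterov-type multi-step acceleration and a backtracking line search, in a generic accelerated proximal-gradient setting (this illustrates the mechanism only; it is not the BOSVS/AADMM variable-stepsize scheme of the paper):

import numpy as np

def accelerated_prox_gradient(f, grad_f, prox_g, x0, L0=1.0, eta=2.0, n_iter=100):
    """Nesterov-type accelerated proximal gradient with backtracking on the stepsize,
       for min_x f(x) + g(x) with f smooth and g handled through its proximity operator.
       x0 is a 1-D NumPy array."""
    x = x0.copy(); y = x0.copy(); t = 1.0; L = L0
    for _ in range(n_iter):
        grad = grad_f(y)
        while True:                                   # backtracking line search on L
            x_new = prox_g(y - grad / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= f(y) + grad @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation step
        x, t = x_new, t_new
    return x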
Fri, 12/01/2017 - 20:00
In this paper, we investigate the relations between the Radon and weighted divergent beam and cone transforms. Novel inversion formulas are derived for the latter two. The weighted cone transform arises, for instance, in image reconstruction from the data obtained by Compton cameras, which have promising applications in various fields, including biomedical and homeland security imaging and gamma ray astronomy. The inversion formulas are applicable for a wide variety of detector geometries in any dimension. The results of numerical implementation of some of the formulas in dimensions two and three are also provided.
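For reference, with one common convention (parametrizations and weights vary between papers), the divergent beam transform integrates $f$ along a ray from a vertex $a$ in direction $\theta$, and a weighted cone transform aggregates such ray integrals over the directions of a cone with vertex $a$, axis $\beta$ and half-opening angle $\psi$:
$$\mathcal{D}f(a,\theta)=\int_0^{\infty} f(a+t\theta)\,dt, \qquad \mathcal{C}_w f(a,\beta,\psi)=\int_{\{\theta\in S^{n-1}:\,\theta\cdot\beta=\cos\psi\}} w(a,\theta)\,\mathcal{D}f(a,\theta)\,d\sigma(\theta).$$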
Fri, 12/01/2017 - 20:00
In this paper, we introduce a direct method for inverse scattering problems
in a periodic waveguide from near-field scattered data. The direct scattering problem is to simulate how point sources are scattered by a sound-soft obstacle embedded in the periodic waveguide, and the aim of the inverse problem is to reconstruct the obstacle from the near-field data measured on line segments outside the obstacle. First, we approximate the scattered field by solutions of a series of Dirichlet exterior problems, and then the shape of the obstacle can be deduced directly from the Dirichlet boundary condition. We also show that this approximation procedure is reasonable, as the solutions of the Dirichlet exterior problems are dense in the set of scattered fields. Finally, we give several examples showing that the method works well for different periodic waveguides.
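A rough sketch of the fit-and-test idea behind such direct methods (the dictionary Phi of exterior solutions, the Tikhonov stabilization, and the indicator below are illustrative assumptions, not the authors' exact construction): fit the measured scattered data by a linear combination of known exterior solutions, then look for the set where the sound-soft condition u_inc + u_s = 0 is nearly satisfied.

import numpy as np

def fit_exterior_solutions(Phi_meas, us_meas, reg=1e-8):
    """Regularized least-squares fit of measured scattered data by a dictionary of
       solutions of Dirichlet exterior problems evaluated at the measurement points.
       Phi_meas: (n_meas, n_dict) complex matrix; us_meas: (n_meas,) measured data."""
    A = Phi_meas
    rhs = A.conj().T @ us_meas
    return np.linalg.solve(A.conj().T @ A + reg * np.eye(A.shape[1]), rhs)

def sound_soft_indicator(ui_grid, Phi_grid, coeff):
    """Indicator on a sampling grid: the total field ui + us vanishes on a sound-soft
       boundary, so small values of |ui + Phi*coeff| flag candidate boundary points."""
    return np.abs(ui_grid + Phi_grid @ coeff)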
Fri, 12/01/2017 - 20:00
We extend the applicability of the Generalized Linear Sampling Method (GLSM)
[2] and the Factorization Method (FM) [16] to the case of
inhomogeneities where the contrast changes sign. Both methods give an exact characterization of the target shapes in
terms of the far-field operator (at a fixed frequency) using the
coercivity property of a special solution operator. We prove this property
under the assumption that the contrast has a fixed sign in a neighborhood of the
boundary of the inhomogeneities. We treat both isotropic and anisotropic scatterers, with possibly different supports for the isotropic and anisotropic parts. Finally, we validate the methods through numerical tests in two
dimensions.
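For context, the Factorization Method characterizes the scatterer $D$ through a range test on the far-field operator $F$; in its standard form (see [16] for the precise assumptions, which are not restated here), with $F_\# = |\mathrm{Re}\,F| + |\mathrm{Im}\,F|$ and an eigensystem $(\lambda_j,\psi_j)$ of $F_\#$,
$$z\in D \iff \phi_z\in \mathrm{Range}\big(F_\#^{1/2}\big) \iff \sum_j \frac{|\langle \phi_z,\psi_j\rangle|^2}{\lambda_j} < \infty,$$
where $\phi_z$ denotes the far-field pattern of a point source placed at the sampling point $z$.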
Sun, 10/01/2017 - 20:00
Curvilinear surfaces in 3D Euclidean spaces are commonly represented by triangular meshes. The structure of the triangulation is important, since it affects the accuracy and efficiency of the numerical computation on the mesh. Remeshing refers to the process of transforming an unstructured mesh to one with desirable structures, such as the subdivision connectivity. This is commonly achieved by parameterizing the surface onto a simple parameter domain, on which a structured mesh is built. The 2D structured mesh is then projected onto the surface via the parameterization. Two major tasks are involved. Firstly, an effective algorithm for parameterizing, usually conformally, surface meshes is necessary. However, for a highly irregular mesh with skinny triangles, computing a folding-free conformal parameterization is difficult. The second task is to build a structured mesh on the parameter domain that is adaptive to the area distortion of the parameterization while maintaining good shapes of triangles. This paper presents an algorithm to remesh a highly irregular mesh to a structured one with subdivision connectivity and good triangle quality. We propose an effective algorithm to obtain a conformal parameterization of a highly irregular mesh, using quasi-conformal Teichmüller theories. Conformality distortion of an initial parameterization is adjusted by a quasi-conformal map, resulting in a folding-free conformal parameterization. Next, we propose an algorithm to obtain a regular mesh with subdivision connectivity and good triangle quality on the conformal parameter domain, which is adaptive to the area distortion, through the landmark-matching Teichmüller map. A remeshed surface can then be obtained through the parameterization. Experiments have been carried out to remesh surface meshes representing real 3D geometric objects using the proposed algorithm. Results show the efficacy of the algorithm to optimize the regularity of an irregular triangulation.
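As a small illustration of the quasi-conformal machinery involved, the following sketch computes the per-triangle Beltrami coefficient of a piecewise-linear map, the quantity that measures conformality distortion and that quasi-conformal adjustments of this kind drive toward zero (and, to keep the map folding-free, below one in modulus); the data layout and function name are assumptions for illustration:

import numpy as np

def beltrami_coefficients(verts2d, faces, mapped):
    """Per-triangle Beltrami coefficient mu = f_zbar / f_z of a piecewise-linear map.
       verts2d: (n, 2) source (parameter-domain) vertex coordinates,
       faces:   (m, 3) triangle vertex indices,
       mapped:  (n,) complex target positions f(vertex).
       mu == 0 on a triangle means the map is conformal there; |mu| < 1 everywhere
       means the map is orientation-preserving (folding-free)."""
    mu = np.empty(len(faces), dtype=complex)
    for k, (i, j, l) in enumerate(faces):
        (x1, y1), (x2, y2), (x3, y3) = verts2d[i], verts2d[j], verts2d[l]
        area2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)   # twice the signed area
        # gradients of the linear hat functions on this triangle
        gx = np.array([y2 - y3, y3 - y1, y1 - y2]) / area2
        gy = np.array([x3 - x2, x1 - x3, x2 - x1]) / area2
        f = np.array([mapped[i], mapped[j], mapped[l]])
        fx, fy = f @ gx, f @ gy
        fz    = 0.5 * (fx - 1j * fy)
        fzbar = 0.5 * (fx + 1j * fy)
        mu[k] = fzbar / fz
    return mu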
Sun, 10/01/2017 - 20:00
This article extends the framework of Bayesian inverse problems in infinite-dimensional parameter spaces, as advocated by Stuart (Acta Numer. 19:451--559, 2010) and others, to the case of a heavy-tailed prior measure in the family of stable distributions, such as an infinite-dimensional Cauchy distribution, for which polynomial moments are infinite or undefined.
It is shown that analogues of the Karhunen--Loève expansion for square-integrable random variables can be used to sample such measures on quasi-Banach spaces.
Furthermore, under weaker regularity assumptions than those used to date, the Bayesian posterior measure is shown to depend Lipschitz continuously in the Hellinger metric upon perturbations of the misfit function and observed data.
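A minimal sketch of such series sampling for a heavy-tailed prior (the sine basis, the decay rate of the coefficients and the function names are illustrative assumptions, not the construction used in the paper):

import numpy as np

def sample_cauchy_series(n_terms, grid, decay=2.0, scale=1.0, rng=None):
    """Draw a sample from a heavy-tailed (Cauchy-type) prior on functions via a
       Karhunen-Loeve-style series  u(x) = sum_k gamma_k * xi_k * phi_k(x),
       with xi_k i.i.d. standard Cauchy and gamma_k proportional to k^{-decay}."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_cauchy(n_terms)
    u = np.zeros_like(grid, dtype=float)
    for k in range(1, n_terms + 1):
        gamma_k = scale * k ** (-decay)
        u += gamma_k * xi[k - 1] * np.sin(np.pi * k * grid)
    return u

# example: one draw on [0, 1]
# u = sample_cauchy_series(200, np.linspace(0.0, 1.0, 512))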
Sun, 10/01/2017 - 20:00
In this work, we consider a fractional diffusion equation (FDE)
$${}^{C}\!D^{\alpha}_{t} u(x,t)-a(t)\mathcal{L}u(x,t)=F(x,t)$$ with a time-dependent diffusion
coefficient $a(t)$. This is an extension of [13],
which deals with this FDE in one-dimensional space.
For the direct problem, given $a(t)$,
we establish the existence, uniqueness and some regularity properties of the solution
for a more general domain $\Omega$ and right-hand side $F(x,t)$.
For the inverse problem of recovering $a(t)$,
we introduce an operator $K$ one of whose fixed points is $a(t)$,
and show its monotonicity and the uniqueness and
existence of its fixed points. With these properties, a reconstruction
algorithm for $a(t)$ is constructed, and some numerical results are provided
to illustrate the theory.
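The reconstruction algorithm described here is, at its core, a fixed-point iteration on $a(t)$; a minimal skeleton (with the operator $K$ left abstract, since its construction is problem-specific and not reproduced here) might look like:

import numpy as np

def fixed_point_reconstruction(K, a0, tol=1e-8, max_iter=100):
    """Skeleton of a fixed-point iteration a_{n+1} = K(a_n) for recovering a
       time-dependent coefficient a(t), stored as a NumPy array of time samples.
       K is a callable whose fixed point is the sought coefficient."""
    a = np.asarray(a0, dtype=float).copy()
    for _ in range(max_iter):
        a_new = K(a)
        if np.max(np.abs(a_new - a)) < tol:   # stop when the update stagnates
            return a_new
        a = a_new
    return a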
Sun, 10/01/2017 - 20:00
We propose a direct imaging method, based on reverse time migration, for finding extended obstacles from phaseless total field data in the half space.
We prove that the imaging resolution of the method is essentially the same as that obtained from
scattering data with full phase information, provided the obstacle is far away from the surface of the half space where the
measurements are taken. Numerical experiments are included to illustrate the
strong imaging quality of the method.
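Schematically, frequency-domain reverse time migration imaging functionals back-propagate the recorded data with the conjugated background Green's function $G$ and correlate with the incident field; one common form (signs and conjugations vary with convention, and this is not necessarily the exact functional analyzed here) is
$$I(z) = \mathrm{Im}\int_{\Gamma_r}\int_{\Gamma_s} \overline{G(x_r,z)}\;\overline{G(x_s,z)}\;u^{s}(x_r;x_s)\,ds(x_s)\,ds(x_r),$$
with $x_s$ the source points, $x_r$ the receiver points and $u^{s}$ the scattering data.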
Sun, 10/01/2017 - 20:00
Surface denoising is a fundamental problem in geometry processing and computer graphics. In this paper, we propose a wavelet frame based variational model to restore surfaces that are corrupted by mixed Gaussian and impulse noise, under the assumption that the region corrupted by impulse noise is unknown. The model contains a universal $\ell_1 + \ell_2$ fidelity term and an $\ell_1$-regularized term that makes additional use of the wavelet frame transform on surfaces in order to preserve key features such as sharp edges and corners. We then apply the augmented Lagrangian and accelerated proximal gradient methods to solve this model. Finally, we demonstrate the efficacy of our approach with numerical experiments both on surfaces and on functions defined on surfaces. The experimental results show that our method is competitive with some existing denoising methods.
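For illustration, a mixed $\ell_1+\ell_2$ fidelity of the kind mentioned above admits a simple closed-form proximity operator, which is the basic building block that augmented Lagrangian and accelerated proximal gradient solvers evaluate repeatedly (the parameter names below are illustrative, not the paper's notation):

import numpy as np

def soft(v, t):
    """Componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l1_l2_fidelity(v, f, alpha, beta, step):
    """Proximity operator of  step * ( alpha*||x - f||_1 + (beta/2)*||x - f||_2^2 ),
       a mixed l1 + l2 fidelity of the type used against impulse + Gaussian noise.
       Closed form: shift to y = x - f, shrink, then rescale."""
    return f + soft(v - f, alpha * step) / (1.0 + beta * step)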
Sun, 10/01/2017 - 20:00
This paper examines issues of data completion and location uncertainty, common to many practical PDE-based inverse problems, in the context of option calibration via recovery of local volatility surfaces. While real data is usually more accessible for this application than for many others, the data is often given only at a restricted set of locations. We show that attempts to “complete missing data” by approximation or interpolation, proposed and applied in the literature, may produce results that are inferior to treating the data as scarce. Furthermore, model uncertainties may arise that translate into uncertainty in data locations, and we show how a model-based adjustment of the asset price may prove advantageous in such situations.
We further compare a carefully calibrated Tikhonov-type regularization approach against a similarly adapted ensemble Kalman filter (EnKF) method, in an attempt to fine-tune the data assimilation process. The EnKF method offers reassurance as a different method for assessing the solution in a problem where information about the true solution is difficult to come by. However, the additional advantage of the latter approach turns out to be limited in our context.
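As a point of reference, one analysis step of a standard perturbed-observation ensemble Kalman filter looks as follows (a textbook update, not the specific adaptation calibrated in the paper; the forward map H, the observation covariance and the array shapes are illustrative assumptions):

import numpy as np

def enkf_analysis(ensemble, obs, H, obs_cov, rng=None):
    """One EnKF analysis step with perturbed observations.
       ensemble: (n_members, n_params) prior ensemble of parameter vectors,
       obs:      (n_obs,) observed data,
       H:        callable mapping a parameter vector to predicted data,
       obs_cov:  (n_obs, n_obs) observation error covariance."""
    rng = np.random.default_rng() if rng is None else rng
    X = ensemble
    Y = np.array([H(x) for x in X])                    # predicted observations
    Xm, Ym = X.mean(axis=0), Y.mean(axis=0)
    A, B = X - Xm, Y - Ym
    n = X.shape[0]
    C_xy = A.T @ B / (n - 1)                           # parameter-observation cross-covariance
    C_yy = B.T @ B / (n - 1) + obs_cov                 # innovation covariance
    K = C_xy @ np.linalg.inv(C_yy)                     # Kalman gain
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, size=n)
    return X + (perturbed - Y) @ K.T                   # updated ensemble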