Research statement - UCLA Department of Mathematics

Research statement - UCLA Department of Mathematics

Research statement - UCLA Department of Mathematics

SHOW MORE
SHOW LESS

Create successful ePaper yourself

Turn your PDF publications into a flip-book with our unique Google optimized e-Paper software.

Research in related fields, such as compressed sensing and optimization theory, has yielded very efficient optimization algorithms, which can be applied to the weighted Beltrami framework as well. We postulate that the weighted Beltrami framework represents an important step towards a unifying variational framework for geometric image processing, with a high degree of generality and a multitude of beneficial properties. We therefore investigate whether and how other inverse problems in computer vision, image processing, and related domains can be generalized by the weighted Beltrami framework, and we develop robust and fast numerical schemes to optimize them.
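For intuition, the (unweighted) grayscale Beltrami energy is simply the area of the image graph (x, y, βI); the following is a minimal NumPy sketch of that standard functional — the weighting and the exact discretization of the framework above are not reproduced here:

```python
import numpy as np

def beltrami_energy_density(img, beta=1.0):
    """Area element sqrt(det g) of the graph embedding (x, y, beta*I).

    For a grayscale image the induced metric is
    g = I_2 + beta^2 * grad(I) grad(I)^T, so det(g) = 1 + beta^2 |grad I|^2.
    """
    gy, gx = np.gradient(img)
    return np.sqrt(1.0 + beta**2 * (gx**2 + gy**2))

def beltrami_energy(img, beta=1.0):
    # Total surface area of the image graph; minimizing it yields
    # edge-preserving (Beltrami) smoothing.
    return float(beltrami_energy_density(img, beta).sum())
```

A flat image has density 1 everywhere, so its energy equals the pixel count; edges inflate the area element, which is what the regularizer penalizes.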

In addition, I have developed two different generalizations of the Beltrami energy that make the functional applicable to more interesting and more powerful regularization problems.

Beltrami diffusion in the space of patches

First, I am working on weighted Beltrami regularization in the space of patches. Recently, the use of patches has gained significantly in importance in various image processing applications. Indeed, the intensity or color information contained in a single pixel is often not sufficient to completely characterize that pixel. Neighborhood information is required in order to better differentiate between textural features and noise, and diffusion on the space of patches was proposed, most notably by Tschumperle. Patch-based embeddings of the “Beltrami kind” were proposed as texture-aware edge-indicators and for denoising. Only very recently was a computationally more attractive minimization scheme proposed.
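The role of neighborhood information can be illustrated with a toy patch distance; this is a minimal sketch, where the patch size and edge padding are illustrative choices, not the embedding used above:

```python
import numpy as np

def extract_patch(img, i, j, r=1):
    """Return the (2r+1) x (2r+1) patch around pixel (i, j), edge-padded."""
    padded = np.pad(img, r, mode="edge")
    return padded[i:i + 2 * r + 1, j:j + 2 * r + 1]

def patch_distance(img, p, q, r=1):
    """Squared L2 distance between the patches around pixels p and q.

    Comparing neighborhoods rather than single intensities is what lets
    patch-space methods tell textured structure from noise.
    """
    d = extract_patch(img, *p, r) - extract_patch(img, *q, r)
    return float(np.sum(d**2))
```

Two pixels can share the same intensity yet have a large patch distance, which is exactly the extra discriminative power a single-pixel comparison lacks.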

Here, I propose a novel model for image restoration based on anisotropic diffusion on the space of patches, using the Beltrami embedding. We derive a local multiplicative coupling from the standard additive scheme and show how this automatically introduces an edge-aware preconditioner for the diffusion. We propose a splitting scheme that naturally handles the patch overlap and the different non-linearities in an elegant and efficient way.

Graph-based and non-local Beltrami energy

Beyond patch diffusion, (patch-based) non-local regularization currently produces very promising results. For example, the current state of the art in denoising is achieved by sparsification in a patch-group transform domain (BM3D). I am currently working on rendering the Beltrami energy fully non-local by extending its definition to non-local operators as introduced, e.g., by Osher and Gilboa. This extension makes the benefits of Beltrami regularization, such as the intrinsic inter-channel coupling in vectorial or color images, readily available for data defined on graphs. This applies to non-local regularization, where the graph edge weights are defined by patch distances, but we equally see important applications in color-image processing, where node distances are defined by various color distances instead. Moreover, the anisotropic, inherently multichannel Beltrami regularization thereby becomes available for any graph-based inverse problem, such as clustering or segmentation, with immediate applications in machine learning.
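The graph construction underlying such non-local extensions can be sketched with Gaussian edge weights over per-node feature descriptors (flattened patches, or color triples for the color-distance graphs mentioned above); the non-local gradient follows the Gilboa–Osher definition, while the choice of σ and the dense weight matrix are illustrative simplifications:

```python
import numpy as np

def nonlocal_weights(features, sigma=1.0):
    """Dense non-local weight matrix w_ij = exp(-||f_i - f_j||^2 / sigma^2).

    `features` holds one descriptor per graph node, e.g. a flattened
    patch or an RGB triple; similar nodes get weights near 1.
    """
    f = np.asarray(features, dtype=float)
    d2 = np.sum((f[:, None, :] - f[None, :, :])**2, axis=-1)
    return np.exp(-d2 / sigma**2)

def graph_gradient(u, w):
    """Non-local gradient (grad_w u)_ij = (u_j - u_i) * sqrt(w_ij)."""
    u = np.asarray(u, dtype=float)
    return (u[None, :] - u[:, None]) * np.sqrt(w)
```

Since w is symmetric, the non-local gradient is antisymmetric, which is what makes the corresponding non-local divergence its negative adjoint in this discrete calculus.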

Non-local Retinex

Retinex is a theory of human visual perception introduced in 1971 by Edwin Land. It was an attempt to explain how the human visual system, as a combination of processes in both the retina and the cortex, is capable of adaptively coping with illumination that varies spatially in both intensity and color. In image processing, the retinex theory has been implemented in many different flavors, each adapted to specific tasks, including color balancing, contrast enhancement, dynamic range compression and shadow removal in consumer electronics and imaging, bias field correction in medical imaging, and even illumination normalization, e.g., for face detection.

In this project, I develop a unifying framework for retinex that is able to reproduce many of the existing retinex implementations within a single model, including all gradient-fidelity based models, variational models, and kernel-based models. The fundamental assumption, as shared with many
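As a point of reference, most retinex flavors start from the multiplicative image-formation model I = R · L, i.e. log I = log R + log L. The following minimal single-scale sketch estimates the illumination with a crude box blur — an illustrative stand-in for the smoothing kernels of the kernel-based models above, not the unifying framework itself:

```python
import numpy as np

def box_blur(img, r=2):
    """Simple (2r+1) x (2r+1) mean filter, a crude smoothness prior for L."""
    p = np.pad(img, r, mode="edge")
    k = 2 * r + 1
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / k**2

def single_scale_retinex(img, r=2, eps=1e-6):
    """Log-domain decomposition log I = log L + log R.

    Illumination L is estimated as a smoothed version of the image;
    the reflectance is the log-residual.
    """
    log_i = np.log(img + eps)
    log_l = np.log(box_blur(img, r) + eps)   # smooth illumination estimate
    return log_i - log_l                     # log-reflectance
```

On a constant image the illumination estimate equals the image, so the recovered log-reflectance vanishes, as the multiplicative model requires.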

