MPhil Thesis (pdf) - Image Processing and Analysis Group
Level Set Implementations on Unstructured Point Cloud

by

HO, Hon Pong

A Thesis Submitted to The Hong Kong University of Science and Technology in Partial Fulfillment of the Requirements for the Degree of Master of Philosophy in Electrical and Electronic Engineering

Copyright © by HO, Hon Pong (2004)
ALL RIGHTS RESERVED.

27 August, 2004
Authorization

I hereby declare that I am the sole author of this thesis.

I authorize the Hong Kong University of Science and Technology to lend this thesis to other institutions or individuals for the purpose of scholarly research.

I further authorize the Hong Kong University of Science and Technology to reproduce this thesis by photocopying or by other means, in total or in part, at the request of other institutions or individuals for the purpose of scholarly research.

HO, Hon Pong
Level Set Implementations on Unstructured Point Cloud

by

HO, Hon Pong

This is to certify that I have examined the above MPhil thesis and have found that it is complete and satisfactory in all respects, and that any and all revisions required by the thesis examination committee have been made.

Prof. Pengcheng SHI, Thesis Supervisor
Prof. Oscar C.L. AU, Thesis Examination Chairperson
Prof. Chi-Keung TANG, Thesis Examination Examiner
Prof. Khaled Ben LETAIEF, Head of the Department

Department of Electrical and Electronic Engineering

27 August, 2004
Acknowledgements

I want to express my deep thanks to my supervisor, Dr. Pengcheng Shi, for his guidance over the past years. I learnt great lessons from his kind efforts on every paper we wrote, and from his fruitful advice and comments on matters ranging from research to non-academic issues. I thank him again for his patience with my many mistakes. I also thank Dr. Oscar C. Au and Dr. Chi-Keung Tang for serving on the thesis examination committee.

Special thanks go to Prof. Yunmei Chen of the University of Florida for her clear analysis and precious suggestions about this thesis idea during her visit to our research group.

I must thank my family and Janet for their love and care at all times. Their support is crucial and invaluable to me.
Contents

Acknowledgements

List of Figures

List of Tables

1 Introduction
  1.1 Background
  1.2 Motivation
    1.2.1 The need of computational grid
    1.2.2 The need of grid refinement
    1.2.3 A grid-less approach
  1.3 Application to image segmentation
  1.4 Contribution
  1.5 Outline

2 Review on Level Set Method and its domain representation
  2.1 Level Set Method
  2.2 Level set domain representation schemes
    2.2.1 Sampling problem of a moving interface
    2.2.2 Regular level set grid
    2.2.3 Refined level set grid
    2.2.4 Moving level set grid
    2.2.5 Level set domain triangulation
    2.2.6 Adaptive level set domain triangulation

3 Level set implementations on unstructured point cloud
  3.1 Level set initialization
    3.1.1 Narrow band of level set nodes
    3.1.2 Signed distance function
  3.2 Domain interpolation
    3.2.1 Influence domain
    3.2.2 Moving Least Square Approximation
    3.2.3 Generalized Finite Difference: An alternative to MLS
  3.3 Evolution process and level set re-initialization
    3.3.1 Evolution process
    3.3.2 Level set re-initialization

4 Application to image segmentation
  4.1 Deformable model
    4.1.1 FEM contour mesh
    4.1.2 Irregular FEM contour mesh
  4.2 Level set based Geometric Deformable Model (GDM)
    4.2.1 GDM segmentation process
    4.2.2 Level set velocity field for GDM segmentation
    4.2.3 Gradient Vector Flow
  4.3 Adaptive unstructured sampling of computational domain
    4.3.1 Sampling node distribution
    4.3.2 Feature-adaptive node distribution algorithm

5 Experimental results and conclusion
  5.1 General GDM segmentation results in 2D
  5.2 Segmentation in 3D
  5.3 Conclusion
  5.4 Future work

A Topology-preserving GDM through domain partitioning
  A.1 Introduction
  A.2 Problem definition
  A.3 Domain representation
  A.4 Domain partitioning
  A.5 Movable partitioning
  A.6 Separation enforcement
  A.7 TPLSS results
  A.8 Summary of the algorithm
List of Figures

2.1 Explicit representation for the moving front
2.2 Various types of level set domain representation schemes
3.1 Initialization for the point-based level set evolution
3.2 Influence domain, cubic spline function and interpolated surface
3.3 Flowchart of general level set evolution process
3.4 An illustration of a complete evolution process for embedded level set front
4.1 Flowchart of the meshfree GDM segmentation process
4.2 Data inputs for GDM segmentation: intensity, gradient magnitude and Gradient Vector Flow (GVF)
4.3 Process of recursive gradient-adaptive node distribution
5.1 Segmentation of the endocardium and epicardium from cardiac magnetic resonance image
5.2 Segmentation of the brain ventricles from noisy BrainWeb image
5.3 Examples of point-based GDM evolution
5.4 Comparison of segmentation results among different implementation strategies
5.5 An illustration of the relationship between node distribution and the precision of implicit contour
5.6 Meshfree GDM segmentation of a synthetic cube
5.7 Meshfree GDM segmentation on BrainWeb image
5.8 Solution refinement using unstructured point cloud
5.9 Enlarged and overlay views of segmentations from BrainWeb image
5.10 Narrow band nodes and adaptive influence domains
A.1 Brain image and the closeup view of the ventricles
A.2 Illustration of the saddle map detecting the crossing of intensity derivative
A.3 Comparison between separation-unconstrained and separation-constrained segmentation on synthetic and real images
A.4 Comparison between separation-unconstrained and separation-constrained GDM evolution processes
List of Tables

5.1 Error statistics comparison between point-based and regular grid level set implementations
Level Set Implementations on Unstructured Point Cloud

HO, Hon Pong

Thesis for the Degree of Master of Philosophy
Department of Electrical and Electronic Engineering
The Hong Kong University of Science and Technology

ABSTRACT

We present a novel level set representation and Geometric Deformable Model (GDM) evolution scheme in which the analysis domain is sampled by an unstructured cloud of points. The points are adaptively distributed according to both local image information and level set geometry, which allows extremely convenient enhancement or reduction of local curve precision by simply placing more or fewer points on the computation domain, without grid refinement (as in finite difference schemes) or remeshing (typical in finite element schemes). The GDM evolution process is then conducted on the point-sampled analysis domain, without the use of a computational grid or mesh, through either the precise but expensive moving least squares (MLS) approximation of the continuous domain and calculations, or the faster yet coarser generalized finite difference (GFD) calculations. Because of the adaptive nature of the sampling point density, our strategy performs fast marching and level set local refinement concurrently. The performance of our effort is evaluated and analyzed using synthetic and real images.
Chapter 1

Introduction

1.1 Background

The Level Set Method [27, 28] is a numerical device for solving interface motion under a velocity field. Many things can be modelled as interfaces, such as water fronts, electrical potentials or physical object boundaries. The method is very useful because it can successfully capture and evaluate the motion of matter in a wide range of application and research areas.

One key issue in level set implementation is the tradeoff between computational efficiency and accuracy of the solution (e.g. the interface being modelled), and several algorithmic improvements such as the fast marching scheme [1] and the narrowband search [24] have provided some relief. However, since most level set implementations rely on finite difference computational schemes, the resolution of the final results fundamentally depends on the computational grid used to solve the discretized level set partial differential equations (PDEs). While a highly refined grid allows high accuracy, its computational cost may be prohibitively expensive for practical applications such as image segmentation. On the other hand, a coarse grid allows fast computation at the expense of low fidelity to the original continuous solution.
Since the purpose of domain representation is to offer an appropriate embedding for the various calculations needed by the interface (or front) representation and evolution, one is obviously not limited to the specific pre-structured rectangular sampling of finite difference computations. The finite element mesh has emerged as an alternative scheme for level set algorithms by offering a piecewise continuous approximation of the domain on which the level set computations are performed [2, 34].
1.2 Motivation

1.2.1 The need of computational grid

Under sampling theory, the need for more samples to represent a high frequency signal is obvious, and 2D curve sampling follows the same rule. In computer graphics, lines are created between samples to visualize a curve at low cost; this is just a first order interpolation of the curve signal. For a 3D surface, we have a mesh instead. If shading is not required, we can simply use a huge number of samples to replace the 2D lines or 3D mesh, if resources are available. In the dynamic models of moving interfaces discussed here, tangents, normals and their higher derivatives are needed for measuring geometric properties, such as the bending energy of the FEM-Snake or the |∇φ| values in the Level Set Method; this is similar to the use of normals in many shading and illumination models. But the normal and derivatives at a particular point on the moving interface vary over time. In a 2D implementation, computing the gradient of a 2D scalar function requires, for example, one sample to the right of and one sample above the point. Left, right, above and below are just positional relationships among samples. Mesh or grid formation for the scalar function is in fact a default establishment of such relationships, so that local neighboring samples can be consulted immediately when computing derivatives, and also integrations, at a particular point.
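The neighbour bookkeeping described above can be made concrete with a small sketch. This is a generic central-difference illustration on a nested-list grid, not code from the thesis; the function name and layout are illustrative choices.

```python
# Central finite differences on a regular grid: the grid structure itself
# tells us which samples are "left/right/above/below" a point, so the
# gradient at interior points is available immediately from the indices.
def gradient_2d(phi, h=1.0):
    """Approximate (d(phi)/dx, d(phi)/dy) at interior points of a 2D grid
    phi[j][i] with spacing h, using central differences."""
    ny, nx = len(phi), len(phi[0])
    gx = [[0.0] * nx for _ in range(ny)]
    gy = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            gx[j][i] = (phi[j][i + 1] - phi[j][i - 1]) / (2.0 * h)
            gy[j][i] = (phi[j + 1][i] - phi[j - 1][i]) / (2.0 * h)
    return gx, gy
```

For the linear field φ(x, y) = x + 2y this recovers the exact gradient (1, 2) at every interior point, since central differences are exact for polynomials up to degree two.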
1.2.2 The need of grid refinement

Once the mesh or grid has been established, we have divided our interface model into line or surface pieces, and we assume the structure between vertices is well-defined geometry such as straight lines, quadratic curves, flat planes or quadratic surfaces. Indeed, the idea of domain subdivision is critical for simplifying many dynamic problems concerning complex rigid bodies, where a node on the mesh corresponds to a particular sample of the body. But this assumption is valid only if the body does not deform severely during its motion. Unfortunately, our evolving interface is non-rigid in nature, and we know neither its final length nor its shape in advance. Therefore, many refinement strategies for the contour vertices have been derived to overcome the limitation of using a default mesh or grid for computation.

In the Level Set Method, an n-D scalar function φ is used to embed one or more (n − 1)-D surfaces as the zero set {φ = 0} of the φ function. In order to get higher precision at certain locations, we prefer more samples there. But we also need to compute the gradient of φ, i.e. ∇φ, to implement the PDE of the level set evolution. In other words, the refinement method for φ is intuitively constrained by the arithmetic of the various PDE formulations, which usually involve finite differences among values of grid-defined neighboring points. So refining the φ grid seems a straightforward way to increase the sampling rate of φ while keeping ∇φ tractable.
1.2.3 A grid-less approach

It is clear that if the arithmetic of ∇φ is based on arbitrary neighboring points rather than grid-defined ones, then we can increase the sampling rate of φ freely, without a predefined pattern. This can be done not by taking finite differences between φ samples, but by first approximating the continuous φ function from its samples through polynomial fitting, and then taking the derivative of the approximated polynomial surface to obtain ∇φ. A long-established mathematical tool known as Moving Least Squares [17, 18, 5] exactly matches our needs for data fitting. With a continuous φ approximation, we can also extract the continuous contour {φ = 0} on the 2D image by marching precisely on the continuous φ surface, so that the contour precision depends solely on the marching step size, which is entirely up to our design.
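The fit-then-differentiate idea can be sketched as below. This is a toy moving-least-squares gradient estimate at a single evaluation point, with a linear basis and a Gaussian radial weight; the function name, weight choice and parameters are illustrative assumptions, not the thesis implementation.

```python
import math

def mls_gradient(points, values, x0, radius=1.5):
    """Estimate grad(phi) at x0 from scattered samples by one MLS step:
    fit phi ~ a + b*(x - x0) + c*(y - y0) by weighted least squares with a
    Gaussian weight, then read the gradient off the linear coefficients."""
    # Assemble the weighted normal equations M * [a, b, c]^T = r
    M = [[0.0] * 3 for _ in range(3)]
    r = [0.0] * 3
    for (x, y), v in zip(points, values):
        dx, dy = x - x0[0], y - x0[1]
        w = math.exp(-(dx * dx + dy * dy) / (radius * radius))
        basis = (1.0, dx, dy)
        for i in range(3):
            r[i] += w * basis[i] * v
            for j in range(3):
                M[i][j] += w * basis[i] * basis[j]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting
    for col in range(3):
        p = max(range(col, 3), key=lambda k: abs(M[k][col]))
        M[col], M[p] = M[p], M[col]
        r[col], r[p] = r[p], r[col]
        for k in range(col + 1, 3):
            f = M[k][col] / M[col][col]
            for j in range(col, 3):
                M[k][j] -= f * M[col][j]
            r[k] -= f * r[col]
    coef = [0.0] * 3
    for i in (2, 1, 0):
        coef[i] = (r[i] - sum(M[i][j] * coef[j] for j in range(i + 1, 3))) / M[i][i]
    return coef[1], coef[2]
```

Because the basis is linear, any exactly linear φ is reproduced regardless of the weights, so the estimated gradient is exact for linear fields; in the full method a moving (point-dependent) fit is performed at every evaluation location.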
1.3 Application to image segmentation

Segmentation is one of the most fundamental processes in computer vision. Edges in an image can be very useful information for identifying objects. In digital images, edges can be regarded as pixels with high intensity gradient, and the gradient can be obtained through various basic image processing techniques [11]. One generic description of an object in an image is its outer boundary, which can be viewed as a continuous closed contour on the image. Looking for the correct closed contour is not trivial at all, as edges are likely to be broken when noise exists or the object is partly occluded. Despite broken edges, it is still possible to define generic ways of connecting the edges, or high gradient areas, into a closed contour. Assume that we have no particular knowledge of the shape of our target object; we only know that our final result should be a closed curve completely surrounding an object, located mainly along high gradient areas even where they are inconsistent.
Active deformable models have been among the most successful strategies for object representation and segmentation [16, 24]. While in general the parametric deformable models (PDMs) have explicit representations as parameterized curves in a Lagrangian formulation [10, 16, 25, 35], the geometric deformable models (GDMs) are implicitly represented as level sets of higher-dimensional scalar level set functions and evolve in an Eulerian fashion [7, 24, 28]. The geometric nature of the GDMs gives them several often desirable properties, namely independence from curve parametrization, easy computation of curvatures, and natural compatibility with curve topological changes.
1.4 Contribution

In the Lagrangian formulation of PDMs and object motion tracking, meshfree particle methods have been introduced as more efficient, and possibly more effective, object representation and computation alternatives because of their trivial h-p adaptivity [21]. Following the same spirit, it is easy to see that if level set computations can be based on a meshfree particle representation of the analysis domain, we can almost freely increase or decrease the sampling rate of the domain without much extra effort. This can be achieved through point-wise continuous approximation of the domain and the level set functions by polynomial fitting over the local point cloud.

Hence, we introduce a novel level set representation and evolution scheme in which the analysis domain is sampled by an adaptively distributed cloud of points. The curve evolution process is performed on the point-sampled analysis domain through either Moving Least Squares (MLS) approximation or generalized finite difference (GFD) schemes. Because of the h-p adaptive nature of the sampling points, our strategy naturally possesses a multiscale property and performs fast marching and local refinement concurrently.
1.5 Outline

The focus of this thesis is on enhancing general level set implementations with higher accuracy and flexibility. There are many level set motion governing equations, involving different utilizations of the available data or information. Most of their implementations are built upon grid-based finite difference schemes. We are not going to add something on top of those ideas; instead, we offer a different computation environment behind their implementations. This new environment can be applied to most of the currently available level set formulations with minor modification. We will therefore explain and illustrate the key components of this new environment in the first two chapters, and show how the new strategy can be used to implement the level set based GDM for segmentation.

In chapter 2, the Level Set Method for modelling general front propagation is introduced, as it is the foundation of our numerical strategy. In particular, chapter 2.2 reviews different types of level set representation schemes for the moving interface/front. The representation scheme has a huge impact on the speed and accuracy of the interface evolution process. The review gives a broad discussion of the pros and cons of representation schemes, from simple to sophisticated, in terms of speed and accuracy. More importantly, it shows the trend of improvement in representations, which inspired our move towards a grid-less / meshfree representation.

We then move on to the details of the new environment in chapter 3. Chapter 3.2 discusses how to obtain derivatives on unstructured samples using moving least squares (MLS) approximation or the generalized finite difference (GFD) scheme. We state the changes related to level set re-initialization in chapter 3.3. The level set literature usually disregards the re-initialization procedures because it assumes the zero set of a sampled function can be extracted easily [23]. Although this is true for grid-based implementations, our grid-less approach requires more effort on this issue.

To demonstrate the feasibility of the new computational environment, chapter 4 systematically combines all the procedures needed for GDM segmentation. The corresponding GDM formulations, especially the force term in the formulation, can be found in chapter 4.2.2. Chapter 4.3 describes how we adaptively re-sample an image according to selected image features.

Experimental segmentation results on synthetic and real, 2D and 3D images are presented in chapter 5. The conclusion and future work also appear in the last chapter of this thesis.

Appendix A is an example of how the new computational environment gives a simplified solution to an old problem. The problem is related to a side effect of GDM topological changeability [14], and our simple solution takes advantage of the infinite resolution capability of the grid-less approach.
Chapter 2

Review on Level Set Method and its domain representation

2.1 Level Set Method

The basic idea of the Level Set Method is to implicitly represent the moving front Γ, propagating along its normal direction, by the zero level set of a higher dimensional hypersurface φ (the level set function) [1, 24, 30, 26]:

    Γ = {x | φ(x, t) = 0}    (2.1)

Differentiating the above equation with respect to t, and defining the scalar velocity field F as F = n · x′(t) with n = ∇φ/|∇φ| the normal direction, we have

    ∂φ/∂t = F |∇φ|    (2.2)

where |∇φ| denotes the norm of the gradient of φ.

The front evolution is now driven by Equation (2.2), and we solve for convergence of that dynamic equation by time-domain finite difference:

    φ^(t+∆t) = φ^t + ∆t F |∇φ|    (2.3)

Equation (2.3) is the basic level set formulation for modelling front propagation.
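A minimal sketch of one update by Equation (2.3) on a regular grid follows. It assumes central differences for |∇φ| at interior points, which keeps the illustration short; practical level set codes use upwind differencing for stability. The helper name and grid layout are illustrative, not from the thesis.

```python
def level_set_step(phi, F, dt, h=1.0):
    """One explicit Euler step of phi <- phi + dt * F * |grad phi| on a
    2D grid phi[j][i] with spacing h (F taken as a constant speed here)."""
    ny, nx = len(phi), len(phi[0])
    out = [row[:] for row in phi]          # copy; boundary rows kept as-is
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dx = (phi[j][i + 1] - phi[j][i - 1]) / (2.0 * h)
            dy = (phi[j + 1][i] - phi[j - 1][i]) / (2.0 * h)
            out[j][i] = phi[j][i] + dt * F * (dx * dx + dy * dy) ** 0.5
    return out
```

Starting from a signed distance function (where |∇φ| = 1), each step with constant F simply shifts φ by dt·F, i.e. the zero set moves outward at unit speed when F = 1, matching the expanding-circle behaviour discussed next.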
In fact, there are many motion governing formulations for the level set implicit front Γ. Different application-dependent properties can be achieved with a specific F term in Equation (2.3); for example, if F = 1 at all times and the initial Γ is a circle, then the subsequent Γ will be a sequence of expanding concentric circles. Apart from the F term, when modelling a real interface in 3D, the gradient of φ can be obtained through:

    ∇φ = [∂φ/∂x, ∂φ/∂y, ∂φ/∂z]    (2.4)

    |∇φ| = √((∂φ/∂x)² + (∂φ/∂y)² + (∂φ/∂z)²)    (2.5)

Our concern here is not to obtain a tailor-made F term for a particular application; rather, we want to address the underlying sampling issue common to all existing level set based solutions, and suggest a simple way to improve their quality with minor effort.
2.2 Level set domain representation schemes

2.2.1 Sampling problem of a moving interface

Figure 2.1: Explicit representation of the moving front. Left: regular sampling rate (vertices) along the continuous front. Right: irregular sampling rate with more samples at high curvature locations.
When implementing any moving interface on a computer, we need a proper way to sample the interface and a data structure to represent its status during motion. Before the proposal of implicit representation, an intuitive way to represent the interfaces under analysis was to use a finite number of samples (the vertices in Figure 2.1) taken exactly on the ideal continuous front. Relations between neighboring vertices are also kept, in the form of a mesh.

This configuration enables us to approximate the behavior of the actual front using discrete values which can be processed by computers. From a mathematical perspective, this is just a parametrization of a continuous 2D curve C(x, y) = 0 into two scalar functions {Sx(s), Sy(s)}, where s ∈ [0, 1]. In the discrete case, the parameter s is further restricted to a finite set of values.
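The discrete parametrization can be sketched in a few lines. The unit circle chosen as the front, and the helper name, are illustrative assumptions only.

```python
import math

def sample_front(n):
    """Discrete explicit front: restrict the parameter s of (Sx(s), Sy(s))
    to n values. Here Sx(s) = cos(2*pi*s), Sy(s) = sin(2*pi*s), i.e. the
    front is a unit circle."""
    verts = []
    for m in range(n):
        s = m / n                       # n discrete parameter values in [0, 1)
        verts.append((math.cos(2 * math.pi * s), math.sin(2 * math.pi * s)))
    return verts
```

The mesh then records that vertex m is connected to vertex (m + 1) mod n; refining the front means inserting more s values, which is exactly the re-sampling burden discussed below.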
One reason the level set representation is more flexible is that we can embed multiple interfaces in one φ function without increasing the complexity of φ. But without the parameter s and the vertices mentioned before, the representable shape variety is still limited by another factor: how accurate can the φ function be? This would not be a problem if we could have a continuous φ function. But in practice we very often use a sampled φ, with the zero-crossings of φ as the front vertices, see Fig. 2.2(a). So the sampling rate and sampling pattern of the φ function command the accuracy of the contour representation. The sampling rate refers to how many samples are used, and the sampling pattern to where the samples are taken.
2.2.2 Regular level set grid

If we are using the Level Set Method for image processing, a trivial way to perform sampling in 2D is to make a sample of the φ function at every image pixel. This is equivalent to saying that the sampling is based on a rectangular grid identical to the source image. But the image grid and the level set grid (the φ grid) need not be the same; making them the same just implies a full coverage of the possible contour positions on the image, i.e. the solution domain. The immediate drawback is that
Figure 2.2: Various types of level set domain representation schemes, see text for details: (a) regular level set grid, (b) refined level set grid, (c) moving level set grid, (d) level set domain triangulation, (e) refined level set domain triangulation, (f) adaptive level set domain triangulation (with supporting nodes), (g) meshfree representation.
we need more computations to obtain the derivatives of φ. Since we are moving an implicit curve, a speed-up can be obtained by making the φ function available only around the moving curve. The area of valid φ is defined as the narrow band of the level set function, and it moves together with the curve. Note that the image grid and the level set grid are still the same within the narrow band.
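Collecting the narrow band is straightforward when φ approximates a signed distance function, since |φ| then measures the distance to the front. The sketch below is a generic illustration; the name and band criterion are assumptions, not the thesis code.

```python
def narrow_band(phi, width):
    """Return the grid points (i, j) kept in the narrow band: samples whose
    |phi| lies within `width` of the zero level set. Assumes phi is an
    approximate signed distance function, so |phi| ~ distance to the front."""
    band = []
    for j, row in enumerate(phi):
        for i, v in enumerate(row):
            if abs(v) <= width:
                band.append((i, j))
    return band
```

After each evolution step the front moves, so the band must be rebuilt (or re-centred) around the new zero set; this is the re-initialization issue revisited in chapter 3.3.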
2.2.3 Refined level set grid

Recall that we are moving interfaces or curves under a known velocity field (see the beginning of this chapter); we can expect those interfaces to have more complicated shapes when the spatial complexity of the field is higher. Further speed-up can therefore be achieved by starting with a coarser level set grid and refining it where the interfaces need higher precision [31]. This is known as adaptive grid refinement, see Fig. 2.2(b). Unfortunately grid refinement, which is very similar to re-sampling the explicit front mesh (Figure 2.1) as mentioned before, carries most of the disadvantages of re-meshing, since the sampling grid is also a mesh in nature. At this point, changing the sampling rate may not be good, but there is room for improvement in the sampling pattern.
2.2.4 Moving level set grid

Without changing the grid density, a recently proposed strategy [12] of moving grid lines towards likely feature locations can locally increase the sampling rate in those regions while keeping the overall sampling rate of the φ function constant, see Fig. 2.2(c). The overhead is an additional mapping between the deformed level set grid and the original input grid: the velocity field F is partly derived from the input data, but the spatial correspondence of grid points between the two grids is distorted. Also, the physical distance between grid points on the deformed grid is needed for calculating the derivatives of φ by finite difference. An efficiency gain can be justified by keeping the overall sampling rate much lower than that of the original input grid, with the local rate similar to the input grid in high gradient areas. But the local sampling rate of the moving grid strategy is still upper bounded by the number of initial grid lines, the extreme situation occurring when all grid lines move towards one point on the image.
2.2.5 Level set domain triangulation
Going back to the original reason for constructing the level set grid: we simply want a proper coverage of the problem domain (e.g. an image) so that various solutions (e.g. contours) can be found from the φ samples on the grid. As it is only a matter of domain coverage, we are not limited to using "squares" to tile up the problem domain. For a 2D problem domain, observe that φ can also be viewed as a surface in 3D, and the simplest surface element in 3D is the triangle, so a triangular mesh can also properly represent the 3D surface rising from the problem domain [9]. Similar to the square-mesh surface, implicit fronts embedded in the triangulated surface can also be moved by changing the z-coordinate of any (x, y, z) vertex of the triangulation. Normals and z coordinates of any point on the triangulation can be interpolated from the vertices of the particular triangle where the point is located. Therefore, a 1D mesh of φ can be recovered by marching along the zero-crossings of consecutive triangles, see Fig. 2.2(d). Similar to the case in Fig. 2.2(b), we can refine the triangulation to enhance the front precision, see Fig. 2.2(e). But refinement must follow some predefined patterns in order to maintain the consistency of the triangulation; otherwise we need to re-triangulate the surface as a whole [32, 34]. Then the precision gain is restricted by the refining patterns, and the speed of attaining higher precision is restricted by the speed of re-triangulation.
2.2.6 Adaptive level set domain triangulation
By using some prior knowledge associated with the velocity field, such as its divergence, we know in advance that smaller triangles are preferred in certain areas [32, 22]. So adaptive pre-triangulation of the level set grid can avoid re-meshing the analysis domain while at the same time giving higher front precision, see Fig. 2.2(f). This idea of using prior knowledge is attractive, and the narrow band speed-up is also applicable to a triangulated domain, but it can cause problems for the evolving fronts when the velocity field induces topological changes. That is, when a self-closed interface needs to split into two at a location covered by large triangles, it is very difficult to represent two fronts in one triangle, as the zero-crossings along the three edges of that triangle may be completely lost. In fact, this is just an aliasing problem in sampling theory, since there are not enough samples (triangles) to reconstruct the signal (fronts). In this thesis, we introduce an even more accurate and flexible implementation strategy whose local resolution adapts to selected measurements or features (such as front curvature and the gradient of the input data) but without upper-bound or re-meshing problems, see Fig. 2.2(g).
Chapter 3

Level set implementations on unstructured point cloud

3.1 Level set initialization

3.1.1 Narrow band of level set nodes
Suppose we have a set of unstructured points x as sampling nodes for the level set function φ on the problem domain. Computing φ^{t+∆t}(xi) in Equation (2.3) for all the nodes could be expensive and also unnecessary, as our focus is the zero level set, i.e. {φ = 0}. Therefore, the narrow band concept [1, 24, 30] was introduced to speed up the computation. In an ordinary grid-based level set implementation, the narrow band can be defined as a fixed number (3-4) of pixels around all zero crossings of φ, so the narrow band in that case has a uniform width. However, we do not assume a fixed node density in our point-based method; hence the nodes included in the narrow band may have a non-uniform distribution, and our narrow band could therefore have a non-uniform width. Nevertheless, the narrow band width w is not our primary interest; the key concern is that the narrow band includes enough nodes for MLS to appropriately interpolate the φ values around the zero level set, so that the moving front(s) is properly represented. The description of our point-based narrow band below is based on two definitions:
1. NodeSet refers to the constant set of level set sampling nodes, distributed according to Algorithm 2 before the front evolution using Equation (2.3) starts.

2. NBSet refers to a dynamic set of nodes for the φ function around the moving fronts. NBSet comprises nodes mainly from NodeSet, plus additional nodes around high curvature locations of the fronts.
To effectively implement the dynamic NBSet, we first switch on (activate) the subset of nodes in the NodeSet which are near the current front, and then compute the evolution of the front as usual but using only that subset of nodes. To further enhance the stability and precision of the evolving front, we also include additional non-NodeSet nodes in the narrow band subset (NBSet) near high curvature locations. The number of additional nodes is set to be proportional to the curvature of φ; the proportionality constant is relatively flexible, as the analysis domain has already been covered by the original NodeSet.
Since our narrow band has a non-uniform width, the word near is not trivial. To avoid traversing the whole NodeSet all the time while guaranteeing a sufficient number of nodes, we use the k-nearest nodes of all contour points extracted in the most recent re-initialization, plus the additional high-curvature nodes, as the set of narrow band nodes (NBSet), where k is the number of nodes used for surface interpolation. When the front moves, NBSet is updated by replacing a portion of its nodes with their neighboring nodes at the advancing front. The time-saving trick is that node neighborhood in NodeSet is constant during front evolution, so it can be pre-computed and stored in a lookup table.
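As a sketch of this bookkeeping (our own illustrative Python, with a brute-force k-nearest-neighbor search standing in for whatever spatial index a real implementation would use), the neighbor table is computed once from the static NodeSet, and NBSet is rebuilt from the k-nearest nodes of the front vertices extracted at the last re-initialization:

```python
import numpy as np

def build_knn_table(nodes, k):
    """Precompute, for every node in NodeSet, the indices of its k nearest
    neighbors. NodeSet is constant during evolution, so this runs once."""
    d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, 1:k + 1]       # column 0 is the node itself

def narrow_band(nodes, front_vertices, k):
    """NBSet: union of the k nearest NodeSet nodes of every front vertex."""
    nb = set()
    for v in front_vertices:
        d = np.linalg.norm(nodes - v, axis=1)
        nb.update(np.argsort(d)[:k].tolist())
    return sorted(nb)

rng = np.random.default_rng(0)
nodes = rng.uniform(-2, 2, size=(400, 2))          # unstructured NodeSet
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
front = np.c_[np.cos(theta), np.sin(theta)]        # front vertices: unit circle
table = build_knn_table(nodes, k=8)                # constant lookup table
nbset = narrow_band(nodes, front, k=8)             # nodes hugging the front
```

Because the table is constant, advancing the band then reduces to looking up the precomputed neighbors of the nodes currently in NBSet.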
Figure 3.1: Initialization for the point-based level set evolution. Top left: feature-adaptive node distribution. Bottom left: initial circular contour on the problem domain. Bottom right: the signed distance function from the initial contour, with negative values inside the contour.
3.1.2 Signed distance function
The initial level set can be placed manually or at a statistical average position obtained from training data. In Figure 3.1, an initial circular contour is placed manually at the center of the problem domain (an image with two black dots). Multiple contours are also possible. Let xc be the manually selected center position, r the radius, and NodeSet the set of sampling nodes distributed on the problem domain beforehand. The signed distance function φ can be constructed by:

φ(x) = |x − xc| − r    ∀ x ∈ NodeSet    (3.1)

For multiple initial contours with centers xc1, xc2, ..., xcn and radii r1, r2, ..., rn,

φ(x) = min_{i=1,...,n} (|x − xci| − ri)    ∀ x ∈ NodeSet    (3.2)
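As a minimal sketch (NumPy; the function and variable names are ours), Equations (3.1)-(3.2) initialize φ over an arbitrary NodeSet in a few lines:

```python
import numpy as np

def init_phi(nodes, centers, radii):
    """Equations (3.1)-(3.2): signed distance to the nearest initial circle,
    negative inside a contour, evaluated at every sampling node."""
    nodes = np.asarray(nodes, float)
    dists = [np.linalg.norm(nodes - np.asarray(c, float), axis=1) - r
             for c, r in zip(centers, radii)]
    return np.min(dists, axis=0)

nodes = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 5.0]])
phi = init_phi(nodes, centers=[(0.0, 0.0), (0.0, 5.0)], radii=[1.0, 0.5])
# phi -> [-1.0, 1.0, -0.5]: the first and third nodes lie inside a contour
```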
Since the front can be represented using only nearby nodes, we can work on just a subset of nodes during the evolution process. We call this subset the narrow band of nodes (NBSet), which is defined as

NBSet = {x : |φ(x)| ≤ w}    (3.3)

where w is the narrow band width.
grid-based samples of φ(x, y) in 2D or φ(x, y, z) in 3D. But now that we have changed our data structure from a grid to a randomly distributed set of points, an exact near neighbor along a particular x- or y-direction is almost never available.
3.2.1 Influence domain
To obtain the derivatives of φ on an unstructured point cloud, we further assume that the φ function at any location x is a polynomial surface within an area around x. We then use the neighboring nodes xi in that area to recover the coefficients of φ by performing a least-squares fit, see Figure 3.2. Therefore, all nodes inside that area have a certain amount of effect on the recovered coefficients, depending on how the least squares is performed. That area is usually called the influence domain or support region. In general data-fitting problems, the size of such an area and the order of the polynomial basis control the complexity of the constructed surface.
In our practical problem, the sampling node distribution can be feature-adaptive rather than purely random; fixing the polynomial order and choosing a fixed number of nearest neighboring nodes for any φ(x) then implicitly determines a feature-adaptive size for the influence domain. Here, we define the area within which the k neighboring nodes xi are found to interpolate the φ value at x to be the influence domain of x. We show how to perform feature-adaptive node distribution and determine the size of the influence domain in chapter 4.3.
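The actual radius formula is given later (Equation (4.7)); as an illustrative stand-in, taking Rx as the distance to the k-th nearest node shows how a fixed k implicitly yields a feature-adaptive influence-domain size:

```python
import numpy as np

def influence_radius(nodes, x, k):
    """Radius Rx of the influence domain of a query point x, taken here as
    the distance to its k-th nearest node: dense (feature-rich) regions get
    small support, sparse regions get large support. This is only a common
    stand-in for the thesis' Equation (4.7)."""
    d = np.sort(np.linalg.norm(np.asarray(nodes, float) - x, axis=1))
    return d[k - 1]

dense = np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1]])
sparse = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
r_dense = influence_radius(dense, np.zeros(2), k=4)    # -> 0.1
r_sparse = influence_radius(sparse, np.zeros(2), k=4)  # -> 1.0
```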
3.2.2 Moving Least Square Approximation
Originally developed for data fitting and surface construction [17, 18], the Moving Least Square (MLS) approximation has found new applications in emerging mesh-free particle methods [3, 20, 21].

Figure 3.2: Three steps for interpolating the ˜φ surface at an arbitrary node from the Narrow Band Set. Left: determine the influence domain size Rx of node x by searching k-nearest neighbors. Middle: the cubic spline weight function for the node and its neighbors. Right: the portion of interpolated surface.

MLS approximation is chosen because it is compact, which means the continuity of the approximation within overlapping influence domains is preserved. In the following sections, we use ˜φ to denote the MLS-approximated φ for clearer description. Using MLS, the ˜φ surface is defined to be of the form:
˜φ(x) = Σ_{j=1}^{m} p_j(x) a_j(x) ≡ p^T(x) a(x)    (3.4)

where p is a vector of polynomial basis functions, m is the order of the polynomial, and a is a vector of coefficients for the ˜φ function at location x using the polynomial basis p. For example, if p^T(x) = [1, x, y], then a^T(x) = [a1(x), a2(x), a3(x)], and these are arbitrary functions of x. To get ˜φ(x) at an arbitrary location x, we determine the coefficients a through the k-nearest neighboring nodes xi and their known φ(xi) values. The weighted approximation error is defined as:

J = Σ_{i=1}^{k} W(|x − xi| / Rx) [p^T(xi) a(x) − φ(xi)]²    (3.5)

W(d̄) = 2/3 − 4d̄² + 4d̄³             for d̄ ≤ 1/2
W(d̄) = 4/3 − 4d̄ + 4d̄² − (4/3)d̄³    for 1/2 ≤ d̄ ≤ 1
W(d̄) = 0                            for d̄ > 1    (3.6)
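For reference, a direct transcription of Equation (3.6) in Python; note that W(0) = 2/3, the two branches agree at d̄ = 1/2, and the weight decays smoothly to zero at d̄ = 1:

```python
def W(dbar):
    """Cubic spline weight of Equation (3.6) on the normalized distance
    dbar = |x - xi| / Rx: maximal at 0, zero beyond dbar = 1."""
    if dbar <= 0.5:
        return 2.0 / 3.0 - 4.0 * dbar**2 + 4.0 * dbar**3
    if dbar <= 1.0:
        return 4.0 / 3.0 - 4.0 * dbar + 4.0 * dbar**2 - (4.0 / 3.0) * dbar**3
    return 0.0
```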
where W is a cubic spline decaying weight function of the normalized distance d̄ = |x − xi| / Rx, |x − xi| is the spatial distance between x and xi, and T denotes the vector transpose. Solving for a(x) by minimizing J, we get
˜φ(x) = N(x) Us    (3.7)

Us = [φ(x1), φ(x2), ..., φ(xk)]^T    (3.8)

N(x) = p^T(x) A(x)^{-1} B(x)    (3.9)

A(x) = Σ_{i=1}^{k} W(|x − xi| / Rx) p(xi) p^T(xi)    (3.10)

B(x) = [W(|x − x1| / Rx) p(x1), W(|x − x2| / Rx) p(x2), ..., W(|x − xk| / Rx) p(xk)]    (3.11)
The solution of a(x) has already been embedded in N(x) of Equation (3.9) for simplicity. N(x) is called the shape function; it interpolates the known vector Us, a collection of known φ(xi) values, to obtain the approximated ˜φ(x) value.
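A minimal NumPy sketch of Equations (3.6)-(3.11) with the linear basis p = [1, x, y] (variable names are ours, not from the thesis); since MLS with a linear basis reproduces linear fields exactly, the result is easy to check:

```python
import numpy as np

def mls_value(x, nodes, phi, Rx):
    """MLS approximation of Equations (3.7)-(3.11) with linear basis
    p(x) = [1, x, y]: phi_tilde(x) = p(x)^T A^{-1} B Us."""
    def W(dbar):                      # cubic spline weight, Equation (3.6)
        if dbar <= 0.5:
            return 2/3 - 4*dbar**2 + 4*dbar**3
        if dbar <= 1.0:
            return 4/3 - 4*dbar + 4*dbar**2 - (4/3)*dbar**3
        return 0.0

    P = np.c_[np.ones(len(nodes)), nodes]          # rows are p(xi)^T
    w = np.array([W(np.linalg.norm(x - xi) / Rx) for xi in nodes])
    A = (P * w[:, None]).T @ P                     # Equation (3.10)
    B = (P * w[:, None]).T                         # Equation (3.11)
    N = np.r_[1.0, x] @ np.linalg.solve(A, B)      # Equation (3.9)
    return N @ phi                                 # Equations (3.7)-(3.8)

rng = np.random.default_rng(1)
nodes = rng.uniform(-1, 1, size=(12, 2))           # k = 12 neighbors
phi = 1 + 2*nodes[:, 0] + 3*nodes[:, 1]            # a linear field
x = np.array([0.1, -0.2])
approx = mls_value(x, nodes, phi, Rx=3.0)          # reproduces 1 + 2x + 3y
```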
To get the first order derivatives of ˜φ, let

A(x) Υ(x) = p(x)    (3.12)

Υx = A^{-1} (px − Ax Υ)    (3.13)

˜φx = Υx B + Υ Bx    (3.14)

where ˜φx is the shorthand notation for ∂˜φ(x)/∂x, and ˜φy can be obtained in a similar fashion. Finally, |∇˜φ| = sqrt(˜φx² + ˜φy²) for the 2D case, or use Equation (2.5) for the 3D case.
We may also need the second order terms ˜φxx, ˜φyy and ˜φxy:

Υxx = A^{-1} (pxx − 2 Ax Υx − Axx Υ)    (3.15)

Υxy = A^{-1} (pxy − (Ax Υy + Ay Υx + Axy Υ))    (3.16)

˜φxx = Υxx B + 2 Υx Bx + Υ Bxx
˜φxy = Υxy B + Υx By + Υy Bx + Υ Bxy

Moreover, ˜φyy can be obtained in the same way as ˜φxx. With the approximated ˜φxx, ˜φyy and ˜φxy, we can get the curvature measurement κ by Equation (4.4) in a later chapter.
3.2.3 Generalized Finite Difference: An alternative to MLS
MLS interpolation can give us a nice surface of φ, see the right column of Figure 3.4. But it requires quite a lot of computation to obtain the interpolation function of Equation (3.9). This may not be acceptable for some time-critical applications, so we also provide a faster but less accurate interpolation method to get φ(x) and its derivatives [4, 15].

If we define our surface ˜φ to be based on a simple polynomial of the form

φ(x) = φ(x, y) = a1 + a2 x + a3 y + a4 x y    (3.17)
then let x1, x2, ..., x_k̄ be the k̄ nearest neighboring nodes of x with known φ(xi) for all i = 1 ... k̄. We can find the unknown coefficients ai, i = 1 ... 4, by solving the following simultaneous equations:

φ(x1) = φ(x1, y1) = a1 + a2 x1 + a3 y1 + a4 x1 y1
φ(x2) = φ(x2, y2) = a1 + a2 x2 + a3 y2 + a4 x2 y2
⋮
φ(x_k̄) = φ(x_k̄, y_k̄) = a1 + a2 x_k̄ + a3 y_k̄ + a4 x_k̄ y_k̄
In vector form,

[φ] = [φ(x1), ..., φ(x_k̄)]^T,
G = [ 1  x1  y1  x1y1 ; ... ; 1  x_k̄  y_k̄  x_k̄y_k̄ ],
[a] = [a1, a2, a3, a4]^T    (3.18)

⇒ [a] = (G^T G)^{-1} G^T [φ]    (3.19)

˜φ(x) = N^GFD(x) [φ]    (3.20)

where N^GFD(x) = [1, x, y, xy] (G^T G)^{-1} G^T    (3.21)
N GF D (x) is an alternative shape function for Equation (3.9). In general, if (G T G) is<br />
not invertible, then including more nodes (which means to increase the number of rows<br />
in Equation (3.19)) can turn it be invertible as the nodes are basically r<strong>and</strong>omly dis-<br />
tributed. Notice that G is a constant matrix so as its least square inverse, the derivatives<br />
˜φx, ˜ φy of ˜ φ(x) can be computed quickly by taking the derivatives of the polynomial ba-<br />
sis <strong>and</strong> then multiply (G T G) −1 G T [φ] which has already been computed for the shape<br />
function. Second order ˜ φxx <strong>and</strong> ˜ φyy are zero, while ˜ φxy is exactly the last row of<br />
(G T G) −1 G T .<br />
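The GFD shape function of Equations (3.18)-(3.21) needs only one least-squares solve (a sketch with our own names); because the basis is exactly the bilinear family, any field of the form a1 + a2x + a3y + a4xy is reproduced exactly:

```python
import numpy as np

def gfd_shape(x, y, nodes):
    """Generalized finite difference shape function, Equation (3.21):
    N_GFD(x) = [1, x, y, xy] (G^T G)^{-1} G^T for the bilinear basis."""
    G = np.c_[np.ones(len(nodes)), nodes[:, 0], nodes[:, 1],
              nodes[:, 0] * nodes[:, 1]]                 # Equation (3.18)
    pinv = np.linalg.solve(G.T @ G, G.T)                 # (G^T G)^{-1} G^T
    return np.array([1.0, x, y, x * y]) @ pinv

rng = np.random.default_rng(2)
nodes = rng.uniform(-1, 1, size=(9, 2))                  # k_bar = 9 neighbors
phi = 2 - nodes[:, 0] + 0.5 * nodes[:, 1] + 3 * nodes[:, 0] * nodes[:, 1]
value = gfd_shape(0.3, 0.4, nodes) @ phi                 # Equation (3.20)
# value -> 2 - 0.3 + 0.5*0.4 + 3*0.3*0.4 = 2.26 (exact bilinear reproduction)
```

The derivatives follow by differentiating only the basis row, e.g. replacing [1, x, y, xy] by [0, 1, 0, y] for ˜φx, while (G^T G)^{-1} G^T [φ] is reused unchanged.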
3.3 Evolution process and level set re-initialization
Construct hyper-surface from front points → MLS / GFD Approximation → Level Set Update → Extract new set of front points → Output
Figure 3.3: Flowchart of the general level set evolution process
3.3.1 Evolution process
After obtaining |∇φ|, the level set function φ is updated by Equation (2.3), see Figure 3.3. The update of the level set values is done node by node in the NodeSet. If the narrow band speed-up is applied, then only the values of nodes within the band, NBSet, are updated (see chapter 3.1.1).

After the level set update, we have an evolved implicit front. But the evolved level set function is usually not a desired output format for other applications or for display, so we would like to get the front in explicit form, see Fig. 2.1. Since the level set function φ is defined as a signed distance function for the embedded front, the normal direction n of the level set function and the nearest zero level set position z for any location x are given by:
nx = ∇φ(x) / |∇φ(x)|    (3.22)

zx = x − φ(x) nx    (3.23)
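A small sketch of this normal-and-project step (Python; a finite-difference gradient stands in for the MLS derivatives of chapter 3.2):

```python
import numpy as np

def project_to_front(x, phi, eps=1e-5):
    """Equations (3.22)-(3.23): estimate the unit normal of phi at x with
    central differences, then step back along it by phi(x) to land on
    (approximately) the nearest zero level set position."""
    x = np.asarray(x, float)
    grad = np.array([
        (phi(x + [eps, 0.0]) - phi(x - [eps, 0.0])) / (2 * eps),
        (phi(x + [0.0, eps]) - phi(x - [0.0, eps])) / (2 * eps)])
    n = grad / np.linalg.norm(grad)           # Equation (3.22)
    return x - phi(x) * n                     # Equation (3.23)

phi = lambda p: np.hypot(p[0], p[1]) - 1.0    # signed distance to unit circle
z = project_to_front([2.0, 0.0], phi)         # -> approximately (1, 0)
```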
Figure 3.4: Left: an illustration of a complete evolution process for the embedded level set front, from top to bottom (initial hypersurface φ, intermediate φ, final φ). Right: a visualization of the actual computational domain, with the MLS interpolated initial, intermediate and final φ.
3.3.2 Level set re-initialization
In chapter 2, we stated that the update equation (2.3) is a time-domain finite difference solution of the level set PDE given by Equation (2.2). Numerical errors exist in time-domain discretization. Therefore, regularization of the intermediate solution φ^{t+∆t} is frequently needed to maintain the integrity of the moving interface. The level set literature usually disregards the re-initialization procedure because it assumes that the zero set of a sampled function can be extracted easily [23]. Although this is true for grid-based implementations, our grid-less approach requires more effort on this issue.

There are two ways to perform re-initialization, both aiming at restoring the φ function to a distance map of the embedded front/surface:
1. explicitly find a set of front points (vertices on the zero level set) and then recalculate all node-to-nearest-vertex distances;

2. gradually drive |∇φ| at every node xi towards 1 by gradient descent.
The first way is easier in 2D, because it is possible to trace out all fronts without using a reference grid; the way we trace out the front in 2D is described in the coming section. Alternatively, if the sampling nodes have reasonable coverage over the whole problem domain and the front motion is small, we can simply extract a set of front vertices using Equation (3.23) and reset the φ values based on these vertices. This works for both the 2D and 3D cases. The second way uses the fact that a distance map should have gradient magnitude equal to unity everywhere, so it can avoid extracting vertices in 3D space. But the second way takes more computation than the first, and the step size of the descent has to be carefully chosen because of our adaptive node density. Additionally, if the final surface resolution is known and fixed in advance, performing marching cubes to extract the vertices is also possible and could be more reliable than the gradient descent procedure. But we do not prefer this approach, because the usage of "cubes" implies a certain pre-established grid, which violates our main purpose of suggesting a grid-less computation environment.
Re-initialize by contour extraction (2D case only)
The re-initialization can be done by first extracting the intermediate embedded front(s) in the evolved surface φ^{t+∆t}, then resetting all node values φ(xi) to the nearest distance to the intermediate front. To extract the front from φ^{t+∆t}, we can start from an arbitrary vertex zx computed from Equations (3.22) and (3.23) using an arbitrary node xi. Here, a vertex means a point on the embedded moving front in φ. We then march along the tangent of φ^{t+∆t}(zx) by a curvature-adaptive step size ∆s to arrive at a new predicted vertex location. This prediction is immediately adjusted by Equation (3.23) to become an extracted vertex. This walk-and-adjust (or project-and-update) procedure repeats until the next vertex returns to, or comes very near, the starting vertex. We can then use the extracted front to reset all node values φ(xi) within the narrow band. If there are remaining fronts not yet extracted, some reset node values will change greatly, very often by more than the narrow band width w, defined as the maximum |φ(xi)| over all xi. So we can monitor the changes of the node values to check for remaining fronts.
Ideally, φ(zx) should be zero, but small numerical errors may exist in Equation (3.23) using the MLS approximation. To significantly reduce the error from Equation (3.23), we can perform one further step of gradient descent on the zx location along the nx direction, i.e.:

z′x = zx − ˜φ(zx) nx    (3.24)
where z′x is the improved vertex location for each ∆s walk.
Algorithm 1 (Marching contour)
z ← the zx computed from the xi ∈ NBSet with the largest φ(xi)
Rz ← the influence domain radius by Equation (4.7)
φz ← MLS-interpolated φ at z
tan ← (−ny, nx), where n = (nx, ny) by Equation (3.22)
Cz ← 1,  start ← z
CurveSet ← empty set (skip if not the first contour)
While (not a complete loop)
    ds ← min(Rz, Cz)
    nextPos ← z + tan · ds
    Rz ← the influence domain radius at nextPos by Equation (4.7)
    φz ← MLS-interpolated φ at nextPos
    n ← normal of φ at nextPos by Equation (3.22)
    tan1 ← (−ny, nx), the tangent at nextPos
    Cz ← |change of tangents| / ds
    tan ← tan1
    z ← nextPos − φz · n    (numerical adjustment, cf. Equation (3.24))
    record z in the CurveSet
End While
For (each node xi ∈ NBSet)
    z1, z2 ← the two nearest z ∈ CurveSet
    store the sign of φ(xi)
    reset |φ(xi)| to the distance from xi to the average position of z1 and z2
    restore the sign of the new φ(xi) as stored
End For
Note: extract another contour if an abnormal φ(xi) is detected, but there is no need to reset the CurveSet.
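In code, the core walk-and-adjust loop looks roughly as follows. This is a deliberately simplified sketch: a fixed step size ds replaces the curvature-adaptive min(Rz, Cz), and an analytic signed distance function replaces the MLS interpolation:

```python
import numpy as np

def trace_contour(phi, normal, start, ds=0.05, max_steps=1000):
    """Walk-and-adjust front extraction: march along the tangent, then pull
    the predicted vertex back onto the zero level set (Equation (3.23))."""
    z = start - phi(start) * normal(start)        # project start onto front
    first, pts = z.copy(), [z.copy()]
    for step in range(max_steps):
        n = normal(z)
        z = z + ds * np.array([-n[1], n[0]])      # predict along the tangent
        z = z - phi(z) * normal(z)                # adjust back onto the front
        pts.append(z.copy())
        if step > 2 and np.linalg.norm(z - first) < ds:
            break                                 # the loop has closed
    return np.array(pts)

phi = lambda p: np.hypot(p[0], p[1]) - 1.0        # front: the unit circle
normal = lambda p: np.asarray(p) / np.hypot(p[0], p[1])
curve = trace_contour(phi, normal, start=np.array([1.3, 0.0]))
```

Every recorded vertex lies on the zero level set, and the loop terminates once the walk returns to within one step of its starting vertex.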
Re-initialize by gradient descent
This method was suggested by S. Osher in [27]. The level set update equation (2.3) does not require knowledge of a complete set of front/surface points and theoretically can be repeated indefinitely on its own. But the external velocity F applied to individual samples can be arbitrary and often inconsistent with the embedded structure in the level set function. The inconsistency may be due to the inadequacy of the sampling nodes to represent the desired shape implied by a noisy F. In practice, the embedded front can be broken by a noisy force and fail to remain a signed distance function with negative values inside and positive values outside. The purpose of re-initialization is to restore the level set function back to a distance function, and a distance function should have gradient magnitude |∇φ| = 1 [27]. We can therefore iteratively perform a smoothing on φ which minimizes the error of the gradient magnitude, (|∇φ| − 1), while at the same time leaving the front/surface intact. Let φ(x) be the level set function distorted by F, and let ψ(x, τ) be the desired distance function as τ → ∞. So, initialize
ψ(x, 0) = φ(x)    (3.25)

Regularize ψ(x, τ) by

dψ/dτ = −sign(φ(x)) (|∇ψ(x, τ)| − 1)    (3.26)

where the embedded front is maintained by using

sign(ϕ) = 0 if ϕ = 0;  1 if ϕ > 0;  −1 if ϕ < 0    (3.27)
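On unstructured nodes the gradient in Equation (3.26) would come from the MLS or GFD derivatives of chapter 3.2; the one-dimensional, regularly sampled sketch below (our own toy construction, not from the thesis) shows the relaxation restoring a unit gradient while the frozen sign term pins the front at x = 0. An upwind (Godunov) gradient is used; naive central differences are unstable for this equation:

```python
import numpy as np

# 1D sketch: the distorted level set has the correct front (x = 0) but the
# wrong slope |dphi/dx| = 2; Equation (3.26) restores a unit gradient.
n, dx = 81, 0.025
x = (np.arange(n) - n // 2) * dx        # nodes on [-1, 1], x = 0 in the middle
phi = 2.0 * x
s = np.sign(phi)                        # Equation (3.27), frozen at tau = 0

def godunov_grad(psi, dx, s):
    """Upwind |grad psi|: pick backward/forward differences so information
    flows away from the front, which keeps the iteration stable."""
    a = np.r_[psi[1] - psi[0], np.diff(psi)] / dx        # backward D-
    b = np.r_[np.diff(psi), psi[-1] - psi[-2]] / dx      # forward D+
    pos = np.maximum(np.maximum(a, 0.0)**2, np.minimum(b, 0.0)**2)
    neg = np.maximum(np.minimum(a, 0.0)**2, np.maximum(b, 0.0)**2)
    return np.sqrt(np.where(s >= 0, pos, neg))

psi, dtau = phi.copy(), 0.5 * dx                         # Equation (3.25)
for _ in range(800):
    psi -= dtau * s * (godunov_grad(psi, dx, s) - 1.0)   # Equation (3.26)
# psi relaxes to the true signed distance x; the node at x = 0 never moves
```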
Chapter 4

Application to image segmentation

4.1 Deformable model
A well-developed strategy is to use a meshed contour which moves towards high gradient areas, changing its shape along the way. To represent the contour in the digital domain, we can divide a continuous contour into numerous consecutive identical line pieces with a finite number of end points as vertices, see Fig. 2.1. By moving each vertex separately, arbitrary contour shapes can then be achieved. This relates to a classical tool for solving various engineering problems, known as the Finite Element Method (FEM). The Snake was the first application of the FEM concept to image segmentation [16, 35].
4.1.1 FEM contour mesh
In the original FEM Snake model, we can regard the vertices as samples of the actual continuous deformable contour, and the lines between vertices as the mesh that holds them together.
The advantage of this parametrization is that we can impose certain properties on the curve, like the maximum amount of bending it can have, or simply its stiffness. At the beginning of the segmentation process, the contour is like a stretched rubber band
placed around but outside an object on the image, and it intends to shrink inward so as to reduce the amount of stretching. At the same time, high gradient areas are modelled as resistance to the contour motion, and hopefully the inward shrinking force is balanced by the gradient resistance so that the contour vertices stabilize along the object boundary. Clear edges, implied by very high gradient areas, pose no problem in stopping the contour; for suspected broken edges, the additionally imposed contour property helps to link up clear edges by forcing the contour vertices to lie along a path that minimizes a pre-defined cost function. Mathematically, the contour location, bending amount, stiffness and image gradient can be precisely derived by, respectively, evaluating the two parametric functions {Sx(s), Sy(s)} for given s, taking the derivative of {Sx(s), Sy(s)} with respect to s, integrating their derivatives along the contour, and taking the spatial derivative of a 2D scalar function (i.e. the image intensity map). So we can set up the cost function using the above measurements. Since we are actually modelling material motion, a system of dynamic equations with energy and displacement terms can be formulated. Therefore, the underlying principle of Snake motion is governed by the solution of an energy minimization problem. In the discrete domain, a general way to approximate the derivative and integration of {Sx(s), Sy(s)} is to use the (finite) difference and summation between {Sx(s), Sy(s)} and {Sx(s±∆s), Sy(s±∆s)}, where ∆s is a relatively small constant. This type of FEM active contour model is well behaved, because individual local errors can be smoothed out by consistency enforcement among neighboring vertices during the minimization process.
4.1.2 Irregular FEM contour mesh
One obvious limitation of the traditional FEM Snake is that if the actual boundary length is unknown in advance, it is unclear how many s values, or simply how many vertices, should be used to represent the moving contour. For example, if only three vertices are used, then at most a triangle can be represented. In general, the representable shape variety increases as the number of vertices increases. But more vertices are not necessarily better: computational cost and smoothness of the contour are the tradeoffs for having more samples. In particular, undesired jaggedness will occur around sharp corners if the sampling is inappropriate. A quick solution is a re-sampling strategy for the moving contour [25], so that the number of samples adapts to the state of the contour, see Fig. 2.1. Unfortunately, re-sampling changes the relationships among vertices. Since the neighbors of {Sx(s), Sy(s)} are changed, the energy measurements through their derivatives and the system of equations for energy minimization are also affected. These related changes can be collectively described as re-meshing problems.
4.2 Level set based Geometric Deformable Model (GDM)
Let us step back a little and consider the input of our generic segmentation problem: a digital image, which is itself a two-dimensional function sampled by a 2D rectangular grid. The number of boundary points of an object on the image is constrained and finite. So we can change the target contour representation from connected lines to a finite point set on the image. Again, the point set, which consists of boundary pixels, should also be connectable to form a closed loop. Since these points are originally selected from the 2D rectangular grid, a natural but not definitive way to connect points into contours is to follow the spatial relationship defined by the sampling grid; that is, we constrain the next vertex {Sx(s ± ∆s), Sy(s ± ∆s)} of vertex {Sx(s), Sy(s)} to be one of the pre-connected neighbors of {Sx(s), Sy(s)}, which could be up, down, left or right at one fixed unit apart. So far, this change of contour representation only
imposed an additional constraint to reduce the size of the solution domain. To avoid re-meshing, one suggestion is to use a more flexible, water-front-like model for the moving contour. Instead of moving a curve in 2D, we can imagine moving a continuous surface in 3D, with the moving curve being just a particular level of that surface [1, 24, 26]. In other words, the 2D contour is embedded in a 3D surface φ, and 2D contour movement is governed by a 3D surface motion. If we further constrain the third dimension z of any point (x, y, z) on the surface φ, we can model the surface motion under a to-be-defined velocity field F, which is applied along the surface normal direction:
to-be-defined velocity field F , which is applied along the surface normal direction:<br />
dφ<br />
dt<br />
= F · ˆn = F · ∇φ<br />
|∇φ|<br />
(4.1)<br />
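As a sanity check of this formulation on the simplest possible case (our own toy setup on a regular grid, using the widely used form φt = −F|∇φ| for a scalar normal speed, cf. Equation (2.3)): a unit circle expanding at constant normal speed F = 1 should become a circle of radius 1 + t:

```python
import numpy as np

# Unit circle expanding at constant normal speed F = 1 on a regular grid:
# under phi_t = -F |grad phi|, the zero level set at time t is a circle of
# radius 1 + t.
n = 201
xs = np.linspace(-2.0, 2.0, n)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi = np.hypot(X, Y) - 1.0                 # signed distance to the unit circle

dt, steps = 0.01, 20                       # evolve to t = 0.2
for _ in range(steps):
    gx, gy = np.gradient(phi, dx)
    phi -= dt * np.hypot(gx, gy)           # forward Euler step, F = 1

row = phi[:, n // 2]                       # phi along the x-axis (y = 0)
i = np.where((row[:-1] < 0) & (row[1:] >= 0))[0][-1]
r = xs[i] - row[i] * dx / (row[i + 1] - row[i])   # interpolated zero crossing
# r is close to 1.2, the expected radius at t = 0.2
```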
φ can be viewed as a signed distance function that maps a point, φ : (x, y) → z, where z is the spatial distance to the nearest embedded curve position. Therefore, the locations with φ = 0 constitute the embedded contour. Further, if we design F to be a function of ∇φ, and then look back at all the other terms in the above equation (please refer to Chapter 2 for more details of F, and to Equations (2.4) and (2.5) for ∇φ and |∇φ|), there is no parameter s to guide travel along the moving contour. Instead, we have an imaginary surface represented by a geometric function φ(x, y). Here we classify the FEM-like representations as parametric active contours and the level set formulated ones as geometric active contours [36]. Note that we are not going to differentiate whether the motion is governed by energy minimization or front propagation; indeed, they can be hybridized to achieve certain desired properties [13, 8]. We focus instead on the representations and the arithmetic of those equations.
Data Input
  ↓ intensity map
Preprocessing (chapter 4.2.3)
  ↓ gradient, GVF map
Node Distribution (chapter 4.3)
  ↓ unstructured sampling
Level Set Initialization (chapters 3.1.1 & 3.1.2)
  ↓ distance map φ within a narrow band
Domain Interpolation (chapter 3.2)  ← (repeat from here while not stabilized)
  ↓ interpolated φx, φy, φxx, ...
Level Set Update (chapters 2 & 4.2.2)
  ↓ evolved φ function
Narrow Band Reconstruction and Node Refinement (chapter 3.1.1)
  ↓ new narrow band nodes
Level Set Re-initialization (chapter 3.3)  → back to Domain Interpolation if not stabilized
  ↓ regularized distance map
Output

Figure 4.1: A complete flowchart of the segmentation process. Each block represents a major procedure, and the arrows indicate the major input/output data of each procedure.
4.2.1 GDM segmentation process

In previous chapters, we discussed several key components of our grid-less level set implementation strategy. To show how this strategy can actually be applied to a practical problem, this chapter puts them together with additional procedures specific to image segmentation. The overall process is summarized in Figure 4.1. System blocks not discussed in this chapter are the same as in chapter 3; please refer to the previous chapters for details. Their chapter numbers are stated in the corresponding blocks.

The common input for segmentation is an intensity map, which comprises many pixels or voxels [11] indicating the brightness of the captured scene or the material density of a spatial volume. Intensity maps can be obtained from various image capturing devices such as ordinary cameras, range scanners, X-ray or MRI machines. For computational purposes, analog images such as those on traditional light-sensitive film are first digitized to form intensity maps that computers can process.

Preprocessing involves the extraction of image features. We assume here that our GDM formulation is edge-based, so the preprocessing of the input data is the extraction of edge information. Both the image gradient and the Gradient Vector Flow map are examples of edge information contained in the raw input intensity map. Nevertheless, GDM can be region-based, see [8], and those formulations can also be converted to a grid-less environment with only small changes in the level set formulation. Here, the image gradient is still computed by spatial finite differences. After that, we re-sample the analysis domain and use MLS / GFD interpolation instead of finite differences.
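As an illustration of this preprocessing step, a gradient-magnitude edge map can be sketched as follows (our example; the Gaussian width and the use of SciPy are assumptions, not the thesis's code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_map(intensity, sigma=1.0):
    """Gradient-magnitude edge map of a 2D intensity image: smooth with a
    Gaussian kernel G, then take central finite differences of G * I."""
    smoothed = gaussian_filter(intensity.astype(float), sigma)
    gy, gx = np.gradient(smoothed)      # central differences, rows = y
    return np.hypot(gx, gy)

# A vertical step edge: the gradient magnitude peaks near the step column.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
mag = edge_map(img)
print(int(np.argmax(mag.mean(axis=0))))   # a column adjacent to the step at x=16
```

The resulting magnitude map is exactly the kind of feature the node distribution of chapter 4.3 is driven by.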
4.2.2 Level set velocity field for GDM segmentation

To demonstrate the feasibility of our approach in image segmentation, we selected the formulation for contour evolution suggested in [29]:

F = βgκ − (1 − β) g (v̂ · ∇φ)/|∇φ|    (4.2)
g = 1/(1 + |∇G ∗ I|)    (4.3)

Equation (4.2) consists of two main terms. The first term is the product of the image gradient force g and the contour curvature κ, serving respectively as external and internal energy constraints during evolution. The second term is an additional external force using the inner product between the image gradient vector flow (GVF) [35, 29] and the contour normal direction. The incorporation of GVF enhances the flexibility of choosing the initial contour position and of handling concave object boundaries. The parameter β ∈ [0, 1] is a constant weighting between the two main terms. G denotes the Gaussian smoothing kernel, and finally the contour curvature κ can be obtained from the divergence of the unit normal vector to the contour [24, 26]. In the 2D case,

κ = ∇ · (∇φ/|∇φ|) = (φxx φy² − 2 φx φy φxy + φyy φx²) / (φx² + φy²)^(3/2)    (4.4)

where φx is shorthand for the first-order derivative ∂φ/∂x, φxx denotes ∂²φ/∂x², and φxy stands for ∂φx/∂y.
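Equation (4.4) can be checked numerically; the sketch below (ours, using grid finite differences purely for verification) recovers κ ≈ 1/r on the signed distance function of a circle of radius r:

```python
import numpy as np

def curvature(phi, h=1.0):
    """kappa = div(grad phi / |grad phi|) expanded as in Eq. (4.4),
    evaluated here with grid finite differences for verification."""
    py, px = np.gradient(phi, h)         # first derivatives (rows = y)
    pyy, _ = np.gradient(py, h)
    pxy, pxx = np.gradient(px, h)        # pxy = d(phi_x)/dy, as in the text
    num = pxx * py**2 - 2.0 * px * py * pxy + pyy * px**2
    den = (px**2 + py**2) ** 1.5
    return num / np.maximum(den, 1e-12)  # guard against |grad phi| -> 0

# On the signed distance function of a circle of radius 5, kappa = 1/5.
y, x = np.mgrid[-10:10:201j, -10:10:201j]
k = curvature(np.hypot(x, y) - 5.0, h=0.1)
print(abs(k[100, 150] - 0.2) < 0.02)   # True at the point (x, y) = (5, 0)
```

In the thesis, the same derivatives come from MLS / GFD interpolation on scattered nodes rather than from grid differences.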
However, our emphasis in this thesis is to offer an alternative domain representation scheme using point clouds such that spatial calculations can be conducted in a more efficient and more accurate fashion. Many other formulations of F can also be used.
Figure 4.2: Data inputs for GDM segmentation: image intensity, gradient magnitude and Gradient Vector Flow (GVF) (left to right). From top to bottom are examples of synthetic, natural and medical images.
4.2.3 Gradient Vector Flow

Enhancements have been made to make the evolving curve less sensitive to its initial position, such as the introduction and adoption of the gradient vector flow (GVF) as a data constraint [29, 35]. The GVF is computed as a diffusion of image gradient vectors, and it increases the domain of influence of image-derived edge information. Every pixel p of the image is affected by all edge points to different degrees, where the influence of an edge point is inversely proportional to the Euclidean distance from p to that edge point, i.e. closer edge points have stronger effects. The output of the GVF process is a smooth vector field in which each vector indicates the most likely direction towards an edge, and the vector length denotes how close the edge is; longer means closer (see Figure 4.2 for examples). Incorporation of the GVF as a data constraint into the level set formulation has been attempted [29], and it has shown greater flexibility in initial contour positioning. The main advantage is that the evolving contour can move in the reversed normal direction by taking the inner product between the front normal N = ∇φ/|∇φ| in Equation (4.2) and the GVF vector v at the same location as the active contour evolving velocity Ct = (v · N)N.
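The GVF diffusion just described can be sketched numerically. The update below follows the Xu–Prince style iteration u_t = μ∇²u − (u − f_x)(f_x² + f_y²); it is our illustrative reconstruction (parameter values and the periodic boundaries are our choices), not the thesis's solver:

```python
import numpy as np

def gvf(f, mu=0.2, iters=200, dt=0.25):
    """Gradient Vector Flow of an edge map f via explicit diffusion:
    u_t = mu*Lap(u) - (u - f_x)*(f_x^2 + f_y^2), and likewise for v."""
    fy, fx = np.gradient(f)
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        for w, g in ((u, fx), (v, fy)):
            lap = (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
                   np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4.0 * w)
            w += dt * (mu * lap - (w - g) * mag2)   # in-place update of u, v
    return u, v

# A single bright spot: far from it, the diffused vectors still point at it.
f = np.zeros((21, 21))
f[10, 10] = 1.0
u, v = gvf(f)
print(u[10, 5] > 0, u[10, 15] < 0)   # flow points toward the spot from both sides
```

The diffusion is what extends the capture range of the edge well beyond the few pixels where the raw gradient is non-zero.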
4.3 Adaptive unstructured sampling of the computational domain

The formulas in the previous section lay down the background of our active contour model, but the main difference of our new method is the way we obtain the solution of those formulas, in terms of both data structures and procedures.
Here, we discuss the data structure for the level set function φ(x). A common and simple way to store the φ function for computation is a matrix, which may have the same size as the input image. In particular, a matrix element represents a sample of the φ function at the corresponding location in the image. Then φ values are available only at φ(ih, jw) for i, j = 1, 2, 3, ... and some constants h, w ∈ R. We may not be aware that, by this matrix arrangement, we have implicitly used a rectangular grid to sample the level set function φ. The sampling rate and spatial relationships of φ have been pre-established through the use of a matrix or other special grids, such as triangular, refined or deformed level set grids. However, these pre-establishments are not necessary in our new method. From now on, we eliminate the level set grid and use only a finite set of points xi, where i is the index of the point in the set, to sample the φ(x) function. We name the xi the level set supporting nodes, or simply nodes, to differentiate them from other types of points on the image. Note that φ(xi) ∈ R, and there is no assumed spatial relationship between xi and xi±n for any integer n.
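One concrete way to hold such an unordered node set (our choice of container; the thesis does not prescribe one) is a coordinate array plus a k-d tree for neighbour queries:

```python
import numpy as np
from scipy.spatial import cKDTree

# The phi samples live on scattered supporting nodes x_i with no implied
# ordering; neighbours for an interpolant are found by a k-d tree query.
rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 10.0, size=(500, 2))                 # the x_i
phi = np.hypot(nodes[:, 0] - 5.0, nodes[:, 1] - 5.0) - 2.0    # sampled SDF values

tree = cKDTree(nodes)
dist, idx = tree.query([4.0, 4.0], k=9)   # 9 nearest supporting nodes
neighbour_phi = phi[idx]                  # values an interpolant would use
print(dist.shape, neighbour_phi.shape)    # (9,) (9,)
```

All later spatial derivatives then come from fitting a local surface to such a neighbour set rather than from differencing grid entries.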
4.3.1 Sampling node distribution

As we expect our final contour to reside in high image gradient areas, it is essential to obtain a high-precision representation in those regions. This can be done by increasing the sampling rate there. We quantify our requirement by setting the node density at a point x ∈ R² on the image proportional to the image gradient at that point¹:

Node density at x ∝ |∇I|    (4.5)

Nodes / Area = ρ|∇I|,  where ρ is a constant    (4.6)

In Equation (4.6), determining the Area for the node density measurement at a point x would be an issue of selecting a proper scale, so we let the interpolant of φ control it by fixing the number of Nodes within an Area to be the minimum number of neighboring nodes xi that the interpolant needs to interpolate the φ(x) value. Assuming the Area is circular, we get

Area = πR² = k / (ρ|∇I|),  or  R = √( k / (πρ|∇I|) )    (4.7)

¹ Bilinear interpolation of the image gradient is needed, as x may not be a pixel point.
where k is the minimum number of neighboring nodes for interpolating φ(x), and R is the radius within which we are going to spread k nodes around x. Before moving to the algorithm, there is a small problem with Equation (4.7): when |∇I| → 0, which implies a low image gradient at x (such as image background or smooth regions), the spreading radius grows unbounded. To guarantee a reasonable coverage of samples in low gradient areas, we first initialize a base set of nodes with a uniform separation Rmin:

Rmin = √( k / (πρ · mean(|∇I|)/2) )    (4.8)

To further speed up the distribution process, we also include the highest 1-2% gradient pixels in the base set. Note that although the base set might look like grid points (see Figure 4.3), we again do not assume any relationship among the nodes in the set. Therefore slightly modifying Rmin, or adding other points for no special reason, is acceptable.
4.3.2 Feature-adaptive node distribution algorithm

If the image size is large, an exhaustive put-and-check of the node density at every pixel would be slow. Here, we simplify an existing mesh refining algorithm [22] into a node distribution algorithm. The original meshing algorithm assumes the actual boundary of the to-be-meshed object is known, and the node density at an image point is controlled by how far that point is from the known boundary. Under this metric, regions closer to the actual boundary receive a finer triangulation in the final mesh.

The two main differences in our simplified version are, first, that the point-to-boundary measurement is replaced by the image gradient magnitude, since the actual boundary is unavailable in a segmentation problem. Second, no triangulation is performed, as the mesh is not needed in our method. Nevertheless, our method is still recursive and requires an initial coarse node distribution. The idea is that we check every initial node on the image by counting the number of neighboring nodes already existing within an area of radius R defined in Equation (4.7). If any node fails to have k neighbors, that node is removed and additional new nodes are randomly added to fill up the shortage. During the check, all newly created nodes are stored in a list for the next iteration. After checking all initial nodes, the same checking procedure is performed on the list of new nodes, producing another list of new nodes. The process terminates when an empty list is found at the end of an iteration, which implies that no new node was added in the last iteration. Then every node has at least k neighbors in its influence domain, which has been discussed in chapter 3.2.
Algorithm 2 (Gradient-Adaptive Node Distribution)

    CurrList ← Base Node Set
    NodeSet  ← Base Node Set
    While( CurrList is not empty )
        Randomize the order of nodes in CurrList
        NextList ← Empty
        For every node x in CurrList
            Rx ← min( sqrt( k/(pi*ρ*|∇I(x)|) ), Rmin )
            N  ← No. of nodes in NodeSet within the Rx circle
            If( k - N > 0 )
                Remove x from NodeSet
                NewNodes ← Random (k - N) nodes in the Rx circle
                Add NewNodes into NextList
                Add NewNodes into NodeSet
            End If
        End For
        CurrList ← NextList
    End While
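A runnable sketch of Algorithm 2 is given below. It is our simplification: neighbour counts are brute-force, the gradient is looked up at the nearest pixel instead of the bilinear interpolation of footnote 1, and an iteration cap is added defensively.

```python
import numpy as np

def distribute_nodes(grad_mag, base, k=4, rho=0.5, seed=1, max_iter=20):
    """Gradient-adaptive node distribution (sketch of Algorithm 2)."""
    rng = np.random.default_rng(seed)
    r_min = np.sqrt(k / (np.pi * rho * np.mean(grad_mag) / 2.0))  # Eq. (4.8)

    def radius(x):  # Rx <- min(sqrt(k/(pi*rho*|gradI(x)|)), Rmin)
        g = grad_mag[int(round(x[0])) % grad_mag.shape[0],
                     int(round(x[1])) % grad_mag.shape[1]]  # nearest pixel, wrapped
        return min(np.sqrt(k / (np.pi * rho * max(g, 1e-9))), r_min)

    node_set = [np.asarray(p, dtype=float) for p in base]
    curr = list(node_set)
    for _ in range(max_iter):
        if not curr:
            break                       # no node added last round: done
        curr = [curr[i] for i in rng.permutation(len(curr))]
        nxt = []
        for x in curr:
            rx = radius(x)
            others = np.array([p for p in node_set if p is not x])
            n = int(np.sum(np.hypot(*(others - x).T) < rx))
            if k - n > 0:               # shortage: replace x by k-n random nodes
                node_set = [p for p in node_set if p is not x]
                ang = rng.uniform(0.0, 2.0 * np.pi, k - n)
                rad = rx * np.sqrt(rng.uniform(0.0, 1.0, k - n))
                new = [np.array([x[0] + r * np.cos(a), x[1] + r * np.sin(a)])
                       for r, a in zip(rad, ang)]
                node_set.extend(new)
                nxt.extend(new)
        curr = nxt
    return np.array(node_set)

gm = np.zeros((20, 20))
gm[:, 10] = 1.0                                  # one strong vertical edge
base = [[i, j] for i in range(0, 20, 4) for j in range(0, 20, 4)] + [[10, 10]]
nodes = distribute_nodes(gm, base)
print(len(nodes) > len(base))    # True: nodes were added near the edge
```

A production version would use a spatial index (e.g. a k-d tree) for the neighbour counts instead of the quadratic scan used here.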
Figure 4.3: From top to bottom: the process of the recursive gradient-adaptive node distribution algorithm, on a synthetic image, a real image and a medical image (left to right). The gradient magnitudes are used as the background for all images since they are the actual inputs of the algorithm.
Chapter 5

Experimental results and conclusion

5.1 General GDM segmentation results in 2D

Figure 5.1: Segmentation of the endocardium (top) and epicardium (bottom) from a cardiac magnetic resonance image.

In this chapter, we show some experimental results of grid-less GDM segmentation on synthetic and selected real images. All GDMs in this section use the spatial image gradient magnitude as the only source of shape information, and therefore can be regarded as boundary-based GDMs. The flowchart of these GDM processes can be found in Figure 4.1. We have not used any prior shape information that cannot be derived from the input image intensity map, except the sizes and positions of the initial circular contours.
Figure 5.2: Segmentation of the brain ventricles from a noisy BrainWeb [6] image.

Segmentation of the endocardium and epicardium from a cardiac magnetic resonance image is given in Figure 5.1. The GDM process for brain ventricle segmentation is given in Figure 5.2. The GDM formulation for brain ventricle segmentation does not include the number of ventricles as prior information, except at initialization. Due to the small separation between the two ventricles, the gap between the two separate initial contours is smoothed out by MLS, and the two contours merged during the GDM evolution process. An improved result using the Topology Preserving Level Set Surface (TPLSS) will be described in Appendix A.

We also want to demonstrate that our grid-less strategy is valid for different GDM formulations. Hence in Figure 5.3, we show some results after a slight modification of the force term in Equation (2.3). In practice, there are many alternative GDM formulations, which are often application-dependent.

Comparison among different implementation strategies is also one of our studies. Images with ground-truth boundaries were used to enable error measurements for quantitative comparison. Selected results are given in Figure 5.4 to show the quality of the segmentations under different implementation strategies and input noise levels. The measurements are given in Table 5.1.

The effect of sampling node density on GDM precision is illustrated in Figure 5.5.
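For reference, error columns like those in Table 5.1 can be produced from per-point distances between a result contour and the ground truth. The metric below is an assumption for illustration only; the thesis does not spell out its exact measure:

```python
import numpy as np

def contour_error_stats(result_pts, truth_pts):
    """MSE, S.D., min and max of the distance from each result contour
    point to its nearest ground-truth point (assumed error metric)."""
    d = np.min(np.hypot(result_pts[:, None, 0] - truth_pts[None, :, 0],
                        result_pts[:, None, 1] - truth_pts[None, :, 1]), axis=1)
    return float(np.mean(d ** 2)), float(np.std(d)), float(d.min()), float(d.max())

# Ground truth: unit circle; result: the same circle inflated by 0.1,
# so every point error is 0.1 and the MSE is 0.01.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
truth = np.column_stack([np.cos(t), np.sin(t)])
mse, sd, dmin, dmax = contour_error_stats(1.1 * truth, truth)
print(round(mse, 3), round(dmax, 3))   # 0.01 0.1
```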
Figure 5.3: Examples of point-based GDM level set contour evolution. Top two: real and synthetic images. Middle: inward shrinking front by the modified force F − 1 (SNR = 6.9dB). Bottom: outward expanding by F + 1 (SNR = 6.9dB).
Images with different levels of noise (SNR: 10dB, 6.9dB, 5.2dB, 3.9dB)
Point-based level set results with MLS interpolation
Point-based level set results with GFD interpolation
Regular grid level set results with finite difference

Figure 5.4: Comparison of segmentation results among different implementation strategies. The first row shows examples of images corrupted by different levels of noise. From the 2nd row onwards, results in the same column use input images with the same amount of noise, while results in the same row use the same interpolation scheme.
Table 5.1: Error statistics comparison between point-based and regular grid level set implementations

2 Circles   SNR     MSE     S.D.    Min Err   Max Err   Time(s)
MLS         10dB    0.095   0.152   0.002     0.678     60.048
MLS         6.9dB   0.396   0.400   0.005     1.417     57.175
MLS         5.2dB   0.402   0.374   0.000     1.529     77.187
MLS         3.9dB   0.459   0.420   0.002     1.610     82.813
GFD         10dB    0.376   0.231   0.004     1.173     43.577
GFD         6.9dB   1.054   0.571   0.001     2.055     51.062
GFD         5.2dB   1.432   0.623   0.005     2.838     46.619
GFD         3.9dB   1.937   0.718   0.026     3.360     50.551
FD-LS       10dB    1.153   0.533   0.080     2.351     73.425
FD-LS       6.9dB   1.990   0.709   0.080     3.348     70.591
FD-LS       5.2dB   1.921   0.548   0.304     3.384     72.855
FD-LS       3.9dB   2.360   0.694   0.136     3.141     73.145

3 Objects   SNR     MSE     S.D.    Min Err   Max Err   Time(s)
MLS         10dB    0.222   0.324   0.006     2.093     372
MLS         6.9dB   0.402   0.400   0.002     2.084     295
MLS         5.2dB   0.405   0.410   0.002     1.820     224
MLS         3.9dB   0.504   0.464   0.003     2.133     317
GFD         10dB    0.480   0.443   0.000     2.166     164
GFD         6.9dB   1.324   0.606   0.001     2.954     106
GFD         5.2dB   1.218   0.654   0.006     3.322     148
GFD         3.9dB   2.119   1.039   0.014     6.057     126
FD-LS       10dB    1.074   0.617   0.008     3.318     341
FD-LS       6.9dB   2.110   0.999   0.000     5.298     340
FD-LS       5.2dB   2.128   0.994   0.002     5.952     344
FD-LS       3.9dB   5.343   1.300   0.080     7.183     294

MLS: Moving Least Squares approximation
GFD: Generalized Finite Difference
FD-LS: Finite Difference Level Set
Figure 5.5: An illustration of the relationship between node distribution and the precision of the implicit contour. Top: using only the image gradient as the node distribution criterion; higher gradient areas have more sampling nodes, and the gradient-to-nodes ratio is small. Middle: similar to the top, but the gradient-to-nodes ratio is relatively higher. Bottom: using both the image gradient and the image iso-curvature as node distribution criteria; the sampling ratio is small. The result of the bottom case is comparable to the middle one, but it saves time because fewer nodes are used.
5.2 Segmentation in 3D

Figure 5.6: Meshfree GDM segmentation of a synthetic cube. The figures are organized from left to right, top to bottom: input intensity data; intensity gradient magnitude; adaptive level set nodes; initial surface; narrow band of nodes; initial implicit surface; three stages of level set evolution; converged implicit surface; putting more nodes to refine the surface; refined output. Input data size: 100x100x100, cube size: 30x30x30, sampling nodes: 3.5% of data size, running time: 238 sec on a Pentium4 1.5GHz with 768MB memory.
Figure 5.7: Meshfree GDM segmentation of a BrainWeb [6] image. The figures are organized from left to right, top to bottom: input intensity data; intensity gradient and target layer; adaptive level set nodes; initial surface; two stages of level set evolution; segmented layer; densified surface points; refined surface. Input data size: 181x271x181, sampling nodes: ∼10% of data size, running time: ∼2hrs on a Pentium4 1.5GHz with 768MB memory.
Figure 5.8: Solution refinement using an unstructured point cloud. The input data is the same as in Figure 5.7 but with a different parameter for the F term. The figures are organized from left to right, top to bottom: intensity gradient and target layer; narrow band reconstruction and evolution; segmented surface under coarse sampling; refined surface; adding more nodes around the surface; further adding more nodes; two stages of level set evolution; densified surface.
Figure 5.9: Top: enlarged view of the result from the BrainWeb [6] image in Figure 5.8. Bottom: overlaid results from both Figure 5.7 and Figure 5.8.
5.3 Conclusion

To conclude, we have not introduced new formulas for front propagation. Instead, we have simply connected three simple concepts, namely the sampling rate, the influence domain and surface fitting, into old formulations and improved their flexibility and efficiency. To aim for higher precision, we randomly add more sampling nodes to the solution domain. Holding the order of the surface polynomial constant, areas with more nodes can have smaller influence domains, since the number of nodes needed for surface reconstruction is relatively constant. Areas of the solution (or the front) with smaller influence domains have higher precision due to the increased node density (Figures 5.5 and 5.10), an effect equivalent to grid refinement. The difference is that our partitions are overlapping (Figure 5.10), so that they are loosely interrelated and individual errors are smoothed out by neighboring values in the influence domain when applying the MLS approximation. The smoothing scale is controlled by the influence domain size Rx (Figure 3.2), which is inversely proportional to the node density. Therefore, the smoothing is automatically adjusted whenever the node density changes. By making the sampling rate adaptive to the image gradient or other derived features, we obtain greater smoothing of the contour in low gradient areas and less at edge-like locations. For the images in Figure 5.4, we needed to apply large filters to de-noise the image, but boundary leakage occurs when clear edges are missing. This does not happen with the point-based strategy, because the node distribution also governs the de-noising capability of the MLS approximation. Although MLS is computationally expensive, working on fewer nodes makes the speed of the point-based strategy comparable to the traditional regular grid finite difference level set (FD-LS). Moreover, the running speed is also adaptive, simply because fewer node values are computed around low image gradient areas, meaning that fast marching and level set refinement are performed concurrently. Finally, the continuous φ representation enhances the accuracy of ∇φ, which is critical for both the evolution and the final result in all level set formulations.

Figure 5.10: Left: narrow band of nodes, the zero level set (the contour) and the normal directions of the zero level set. Right: overview of the variable-sized and overlapping influence domains within the same narrow band of nodes as in the left figure.
5.4 Future work

The extension of the grid-less approach to 3D GDM has been attempted (Figures 5.6 and 5.7). The results also prove that the grid-less approach can flexibly enhance the accuracy of a 3D GDM implementation (Figure 5.8). However, the challenge we now face is the extra computation brought by using gradient descent to re-initialize the 3D GDM (see chapter 3.3). As a result, re-initialization consumes a lot of time and significantly degrades the efficiency of the grid-less approach compared to the normal grid-based 3D implementation, which allows efficient surface extraction using the marching cubes algorithm [23] before re-initializing by computing actual node-to-surface distances. We note, however, that re-initialization serves only a stabilizing purpose; |∇φ| = 1 is not a strict condition for the GDM evolution to proceed. Therefore, we could look for a weak solution ψ in which most of the nodes xi have |∇ψ(xi)| close to the desired value 1, and then replace φ by the new ψ.

Up to now, we have used only simple velocity fields F in our experiments, which may not give very good results. We would like to look for a more specific F formulation for a particular type of image that can fully demonstrate the effectiveness of our strategy.

If we consider MLS a smoothing filter for φ on the point cloud, then similar filtering can also be applied to the F field, so that the stability of the level set evolution can be improved. Extension to 3D+T is also possible, and we would then be able to offer continuous 3D segmentation over time using our point cloud strategy.
Appendix A

Topology-preserving GDM through domain partitioning

A.1 Introduction

The key feature of geometric deformable models (GDMs) is that they naturally allow topological changes during the curve evolution process. However, for multiple objects with extremely small spatial separations, these GDMs are not able to form a separate final contour for each object, because of the spatial limitation imposed by the finite difference computational grid. Further, a GDM contour could leak out through the weaker edges of the target object and evolve towards nearby stronger edges of another object. In this chapter, we present a topology-constraining geometric deformable model scheme to address these two situations. Utilizing prior knowledge of the number of targets and their rough spatial positioning, we add an additional domain partitioning level set surface (DPLSS) which seeks between-object gaps and hence constrains each boundary finding level set surface (BFLSS) to reside within its own designated region and evolve towards the respective object boundary. Relying on adaptive meshfree particle representations of the analysis domains for the DPLSS and BFLSSs, the relative spatial separation between each BFLSS and the DPLSS can be flexibly and effectively magnified. Further, the evolution of the DPLSS and BFLSSs is processed in a piecewise continuous fashion, through moving least squares (MLS) approximations, to ensure high accuracy.
A geometric deformable model (GDM) is represented implicitly as the level set of a higher-dimensional distance function and evolves in an Eulerian fashion [7, 24]. Especially because of its topological adaptiveness, which greatly simplifies curve initialization problems, the GDM has attracted enormous interest from the computer vision and medical image analysis communities [30, 38].

Nevertheless, there are cases where the topological flexibility is more a burden than a blessing [13]. For example, when the topological structure of the target object is known a priori, it is often more robust to search for the correct object composition instead of letting the GDM evolve freely. This is especially important in medical image segmentation, where many objects of interest have a specific, topologically consistent anatomy. Further, for multiple objects with extremely small spatial separations, such as the two brain ventricles shown in Fig. A.1, a GDM would not be able to form a separate final contour for each ventricle, because of the weak edge information between them (weak separation) and because of the spatial limitation imposed by the finite difference computational grid. Depending on the formulation of the image information, there are also many situations where GDMs leak out through the weaker edges of the target object and evolve towards nearby stronger edges of another object. Based on the simple point concept from digital topology, topology-preserving level set methods have been proposed to specifically maintain a pre-determined topological composition of the evolving GDMs on the standard finite difference grid [13] and on the refined moving grid [12]. Several other GDM efforts on surface coupling [38], shape-prior constraints [19] and neighbor constraints [33, 37] are also possibly extendable to impose topological stableness. We have also reviewed these representation schemes in Chapter 2.2.
Figure A.1: Brain image and the closeup view of the ventricles.
In the following sections, we employ a weak domain partitioning constraint, through the added domain partitioning level set surface (DPLSS), to enforce topology preservation of the boundary finding level set surfaces (BFLSSs). This formulation allows us to handle unbalanced close edges (one weak, one strong). Further, relying on the adaptive meshfree particle representations of the analysis domains for both the DPLSS and the BFLSSs, it can separate objects with, theoretically, infinitely small spatial separation, an impossibility for finite difference schemes.
A.2 Problem definition

Our goal is to solve the evolution of multiple geometric deformable active contours, based on level set formulations and on the same image, under the additional topological constraint that all the individually segmented areas, from their corresponding GDMs, are mutually exclusive.

Let each GDM be represented by the zero level set of a higher-dimensional hypersurface φ. Further, the region inside the contour has values φ < 0 and the region outside has values φ > 0. The evolution of each active contour is then governed by the level
set partial differential equation [30]:

∂φ/∂t = F|∇φ|    (A.1)

where F is the image-data-dependent velocity of the φ surface in the normal direction, and ∇ denotes the gradient operator.

Assuming that we are going to segment n objects, we start with n GDMs embedded in n boundary finding level set surfaces (BFLSSs). We solve the above level set equation (Eqn. (A.1)) for the arbitrary c-th GDM by taking finite differences in the time domain:

φ_c^(t+∆t) = φ_c^t − ∆t F|∇φc|    (A.2)

where c = 1, 2, ..., n. Defining the region inside an active contour as Φc = {φc < 0}, we impose on Eqn. (A.2), for all c, the additional constraint that

⋂_{c=1}^{n} Φc = {0}    (A.3)

where ⋂ denotes the intersection operator over all Φc, and {0} denotes an empty set (we change the empty set notation to avoid confusion with the level set function).
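A grid-based sketch of the update (A.2) and the constraint check (A.3) is given below for illustration only; the thesis evaluates ∇φ on meshfree nodes via MLS, and the constant speed F here is hypothetical:

```python
import numpy as np

def evolve_step(phis, F, dt=0.1):
    """One explicit step of Eq. (A.2), phi <- phi - dt*F*|grad phi|, for
    each level set function phi_c (grid central differences for brevity)."""
    out = []
    for phi in phis:
        gy, gx = np.gradient(phi)
        out.append(phi - dt * F * np.hypot(gx, gy))
    return out

def mutually_exclusive(phis):
    """Constraint (A.3): the inside regions {phi_c < 0} must not overlap."""
    insides = np.stack([p < 0 for p in phis])
    return not np.any(insides.sum(axis=0) > 1)

# Two circles with disjoint interiors satisfy the constraint;
# two identical circles violate it.
y, x = np.mgrid[0:40, 0:80]
phi1 = np.hypot(x - 20, y - 20) - 8.0
phi2 = np.hypot(x - 60, y - 20) - 8.0
print(mutually_exclusive([phi1, phi2]), mutually_exclusive([phi1, phi1]))  # True False
stepped = evolve_step([phi1, phi2], F=1.0)
```

In the actual scheme, the constraint is enforced structurally by the DPLSS partition rather than checked after each step.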
A.3 Domain representation

To handle the small object separation issue, we must emphasize that we cannot use grid-based finite difference schemes to compute the spatial gradient ∇φ in Eqn. (A.2). Instead, we first distribute a point cloud over the image, with density proportional to the image gradient magnitude (see Fig. A.4), so that we have more points near potential boundary areas, including the gaps between objects. We call this set of points the supporting nodes xi of the level set function. We adopt this point-based strategy because higher precision of the contour representation is needed when two boundaries are very close to each other, and higher precision of the GDMs requires denser sampling along boundary areas for the level set functions. Since refining the node distribution is much easier than traditional grid refinement strategies [12], the point-based representation offers higher flexibility for tackling the small-boundary-gap problem. At a later stage, the point-based strategy also benefits the implementation of the domain partitioning level set surface (DPLSS), since we only need a few nodes to implement the DPLSS.
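One simple way to realize the gradient-proportional node distribution is rejection sampling. The helper below is a hypothetical sketch, not the thesis implementation; the floor probability `base` is an assumption that keeps flat regions sparsely but non-trivially sampled:

```python
import numpy as np

def sample_nodes(image, n_nodes, base=0.05, rng=None):
    """Scatter supporting nodes with density proportional to gradient magnitude.

    `base` is a small floor probability so flat regions still receive nodes.
    """
    rng = np.random.default_rng() if rng is None else rng
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Acceptance probability: normalized gradient magnitude plus a small floor
    prob = base + (1 - base) * mag / (mag.max() + 1e-12)
    nodes = []
    h, w = image.shape
    while len(nodes) < n_nodes:
        i, j = rng.integers(0, h), rng.integers(0, w)
        if rng.random() < prob[i, j]:
            nodes.append((i, j))
    return np.array(nodes)
```

On an image with a single step edge, most accepted nodes cluster around the edge columns, which is the behavior the text asks for near potential boundaries and gaps.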
A.4 Domain partitioning

The closure of the BFLSS-embedded active contour is enforced in both the initialization and evolution phases, so that we can always decide the inside and outside of the contour from the sign of the level set function. Conventionally, φ < 0 is inside (or the object) and φ > 0 is outside (or the background). Since the φ function maps each point on the image to only a single value, the separated object region and the background region are mutually exclusive under the sign convention of the BFLSS.
The domain partitioning level set surface can be regarded as a variant of the traditional BFLSS. Constructed from a similar idea, the only difference for the DPLSS is that the embedded curve is assumed to extend to infinity and therefore is not a closed contour. The separation drawn by the DPLSS is instead called the left side and the right side. Using the DPLSS, we assign the BFLSS supporting nodes into different groups by checking the sign of the DPLSS at those node locations using MLS. Each group covers a unique region on the image and maintains its own level set function φc. The BFLSS supporting nodes grouped by the DPLSS are also mutually exclusive. Since the MLS surface approximation for the BFLSS is performed using an exclusive node set, neighboring φc do not interfere with each other no matter how close they are. Note that the MLS-approximated DPLSS is continuous, which allows us to check its value at any point x ∈ R².
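The MLS evaluation of φ at an arbitrary point, restricted to one group's exclusive node set, can be sketched as follows. A linear basis and Gaussian weights are illustrative assumptions (see [17] for the general construction); the bandwidth `h` is also an assumed parameter:

```python
import numpy as np

def mls_eval(x, nodes, phi_vals, h=2.0):
    """Approximate phi(x) by a weighted linear fit over nearby supporting nodes.

    nodes: (N, 2) coordinates of ONE group's exclusive node set,
    phi_vals: (N,) level set values at those nodes,
    h: weight-function bandwidth.
    """
    d2 = np.sum((nodes - x)**2, axis=1)
    w = np.exp(-d2 / h**2)                       # Gaussian weights
    # Linear basis p(x) = [1, x, y]; solve the weighted normal equations
    P = np.hstack([np.ones((len(nodes), 1)), nodes])
    A = P.T @ (w[:, None] * P)
    b = P.T @ (w * phi_vals)
    coeff = np.linalg.solve(A, b)
    return coeff @ np.array([1.0, x[0], x[1]])
```

Because the basis is linear, the approximation reproduces any linear field exactly, which is a convenient sanity check on the fit.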
Figure A.2: Illustration of the saddle map detecting the crossings of the intensity derivatives, and the corresponding gradient vector flow. The intensity images are on the left and the saddle maps are on the right. On the saddle maps, bright regions imply a high likelihood of being boundary gaps, and the vector field provides guidance for the DPLSS.
A.5 Movable partitioning

The key feature of the DPLSS is that it is movable and gap-seeking. This allows flexible initialization (weak domain partitioning) for both the BFLSS and the DPLSS, which are supposed to work interactively. To define the velocity field F for the DPLSS, we create a saddle map S(x) by detecting the zero crossings of the intensity derivatives Ix, Iy along the x and y directions respectively:
    Sx(x, y) = |Ix(x + δx, y) − Ix(x − δx, y)|  if Ix(x, y) = 0;  0 otherwise    (A.4)

    Sy(x, y) = |Iy(x, y + δy) − Iy(x, y − δy)|  if Iy(x, y) = 0;  0 otherwise    (A.5)

    S(x) ≡ S(x, y) = Sx(x, y) + Sy(x, y)    (A.6)
S(x) is a scalar gap-likelihood map, which is then further processed by the gradient vector flow operation (GVF; see Chapter 2 and [35]) to obtain the force field of the DPLSS (see Fig. A.2). The update of the DPLSS is the same as the ordinary BFLSS evolution but based on a different force field, seeking the gaps between objects.
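A discrete version of the saddle map (Eqns. (A.4)-(A.6)) can be sketched as follows; central differences and δx = δy = 1 pixel are assumptions made for this illustration:

```python
import numpy as np

def saddle_map(I):
    """Gap-likelihood map S(x): magnitude of the derivative jump where Ix or Iy
    (approximately) vanishes, following Eqns. (A.4)-(A.6)."""
    Iy, Ix = np.gradient(I.astype(float))        # derivatives along rows, columns

    def crossing_strength(D, axis):
        left = np.roll(D, 1, axis=axis)          # D(x - delta)
        right = np.roll(D, -1, axis=axis)        # D(x + delta)
        # |D(x + delta) - D(x - delta)| where D itself is (near) zero
        return np.where(np.isclose(D, 0.0, atol=1e-6), np.abs(right - left), 0.0)

    return crossing_strength(Ix, 1) + crossing_strength(Iy, 0)
```

For two objects separated by a one-pixel gap, the derivative changes sign across the gap while vanishing inside it, so the map peaks exactly at the gap column, as the text intends.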
A.6 Separation enforcement

The MLS approximation can extrapolate a contour away from its node cloud, so partitioning the supporting nodes only ensures that individual contours do not merge; they can still overlap, which violates the physical phenomenon being modeled. To ensure that the BFLSS active contours do not cross over the DPLSS-embedded domain partitioning curve, we check the nearest zero-set location of each BFLSS supporting node to see whether the BFLSS active contour has moved across the partitioning curve. This check is simple because the sign of the DPLSS function tells us the left or right side immediately. If the nearest zero violates the separation rule, we simply increase the φc value of that node until the DPLSS function returns the desired sign, matching all other nodes in the same group. Define the nearest zero-set location of BFLSS node φ(xi) to be:

    zx = xi − (∇φ/|∇φ|) φ(xi)    (A.7)
Let the DPLSS separator be φs. If φs(zx) > 0 (for a left-side node), adjust the φ(xi) value using:

    φ′(xi) = φ(xi) + |φs(zx)|    (A.8)

until φs(zx) ≤ 0. For right-side nodes, the same adjustment is applied with the two inequalities flipped.
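The adjustment of Eqns. (A.7)-(A.8) for a single left-side node can be sketched as follows; `phi`, `grad_phi` and `phi_s` are assumed to be callables backed by the MLS approximations, and the gradient is assumed nonzero at the node:

```python
import numpy as np

def enforce_separation(x_i, phi, grad_phi, phi_s, max_iter=50):
    """Raise the node value phi(x_i) until its nearest zero returns to the left
    side of the separator (phi_s <= 0), following Eqns. (A.7)-(A.8).

    phi, grad_phi, phi_s: callables giving the MLS-approximated BFLSS value,
    its gradient, and the DPLSS value at an arbitrary point.
    Returns the adjusted node value phi'(x_i).
    """
    val = phi(x_i)
    for _ in range(max_iter):
        g = grad_phi(x_i)
        n = g / np.linalg.norm(g)          # unit normal direction
        z = x_i - n * val                  # nearest zero-set location, Eqn. (A.7)
        s = phi_s(z)
        if s <= 0:                         # nearest zero on the correct side: done
            break
        val = val + abs(s)                 # Eqn. (A.8)
    return val
```

For a right-side node the two sign tests would be flipped, as stated above.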
A.7 TPLSS results

[Figure A.3, three columns per row: Initialization, Separation-unconstrained, Separation-constrained]

Figure A.3: Comparison between separation-unconstrained and separation-constrained segmentation on synthetic and real images.
We test our algorithm on various images to perform topology-preserving segmentation. Of particular interest to us are cases where the objects are extremely close to each other, and/or have unbalanced edges. Fig. A.4 shows the curve evolution processes of the BFLSSs and the DPLSS on synthetic and real images, which demonstrate the effectiveness of our approach. The final segmentation results of these two experiments are shown in Fig. A.3. The running time of the experiment in Fig. A.4 using MATLAB is 79.5s
[Figure A.4, four rows of evolution snapshots: synthetic image with unbalanced edges, without and with the separation constraint; brain ventricle image with improper initialization, without and with the separation constraint]
Figure A.4: Comparison between separation-unconstrained (1st and 3rd rows) and separation-constrained (2nd and 4th rows) GDM evolution processes on synthetic and real images. The synthetic example shows close objects with unbalanced edges (one strong, one weak). The real example shows the effect of improper initialization and how the separation enforcement corrects the problem.
(3rd row without DPLSS) and 105.5s (4th row with DPLSS) on a Pentium-M 1.3GHz computer with 256MB RAM. This shows that the overhead of the DPLSS is comparable to that of an additional level set contour.
A.8 Summary of the algorithm

1. Pre-process: compute the image gradient magnitude and the gradient vector flow of the saddle map S(x), and distribute sampling nodes according to the image gradient.

2. Initialize: initialize several closed contours and establish their signed distance functions (BFLSSs) φc(xi); also initialize the separators between all initial contours (through Voronoi diagrams) and the corresponding DPLSSs φs(xj).

3. Update: update all φc(xi) and φs(xj) independently according to their different force fields and the MLS-approximated φc, ∇φc, φs and ∇φs.

4. Enforce separation: make sure all nearest zeros of BFLSS nodes in the same group are located in the same region classified by all DPLSSs.

5. Extract contour: extract all φc(x) = 0 from the evolved BFLSS by marching on the MLS-approximated φc(x); an explicit DPLSS separator can also be extracted in a similar way.

6. Output/Reset: if the curve has not stabilized, reset all BFLSS and DPLSS functions by the extracted φc(x) = 0 and φs(x) = 0 positions, then go back to the Update step; otherwise, output φc(x) = 0.
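The six steps can be summarized as a schematic driver loop. Every helper name below is a hypothetical placeholder for the corresponding procedure described in this appendix, so this is pseudocode rather than runnable code:

```python
def segment(image, init_contours, max_iters=200, tol=1e-3):
    # 1. Pre-process
    nodes = distribute_nodes(image)                   # density ~ |grad I|
    force_bflss = gradient_vector_flow(edge_map(image))
    force_dplss = gradient_vector_flow(saddle_map(image))
    # 2. Initialize
    bflss = [signed_distance(c, nodes) for c in init_contours]
    dplss = init_separators(init_contours)            # Voronoi-based separators
    for _ in range(max_iters):
        # 3. Update each surface independently
        bflss = [update(phi, force_bflss) for phi in bflss]
        dplss = [update(phi, force_dplss) for phi in dplss]
        # 4. Enforce separation against all DPLSSs
        bflss = [enforce_separation(phi, dplss) for phi in bflss]
        # 5. Extract contours from the MLS-approximated surfaces
        contours = [extract_zero_set(phi) for phi in bflss]
        # 6. Reset or stop
        if movement(contours) < tol:
            return contours
        bflss = [reinitialize(phi, c) for phi, c in zip(bflss, contours)]
    return contours
```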
Bibliography

[1] D. Adalsteinsson and J. Sethian. A fast level set method for propagating interfaces. Journal of Computational Physics, 118:269-277, 1995.

[2] T.J. Barth and J.A. Sethian. Numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains. Journal of Computational Physics, 145:1-40, 1998.

[3] T. Belytschko, Y. Krongauz, D. Organ, M. Fleming, and P. Krysl. Meshless methods: An overview and recent developments. Computer Methods in Applied Mechanics and Engineering, 139(4):3-47, 1996.

[4] J.F. Bonnans and H. Zidani. Consistency of generalized finite difference schemes for the stochastic HJB equations. SIAM Journal on Numerical Analysis, 41(3):1008-1021, 2003.

[5] P. Breitkopf, A. Rassineux, G. Touzot, and P. Villon. Explicit form and efficient computation of MLS shape functions and their derivatives. International Journal for Numerical Methods in Engineering, 48:451-466, 2000.

[6] C.A. Cocosco, V. Kollokian, R.K.-S. Kwan, and A.C. Evans. BrainWeb: Online interface to a 3D MRI simulated brain database. In 3rd International Conference on Functional Mapping of the Human Brain, Copenhagen, NeuroImage 5(4):S425, May 1997.
[7] V. Caselles, R. Kimmel, and G. Sapiro. Geodesic active contours. International Journal of Computer Vision, 22:61-79, 1997.

[8] T.F. Chan and L.A. Vese. Active contours without edges. IEEE Transactions on Image Processing, 10:266-276, 2001.

[9] J. Chessa, P. Smolinski, and T. Belytschko. The extended finite element method (XFEM) for Stefan problems. International Journal for Numerical Methods in Engineering, February 2002.

[10] L.D. Cohen. On active contour models and balloons. CVGIP: Image Understanding, 53:211-218, 1991.

[11] R. Gonzalez and R. Woods. Digital Image Processing. Prentice Hall, New Jersey, 2002.

[12] X. Han, C. Xu, and J.L. Prince. A 2D moving grid geometric deformable model. In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 153-160, 2003.

[13] X. Han, C. Xu, and J.L. Prince. A topology preserving level set method for geometric deformable models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(6), June 2003.

[14] H.P. Ho and P. Shi. Domain partitioning level set surface for topology constrained multi-object segmentation. In IEEE International Symposium on Biomedical Imaging (ISBI), 2004.

[15] K.H. Huebner, D.L. Dewhirst, D.E. Smith, and T.G. Byrom. The Finite Element Method for Engineers, pages 138-151. John Wiley & Sons, Inc., 2001.
[16] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. International Journal of Computer Vision, 1:321-331, 1987.

[17] P. Lancaster and K. Salkauskas. Surfaces generated by moving least squares methods. Mathematics of Computation, 37(155):141-158, 1981.

[18] P. Lancaster and K. Salkauskas. Curve and Surface Fitting. Academic Press, London, 1986.

[19] M. Leventon, E. Grimson, and O. Faugeras. Statistical shape influence in geodesic active contours. In IEEE Conference on Computer Vision and Pattern Recognition, pages 316-323, 2000.

[20] G.R. Liu. Mesh Free Methods: Moving Beyond the Finite Element Method. CRC Press LLC, 2002.

[21] H. Liu and P. Shi. Meshfree particle method. In Ninth IEEE International Conference on Computer Vision, pages 289-296, Nice, France, October 2003.

[22] R. Lohner and E. Onate. An advancing front point generation technique. Communications in Numerical Methods in Engineering, 14:1097-1108, 1998.

[23] W.E. Lorensen and H.E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. ACM SIGGRAPH, 21(4), July 1987.

[24] R. Malladi, J.A. Sethian, and B.C. Vemuri. Shape modeling with front propagation: A level set approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17:158-175, 1995.

[25] T. McInerney and D. Terzopoulos. Topologically adaptable snakes. In Fifth IEEE International Conference on Computer Vision, pages 840-845, 1995.
[26] S. Osher and R. Fedkiw. Level Set Methods and Dynamic Implicit Surfaces. Springer-Verlag New York, Inc., 2002.

[27] S. Osher and N. Paragios. Geometric Level Set Methods in Imaging, Vision, and Graphics. Springer-Verlag New York, Inc., 2003.

[28] S. Osher and J.A. Sethian. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. Journal of Computational Physics, 79:12-49, 1988.

[29] N. Paragios, O. Mellina-Gottardo, and V. Ramesh. Gradient vector flow fast geodesic active contours. In Eighth IEEE International Conference on Computer Vision, pages 67-73, 2001.

[30] J.A. Sethian. Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision and Materials Science. Cambridge Univ. Press, London, 1999.

[31] V. Sochnikov and S. Efrima. Level set calculations of the evolution of boundaries on a dynamically adaptive grid. International Journal for Numerical Methods in Engineering, 56:1913-1929, 2003.

[32] D. Terzopoulos and M. Vasilescu. Sampling and reconstruction with adaptive meshes. In IEEE Conference on Computer Vision and Pattern Recognition, pages 70-75, 1991.

[33] A. Tsai, W. Wells, C. Tempany, E. Grimson, and A. Willsky. Coupled multi-shape model and mutual information for medical image segmentation. In Information Processing in Medical Imaging, pages 185-197, 2003.
[34] M. Weber, A. Blake, and R. Cipolla. Initialisation and termination of active contour level-set evolutions. In IEEE Workshop on Variational and Level Set Methods, 2003.

[35] C. Xu and J.L. Prince. Snakes, shapes, and gradient vector flow. IEEE Transactions on Image Processing, 7:359-369, 1998.

[36] C. Xu, A. Yezzi, and J.L. Prince. A summary of geometric level-set analogues for a general class of parametric active contour and surface models. In IEEE Workshop on Variational and Level Set Methods, July 2001.

[37] J. Yang, L.H. Staib, and J.S. Duncan. Neighbor-constrained segmentation with 3D deformable models. In Information Processing in Medical Imaging, pages 198-209, 2003.

[38] X. Zeng, L.H. Staib, R.T. Schultz, and J.S. Duncan. Segmentation and measurement of the cortex from 3D MR images using coupled-surfaces propagation. IEEE Transactions on Medical Imaging, 18(10):927-937, 1999.