
Interactive surface-guided segmentation of brain MRI data

Konstantin Levinski, Alexei Sourin (corresponding author, assourin@ntu.edu.sg), Vitali Zagorodnov

Nanyang Technological University, Singapore

Computers in Biology and Medicine 39 (2009) 1153–1160
doi:10.1016/j.compbiomed.2009.10.008

Article history: Received 5 August 2008; Accepted 14 October 2009

Keywords: Segmentation; MRI data; Brain; 3D visualization

Abstract

MRI segmentation is a process of deriving semantic information from volume data. For brain MRI data, segmentation is initially performed at a voxel level and then continued at a brain surface level by generating its approximation. While successful most of the time, automated brain segmentation may leave errors which have to be removed interactively by editing individual 2D slices. We propose an approach for correcting these segmentation errors in 3D modeling space. We actively use the brain surface, which is estimated (potentially wrongly) in the automated FreeSurfer segmentation pipeline. This allows us to work with the whole data set at once, utilizing the context information and correcting several slices simultaneously. The proposed heuristic editing support and automatic visual highlighting of potential error locations substantially reduce the segmentation time. The paper describes the implementation principles of the proposed software tool and illustrates its application.

1. Introduction

Magnetic Resonance Imaging (MRI) is a common technique used to obtain in-vivo information about the structure and function of various body tissues. Essentially, MRI performs 3D sampling of densities in a given volume, providing object visualization in any plane. Each point (voxel) of an MRI scan corresponds to a certain position in the object being scanned. The process of establishing relationships between the MR image voxels and their meaning (which tissue or organ they belong to) is called segmentation.

Our work was motivated by the problem of human brain MRI data segmentation, which seeks to estimate the cortical thickness/volume of the functional areas on the surface of the brain as well as the volumes of the white matter and subcortical structures.

Existing segmentation methods can be classified according to the degree of operator intervention required. Methods which do not require an operator are usually called automatic, while those requiring various degrees of guidance are called interactive. Automatic segmentation is a well-studied area of research with a variety of established techniques. For example, a generic brain model has been used for automatic brain segmentation in [1], with the toolkit presented in [2]. Some methods rely on statistical properties of different brain areas to classify voxels [3]. The algorithm based on graph cuts [4] treats the MRI volume as a graph and uses maximum-flow partitioning for segmentation. Edge-based approaches classify voxels by locating the boundaries between classes, using the fact that these boundaries are likely to coincide with areas of high image intensity gradient [5].


Watershed-based algorithms [6] gradually assemble finer parts into one segmented area to obtain the brain segmentation. Wavelet decomposition [7] can be combined efficiently with other approaches, including the watershed algorithms.

There are several commonly used software tools which implement automatic brain segmentation, such as FreeSurfer (http://surfer.nmr.mgh.harvard.edu) by the Athinoula A. Martinos Center for Biomedical Imaging, and the fMRI-focused AFNI (http://afni.nimh.nih.gov/afni) from the NIMH Scientific and Statistical Computing Core.

Interactive methods may involve the operator at the final stage and/or during the actual segmentation process [8,9]. Hence, in [10–12], to detect the boundary surface of a certain segment, an energy related to this surface is defined and minimized interactively using pixel operations similar to the lasso tool of Adobe Photoshop (http://www.adobe.com/products/photoshop/family). In [13], an interactively defined initial surface evolves until the energy minimum is achieved.

There are several software tools commonly used for interactive correction of segmentation results in 2D slices, such as Analyze (http://www.mayo.edu/bir/Software/Analyze), 3D Slicer (http://www.slicer.org) and 3D-DOCTOR (http://www.3d-doctor.com), providing functions similar to a 2D image editor's lasso, erosion, and area propagation tools. We have come across only one extension to 3D [14]; however, the interactions are still done on 2D slices. Interactive tools are often used together with automatic segmentation algorithms to check and correct the resulting segmentation.

The authors are involved in such a project, which requires over 350 MRI images to be automatically segmented and interactively checked and corrected within several weeks. The data processing pipeline of this project is based on FreeSurfer (Fig. 1) and includes the following steps:

1. Automated skull stripping, resulting in a brain mask.
2. Fully automatic pial surface (outer brain surface) extraction.
3. 2D slice-by-slice interactive checking and, if necessary, correction of the brain mask and re-running of the automatic extraction.

Fig. 1. Common brain MRI processing pipeline.

The skull stripping algorithm from FreeSurfer removes the skull and other non-brain tissues properly most of the time, as they are usually well separated from the brain by a rim of dark cerebrospinal fluid (CSF). The main problem at this step is the dura mater, which is often located very close to the gray matter and has a similar intensity.

The surface generation step uses the skull-stripped volume to locate the gray matter/CSF and white matter/gray matter surfaces, which are subsequently used for cortical thickness estimation. Using the surfaces instead of direct voxel labeling facilitates more precise cortical thickness measurements, as the surfaces also encode curvature information about the brain. Using the skull-stripped volume instead of the original one primarily affects the gray matter/CSF surface extraction, making it more reliable by preventing its expansion beyond the brain mask.

The whole pipeline is fully automated but computationally intensive, taking about 24 h per surface. Common defects are white matter surface undergrowth and gray matter surface overgrowth (inclusion of pieces of dura). The first problem can be corrected by adjusting the parameters of intensity non-uniformity correction [15]. The dura mater inclusion is mostly a consequence of inadequate skull stripping. To fix it, a step back to the skull-stripped MRI is usually required, with manual slice-by-slice clearing of the extra dura to prevent its inclusion during the surface extraction step.

Manual slice-by-slice verification of a skull-stripped volume can take as long as 15–20 min. If editing is required, the operator has to spend another 1–2 h per brain volume. It is quite common to have around 50 incorrect segmentations per 350 subjects, which accounts for two weeks to review and correct the results.

2. Method description

Since the segmentation defects are thin (1–5 voxels in depth) pieces of dura mater and skull located immediately outside the outer (pial) surface of the brain or directly on it (Fig. 2), we hypothesize that removal of such defects can be done faster and more easily by working with the brain in 3D space rather than by editing numerous 2D slices individually. This is similar to how stubborn dirt or metal tarnish can be removed either by dissolving it with a chemical agent or by scraping it away to expose the underlying surface again. We have successfully worked on a similar project [16] where the whole volume of the brain had to be edited in 3D. Now that only the outer layers are to be checked and corrected, there is no need to display the internal brain structure.

Fig. 2. Brain before and after the attached dura was cleared.

Such an interactive 3D operation will be fast and reliable if the operator is guided to the potential segmentation errors on the brain surface. To accomplish this, we propose an active use of the pial surface, whose geometry is estimated in the segmentation pipeline (Fig. 3). Specifically, after calculation of the pial surface, we use it to color-mark the potential segmentation errors on the surface itself. These visual hints are calculated based on the location of the surface with reference to the actual MRI volume. Next, we interactively edit the pial surface in 3D to fit it to the MRI volume geometry. Finally, we use the corrected surface as a guide for 3D editing of the MRI data and produce the new pial surface by re-running the automated segmentation pipeline. We anticipate that in most cases the whole process can be completed in a single iteration.

Fig. 3. Proposed correction of the common brain MRI processing pipeline. New modules and data flow are highlighted.

This requires the following work to be done:

1. To propose and implement methods for visual highlighting of potential problematic areas on the brain surface.
2. To propose and implement methods for 3D editing of the pial surface with reference to the actual MRI data.
3. To propose and implement methods for 3D editing of the MRI data based on the corrected pial surface.
These tasks are described in Sections 3–5 of the paper. Section 6 highlights the implementation and performance details, followed by the conclusion.

3. Highlighting segmentation errors on the pial surface

According to the new processing pipeline (Fig. 3), the input to the module "Surface verification and correction in 3D" consists of the skull-stripped MRI data and the pial surface data. The surface is represented by a polygonal mesh produced during the automatic mesh generation phase of the segmentation process. A correct mesh should coincide with the visible outer surface of the brain in the MRI data. This correspondence is somewhat difficult to establish visually and should be facilitated by additional guidance from the interactive algorithm. Since the generated mesh has no embedded color, i.e. it can be rendered with any uniform color, another distinct, clearly visible color can be used to highlight the segmentation errors on the pial surface.

This highlighting can be accomplished as follows. Since the brain is a physical object and has a well-defined surface, its density data should exhibit certain characteristics when sampled along a normal in the vicinity of a point on the brain surface. Let us consider the polygonal mesh interpolating the pial surface, where for each vertex we sample several intensity values along the normal. These values are then processed together as an n-dimensional vector. If the direction of the vector at the current position is very different from the average direction of its neighbors, the respective part of the surface is likely to be incorrect. The degree of incorrectness can be visualized by varying the brightness of the coloring: the farther the vector is from the average vector, the brighter the color for the respective vertex of the mesh.

More formally, let $r_j$ be the vector of MRI samples taken along the normal at vertex $j$. Then the error marks $q_k$ can be defined as

$$q_k = r_k - \frac{1}{n}\sum_{i=1}^{n} r_i \qquad (1)$$

where the sum runs over the $n$ neighbors of vertex $k$.

Since the surface mesh does not contain normals, they have to be approximated from the mesh polygons. Each vertex is then assigned a normal based on the estimated polygon normals. Since the transition between the brain and the non-brain matter occurs within a distance of about 3 mm, three samples along the normal are sufficient to detect the segmentation defects: $r_j = (r_j^0, r_j^1, r_j^2)$, taken 1 mm below the vertex, at the vertex, and 1 mm above it. The samples are derived from the MRI data using trilinear interpolation; the nearest-neighbor approach is not sufficiently precise and yields jittery results.
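The paper approximates normals from the mesh polygons without spelling out the scheme; below is a minimal numpy sketch of the standard area-weighted averaging (function and variable names are ours, not from the paper):

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Approximate per-vertex normals by accumulating face normals,
    since the surface mesh itself stores no normals."""
    normals = np.zeros_like(vertices)                   # (V, 3)
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    # Edge cross product: direction = face normal, magnitude = 2 * area,
    # so larger triangles contribute more (area weighting).
    face_n = np.cross(v1 - v0, v2 - v0)                 # (F, 3)
    for k in range(3):
        np.add.at(normals, faces[:, k], face_n)
    length = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(length, 1e-12)
```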

Each sample vector $r_j$ contains three components. To obtain the color $c_j = (c_j^r, c_j^g, c_j^b)$ corresponding to $r_j$, we perform the normalization

$$(c_j^0, c_j^1, c_j^2) = \frac{(r_j^0, r_j^1, r_j^2)}{\max(r_j^0, r_j^1, r_j^2)} \qquad (2)$$
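To make the highlighting computation concrete, here is a numpy/scipy sketch of Eqs. (1) and (2), assuming vertex coordinates are already in voxel space (a real tool would first apply FreeSurfer's vox2ras transform) and reading the sum in Eq. (1) as running over the mesh neighbors of each vertex:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def highlight(volume, vertices, normals, neighbors, spacing_mm=1.0):
    """Error marks q_k (Eq. (1)) and surface colors (Eq. (2)).

    neighbors: list of index arrays, the mesh neighbors of each vertex.
    """
    # Three samples along the normal: 1 mm below, at, and 1 mm above
    # each vertex, r_j = (r_j^0, r_j^1, r_j^2).
    offsets = np.array([-spacing_mm, 0.0, spacing_mm])
    pts = vertices[:, None, :] + offsets[None, :, None] * normals[:, None, :]
    # Trilinear interpolation (order=1); nearest neighbor is too jittery.
    r = map_coordinates(volume.astype(float), pts.reshape(-1, 3).T, order=1)
    r = r.reshape(len(vertices), 3)

    # Eq. (1): deviation of each profile from the average of its neighbors.
    q = np.stack([r[k] - r[nb].mean(axis=0) for k, nb in enumerate(neighbors)])

    # Eq. (2): normalize each profile by its maximum to get an RGB color.
    c = r / np.maximum(r.max(axis=1, keepdims=True), 1e-12)
    return q, c
```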

Once the color error marks on the brain surface are computed, the operator needs to visually check whether there is a segmentation error at the highlighted location. To accomplish this, we need to provide a tool that allows the operator to quickly explore a section of relevant MRI data in the vicinity of the marked region. The traditional 2D sections are not suitable for this task, as they need to be constantly moved and re-oriented in order to face the viewer. Instead, we propose a novel method of visualization based on a sampling sphere. The MRI data is projected onto the surface of this sphere, making any repositioning unnecessary. Such a spherical sampling tool (Fig. 4) is convenient since it is view-independent and permits easy interpretation directly in 3D. Fig. 4a illustrates the proposed approach.

Fig. 4. Spherical sampling tool: (a) the tool is moving on the pial surface, (b) the tool with the MRI data mapped on its surface and the contours of the pial surface displayed.

While 2D slices of the MRI data can be displayed easily, as the data can be obtained directly from the MRI scan layers, displaying the MRI data on the surface of the sphere requires interpolation at every point. Since this can be time consuming, we shifted the load to the GPU by using 3D textures. To avoid overloading the texture memory with MRI data, we load a portion of the data slightly larger than the current sphere. Then, while the sphere is being moved over the surface, we check whether a new portion of data needs to be loaded.
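The GPU side (3D textures) is not detailed in the paper; the caching policy itself can be sketched in plain numpy, with the brick array standing in for the texture and all names being our assumptions:

```python
import numpy as np

class VolumeBrick:
    """Keep a sub-volume slightly larger than the sampling sphere resident
    and reload it only when the sphere leaves the cached region."""

    def __init__(self, volume, margin=8):
        self.volume = volume        # full MRI array (host memory)
        self.margin = margin        # extra voxels kept around the sphere
        self.origin = None
        self.brick = None           # stand-in for the uploaded 3D texture

    def update(self, center, radius):
        need_lo = np.floor(center - radius).astype(int)
        need_hi = np.ceil(center + radius).astype(int)
        if (self.brick is None
                or np.any(need_lo < self.origin)
                or np.any(need_hi > self.origin + np.array(self.brick.shape))):
            # "Re-upload the texture": cut a brick with a safety margin so
            # small sphere movements do not trigger another reload.
            lo = np.maximum(need_lo - self.margin, 0)
            hi = np.minimum(need_hi + self.margin, np.array(self.volume.shape))
            self.origin = lo
            self.brick = self.volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].copy()
        return self.brick, self.origin
```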

To indicate where exactly the surface intersects the underlying MRI data, we display the intersection curve between the sphere and the pial surface. Since the surface is represented by a set of triangles, each triangle is checked for intersection with the sphere, and the calculated intersection lines form the whole intersection curve, as shown in Fig. 4b.
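Per triangle, this amounts to intersecting each edge with the sphere and joining the crossing points with a straight segment; the segments of neighboring triangles then chain into the displayed curve. A sketch under those assumptions (our names, not the authors' implementation):

```python
import numpy as np

def edge_sphere_points(a, b, center, radius):
    """Points where the segment a-b crosses the sphere surface."""
    d, f = b - a, a - center
    A, B, C = d @ d, 2.0 * (f @ d), f @ f - radius * radius
    disc = B * B - 4.0 * A * C
    if A == 0.0 or disc < 0.0:
        return []
    s = np.sqrt(disc)
    return [a + t * d
            for t in ((-B - s) / (2 * A), (-B + s) / (2 * A))
            if 0.0 <= t <= 1.0]

def intersection_segments(vertices, faces, center, radius):
    """One straight segment per triangle crossing the sphere."""
    segments = []
    for tri in faces:
        pts = []
        for k in range(3):
            a, b = vertices[tri[k]], vertices[tri[(k + 1) % 3]]
            pts += edge_sphere_points(a, b, center, radius)
        if len(pts) >= 2:       # a typical crossing yields exactly two points
            segments.append((pts[0], pts[1]))
    return segments
```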

4. Interactive 3D pial surface editing

When a discrepancy between the surface and the MRI data is found, we need to correct the surface locally by adjusting it to the surface of the brain. To perform this editing in 3D space, we propose to use the same sampling sphere (Fig. 4b) to select a new location of the intersection curve on it. To provide immediate feedback, the surface color changes accordingly: for properly corrected parts the color changes back to the brain surface color, and for wrong corrections to the error-marking color. The correction algorithm is restricted to keep the number of vertices in the surface mesh unchanged, so that the mesh can be used in further segmentation steps. Since the segmentation defects are small, the correction can be performed without exceeding the polygonal quota.

We propose the following correction algorithm:

1. Take the vertex on the surface nearest to the user-defined location and move it to this location. The surface will pass through the selected point, but its shape may become irregular.
2. Determine which other vertices in the vicinity of the modified vertex should be moved to guarantee regularity of the resultant surface.
3. Move these vertices so that the resultant surface is smooth.

In order to determine which vertices should be moved, we propose to combine two criteria: their distance from the initial point and the direction of their normals. The normal criterion guarantees that only vertices located on the same gyrus (ridge on the cerebral cortex) will be moved, as the defects are most commonly limited to a single gyrus. The same criterion also guarantees that no points on sulci (depressions or fissures in the surface of the brain) will be moved, as these locations do not contain defects. The two criteria can be combined using a weighted propagation, as shown in Fig. 5. Each edge is assigned a weight depending on the difference between the normals and the error metric. The resulting weighted graph is then used to calculate the distance from each vertex to the initial point. This distance, together with a threshold, determines the area where the mesh modification will be performed. Finally, given the relocation of the initial vertex, we calculate the translation required for each vertex in the affected area, minimizing the difference between neighbors.

Fig. 5. Weighted graph. Darker lines represent lower edge weights. Only a single gyrus is affected, as the weight increases abruptly towards the side of the brain fold.

This approach can be expressed more formally in the following way. Let us denote the user-defined point through which the modified surface should pass as $v_0'$, and let $v_0$ be the closest vertex of the polygonal mesh of the current surface. Then, let $C(v_i, v_j)$ denote the cost of moving from an existing vertex $v_i$ to a vertex $v_j$. As we move from the tip of the gyrus to its side, the normals start to diverge, which should result in a higher cost. The cost should also depend on the error likelihood at the location, so that the vertices which have previously been identified as correct are not affected by the relocation. For neighboring vertices, all these conditions can be expressed as

$$C(v_i, v_j) = |n_i - n_j| - |q_i| - |q_j| \qquad (3)$$

where $n_i$, $n_j$ are the respective normals at $v_i$ and $v_j$, and $q_i$, $q_j$ are the error marks (see Section 3, Eq. (1)). Other ways to define the cost function can be considered as well, for example, by taking the initial normal $n_0$ as a reference:

$$C(v_i, v_j) = |n_0 - n_j| + |n_0 - n_i| - |q_i| - |q_j| \qquad (4)$$

For vertices $v_i$, $v_j$ which do not share a common edge, we define the cost as the minimum sum of the edge costs over all paths connecting them:

$$C(v_i, v_j) = \min_{\text{paths}} \sum_{e \in \text{path}} C(e) \qquad (5)$$


Then each vertex $v_i$ is moved by $R(v_i)$, defined as follows:

$$R(v_0) = v_0' - v_0, \qquad
R(v_i) = \begin{cases}
0, & C(v_0, v_i) > C_{\max} \\
R(v_0)\,\dfrac{C_{\max} - C(v_0, v_i)}{C_{\max}}, & C(v_0, v_i) \le C_{\max}
\end{cases} \qquad (6)$$

where $C_{\max}$ is a user-defined parameter.
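Eqs. (3), (5) and (6) together amount to a shortest-path propagation from $v_0$ followed by a linear falloff of the displacement, which suggests Dijkstra's algorithm. One caveat: Eq. (3) can go negative, which plain Dijkstra does not support, so the sketch below clamps edge costs at zero; this clamping, like all names here, is our assumption rather than the authors' implementation:

```python
import heapq
import numpy as np

def propagate_displacement(vertices, edges, normals, q, v0, target, c_max):
    """Move the mesh locally so it passes through the user-defined point.

    edges : iterable of (i, j) vertex-index pairs
    q     : per-vertex error-mark vectors from Eq. (1)
    v0    : index of the vertex nearest to the user-defined point 'target'
    """
    qn = np.linalg.norm(q, axis=1)
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        # Eq. (3), clamped at zero to keep Dijkstra applicable.
        w = max(np.linalg.norm(normals[i] - normals[j]) - qn[i] - qn[j], 0.0)
        adj[i].append((j, w))
        adj[j].append((i, w))

    # Dijkstra from v0 realizes Eq. (5); paths costlier than c_max are pruned.
    cost, heap = {v0: 0.0}, [(0.0, v0)]
    while heap:
        c, i = heapq.heappop(heap)
        if c > cost.get(i, np.inf) or c > c_max:
            continue
        for j, w in adj[i]:
            if c + w < cost.get(j, np.inf):
                cost[j] = c + w
                heapq.heappush(heap, (c + w, j))

    # Eq. (6): full translation at v0, linear falloff to zero at c_max.
    r0 = np.asarray(target, dtype=float) - vertices[v0]
    moved = vertices.astype(float)
    for i, c in cost.items():
        if c <= c_max:
            moved[i] += r0 * (c_max - c) / c_max
    return moved
```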

5. Interactive 3D MRI correction

The corrected pial surface cannot be used to determine the cortical thickness directly, as all the measurements must be obtained in a uniform way by using the FreeSurfer toolkit. However, by removing the voxels which led to errors in the previous run of the automatic algorithm, we can produce skull-stripped MRI data which will most likely yield a correct surface when used in this algorithm. Note, however, that special measures should be taken, since the cerebellum and sulcal CSF are not included inside the pial surface but should be included in the brain mask.

To perform volume editing, we first convert the surface into a set of voxels. Then, the set of voxels can be grown outwards to remove the part outside the known-good surface. To avoid removing areas located deep inside the skull-stripped volume, a voxel is removed only if it is located no deeper than 5 voxels, the largest known defect size used in our considerations. By combining the growth and depth conditions, we can obtain the desired skull-stripped MRI image in a single automatic step, as shown in Fig. 6.

Fig. 6. Surface-guided volume correction: (a) rasterized surface, (b) filled surface, (c) additional 2 pixels are grown to prevent erroneous removal of brain voxels and (d) final mask.


To find all voxels contained inside the corrected pial surface, we first mark all the voxels which intersect the surface. To do so, for each triangle we:

1. Mark the voxels intersecting the triangle's vertices.
2. Subdivide the triangle into four smaller ones using the edge midpoints.
3. Repeat recursively until the size of the triangles becomes equal to the voxel size.

A typical set of voxels obtained using this approach is shown in Fig. 6a.
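A direct transcription of the recursive subdivision into numpy, again assuming vertex coordinates in voxel space; the one-voxel stopping threshold is our reading of step 3:

```python
import numpy as np

def rasterize_surface(vertices, faces, shape):
    """Mark every voxel the pial surface passes through (Fig. 6a)."""
    marked = np.zeros(shape, dtype=bool)

    def mark(p):
        idx = tuple(int(round(x)) for x in p)
        if all(0 <= i < s for i, s in zip(idx, shape)):
            marked[idx] = True

    def subdivide(a, b, c):
        for p in (a, b, c):
            mark(p)
        # Stop once the triangle fits within a single voxel.
        if max(np.linalg.norm(a - b), np.linalg.norm(b - c),
               np.linalg.norm(c - a)) <= 1.0:
            return
        ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
        subdivide(a, ab, ca)
        subdivide(ab, b, bc)
        subdivide(ca, bc, c)
        subdivide(ab, bc, ca)

    for tri in faces:
        subdivide(*(vertices[i].astype(float) for i in tri))
    return marked
```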

Since the pial surface is "watertight", we can flood-fill the enclosed voxels starting from a voxel inside it (Fig. 6b). As a precaution, we also include two additional layers of voxels outside the pial surface to recover a layer of the external CSF (Fig. 6c).
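With scipy this step reduces to a hole fill plus two dilation passes; for a watertight shell, filling holes is equivalent to the seeded flood fill described above (a sketch, not the authors' code):

```python
from scipy import ndimage

def fill_and_protect(marked_shell):
    """Fill the rasterized watertight surface (Fig. 6b) and grow two extra
    voxel layers to keep a rim of external CSF (Fig. 6c)."""
    filled = ndimage.binary_fill_holes(marked_shell)
    return ndimage.binary_dilation(filled, iterations=2)
```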

The marked voxels obtained in this way exclude the cerebellum, which must be included in the brain mask. As the cerebellum is located deep inside the original mask, while the defects are always on the surface, we can correct the mask by using the original uncorrected mask and the marked voxels together. The distance from the surface of the original mask is calculated by propagation. If a voxel is not marked and is closer than 10 mm to the original mask surface, it does not belong to the cerebellum and can be safely removed from the final mask (see Fig. 6d). All other voxels are kept intact.
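The 10 mm rule can be sketched with a Euclidean distance transform, which yields the same depth-from-surface values as the propagation mentioned above (the voxel-size parameter and all names are our assumptions):

```python
from scipy import ndimage

def correct_brain_mask(original_mask, keep_mask, voxel_mm=1.0, depth_mm=10.0):
    """Remove shallow defect voxels while preserving the cerebellum (Fig. 6d).

    original_mask : bool array, uncorrected skull-stripped brain mask
    keep_mask     : bool array, voxels inside the corrected pial surface
                    plus the two protective layers
    """
    # Depth of every mask voxel, i.e. distance to the nearest voxel
    # outside the original mask.
    depth = ndimage.distance_transform_edt(original_mask, sampling=voxel_mm)
    shallow = depth < depth_mm
    # Shallow voxels not covered by the corrected surface are defects;
    # deep voxels (e.g. the cerebellum) are kept intact.
    remove = original_mask & ~keep_mask & shallow
    return original_mask & ~remove
```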

6. Implementation and performance highlights

We have implemented the proposed algorithms as surface and volume editing tools which use the standard FreeSurfer data layout for interoperability.

The surface editing is implemented in two versions: a standalone tool (Fig. 7a) and a shared tool (Fig. 7b). Both versions have the same functionality, but the shared version permits several networked operators to edit the surface collaboratively. Hence, while in Fig. 7a the surface editing is performed on a personal computer by one operator, in Fig. 7b two networked operators participate in the session. The networked operators are visually aware of each other's presence by seeing avatars of the sampling spherical tools (displayed as 3D crosshair markers) and the changes on the surface being edited. The simplified avatars rather than spheres are displayed to avoid overcrowding the scene and to allow the operators to work in the same area. The shared implementation is based on the Virtual Reality Modeling Language (VRML, http://www.web3d.org), its function-based extension FVRML [17], and the server-client framework previously presented in [18].

Fig. 7. Surface editing tool: (a) standalone version, (b) shared version: two networked users participate in editing the surface. Avatars of the tools of the concurrent users are displayed as 3D crosshair markers.

After the pial surface is edited and saved, it is used as a reference object in the volume editing tool, which is a standalone application running on a personal computer (Fig. 8).

Fig. 8. Volume editing tool: (a) uncorrected mask and (b) corrected mask.

According to the processing pipeline in Fig. 3, automatic skull-stripped volume generation is performed using the FreeSurfer automated brain stripping techniques. The success of these techniques is influenced by several factors which depend on the patient's anatomy and behavior in the scanner. Commonly, about 50 out of 300 scans result in incorrectly skull-stripped images, which have to be corrected by the operator, who can now run the developed interactive surface correction tool followed by the volume correction tool, instead of using 3D Slicer (http://www.slicer.org), as in the common processing pipeline (Fig. 1).

To illustrate the efficiency of our method, let us consider an example of a segmentation error spreading over approximately 50 slices (Fig. 9). If the common processing pipeline based on 2D slice correction is used, each slice takes approximately 10–15 s to correct, which amounts to around 10–15 min per MRI data set. Given 50 erroneous images per batch, it would take more than 10 h to correct one batch. Our approach requires on average 2 min from the operator to locate and remove a similar defect when it is edited in 3D space. We therefore reach up to a five-fold productivity increase for the correction phase. Extending the software to handle different segmentation tasks would save even more time.

In some cases, the initial automatic segmentation of the white matter has only slight defects which are easier to correct than the mask itself. While we can correct such minor voxel misclassifications, it is still necessary to remove non-brain voxels from the mask in order to run the white matter surface estimation algorithms reliably. We can replace the interactive mask correction process with correction of the white and gray matter segmentation, and then use its result to obtain the mask for the second automatic segmentation run. The mask is obtained by removing every voxel located not farther than 5 mm from the white matter and no deeper than 5 mm from the surface of the initial mask. Only the voxels that might cause incorrect segmentation in the subsequent automatic run are removed, as shown in Fig. 6. Simple removal of every voxel outside the corrected white and gray matter segmentation would discard the cerebellum. This method provides significant advantages over manual editing of each slice.

The accuracy of our approach was evaluated using false positive and false negative error rates against the ground truth provided by manual tracing by the operator. The evaluation set contained 10 scans where the automatic algorithm had produced a wrong segmentation. We then compared the operator time required for the proposed correction and also measured the false negative rate to verify the validity of the resulting segmentation. As presented in Table 1, the developed software provides significant time savings. Our users, who are project participants from different research and medical institutions, have also reported that it is significantly more convenient to work with the developed software. While the proposed approach is characterized by a slightly higher error rate, the errors are nevertheless within the acceptable boundaries for the project and, if required, can be reduced by growing extra layers during the interactive volume correction step.

Fig. 9. Common defect and its correction: (a) different colors on the surface mark the potential segmentation errors and (b) the dashed line shows the surface before correction.

Table 1
Performance evaluation.

Case   Brain voxels erroneously     Brain voxels erroneously        Time (min),   Time (min),
       removed (%), manual          removed (%), our method         manual        our method
1      0.017                        0.078                           30             3
2      0.006                        0.031                           40             6
3      0.007                        0.027                           34             4
4      0.006                        0.004                           30             5
5      0.005                        0.090                           43             8
6      0.012                        0.260                           44             5
7      0.004                        0.010                           41             7
8      0.023                        0.035                           50            10
9      0.013                        0.025                           32             6
10     0.002                        0.038                           46             8

7. Conclusion

We have developed a novel method of 3D interactive correction of brain segmentation errors introduced by fully automatic segmentation algorithms. The idea of the method is to actively use the pial surface, place visual marks of the potential segmentation errors on it, interactively modify the surface with visual feedback in the form of colors changing on the surface, and finally use the corrected surface for semi-automatic volume editing.

3D visualization of the misclassification hints allows the user to focus attention on the problematic areas on the surface of the brain rather than on its separate 2D slices.

Implementing our method, we have developed interactive 3D MRI segmentation tools which fit into the automatic segmentation pipeline of FreeSurfer, allowing MRI scans and segmented surfaces to be edited manually when required. The user manual and videos illustrating how the developed software tools work can be seen at http://sites.google.com/site/vxxsoftware and http://intune.ntu.edu.sg/SCE/courses/Alexei/Video/segmentation.wmv.

Acknowledgments

This project is supported by SBIC Innovative Grant RP C-012/2006 "Improving Measurement Accuracy of Magnetic Resonance Brain Images to Support Change Detection in Large Cohort Studies" and partially by the Singapore National Research Foundation Interactive Digital Media R&D Program, under research Grant NRF2008IDM-IDM004-002 "Visual and Haptic Rendering in Co-Space".

References

[1] T. Rohlfing, C.R. Maurer Jr., Multi-classifier framework for atlas-based image segmentation, Pattern Recognition Letters 26 (13) (2005) 2070–2079.
[2] P.-L. Bazin, et al., Free software tools for atlas-based volumetric neuroimage analysis, Medical Imaging 2005: Image Processing, vol. 5747, 2005, pp. 1824–1833.


[3] M. Ibrahim, et al., Hidden Markov models-based 3D MRI brain segmentation, Image and Vision Computing 24 (10) (2006) 1065–1079.
[4] S.A. Sadananthan, W. Zheng, M.W.L. Chee, V. Zagorodnov, Skull stripping using graph cuts, NeuroImage 49 (1) (2010) 225–239.
[5] M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (4) (1988) 321–331.
[6] C. Lukas, et al., Sensitivity and reproducibility of a new fast 3D segmentation technique for clinical MR-based brain volumetry in multiple sclerosis, Neuroradiology 46 (11) (2004) 906–915.
[7] C.R. Jung, Combining wavelets and watersheds for robust multiscale image segmentation, Image and Vision Computing 25 (1) (2007) 24–33.
[8] H.K. Hahn, H.-O. Peitgen, IWT — interactive watershed transform: a hierarchical method for efficient interactive and automated segmentation of multidimensional gray-scale images, Medical Imaging 2003: Image Processing, vol. 5032, 2003, pp. 643–653.
[9] C.J. Armstrong, B.L. Price, W.A. Barrett, Interactive segmentation of image volumes with live surface, Computers and Graphics 31 (2) (2007) 212–229.
[10] G. Giraldi, E. Strauss, A. Oliveira, Dual-T-Snakes model for medical imaging segmentation, Pattern Recognition Letters 24 (7) (2003) 993–1003.
[11] P.W. de Bruin, et al., Interactive 3D segmentation using connected orthogonal contours, Computers in Biology and Medicine 35 (4) (2005) 329–346.
[12] A.X. Falcao, J.K. Udupa, A 3D generalization of user-steered live-wire segmentation, Medical Image Analysis 4 (4) (2000) 389–402.
[13] P.A. Yushkevich, et al., User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability, NeuroImage 31 (3) (2006) 1116–1128.
[14] Y. Kang, K. Engelke, W.A. Kalender, Interactive 3D editing tools for image segmentation, Medical Image Analysis 8 (1) (2004) 35–46.
[15] W. Zheng, M.W.L. Chee, V. Zagorodnov, Improvement of brain segmentation accuracy by optimizing non-uniformity correction using N3, NeuroImage 48 (1) (2009) 73–83.
[16] K. Levinski, A. Sourin, V. Zagorodnov, 3D visualization and segmentation of brain MRI data, in: 2009 International Conference on Computer Graphics Theory and Applications (GRAPP 2009), Lisboa, Portugal, 5–8 February 2009, pp. 111–118.
[17] L. Wei, A. Sourin, O. Sourina, Function-based visualization and haptic rendering in shared virtual spaces, The Visual Computer 24 (10) (2008) 871–880.
[18] L. Wei, A. Sourin, H. Stocker, Collaboration in 3D shared spaces using X3D and VRML, in: Proceedings of the 2009 International Conference on Cyberworlds, Bradford, 7–11 September 2009, pp. 36–42.

Konstantin Levinski is a research fellow with the School of Computer Engineering at the Nanyang Technological University, Singapore. He received his B.Sc. and M.Eng. degrees in computer science from the Moscow Institute of Physics and Technology, Russia, and his Ph.D. degree from the Nanyang Technological University, Singapore. His research interests are in interactive free-form shape modeling, grid-based rendering and MRI segmentation.


Alexei Sourin received his M.Eng. and Ph.D. degrees in computer graphics from the Moscow Engineering Physics Institute, Russia, in 1983 and 1988, respectively. Currently, he is an Associate Professor with the School of Computer Engineering at the Nanyang Technological University, Singapore. His research interests are in function-based shape modeling, shared virtual environments, haptic interaction, web visualization and visualization on the Grid, virtual surgery, scientific visualization, and cyber-learning. He is a co-founder of the function representation (FRep) concept, which he further developed and applied in several shape modeling, interactive free-form shape modeling, medical visualization and virtual surgery training projects. He has also proposed and developed, together with his students, a function-based web-visualization technique where analytical formulas are used for defining the geometry, appearance and physical properties of the objects in shared virtual scenes. Dr. Sourin has published over 100 refereed research papers and has given invited talks at many scientific events. He is a Senior Member of IEEE, a member of ACM SIGGRAPH, and a recipient of various research awards. He is a coordinator, and has several times been chair, of the International Conferences on Cyberworlds, a co-chair of the 2009 ACM Symposium on Web3D, a member of the program committees of over 60 international conferences, and an editor of several international journals. More details are available at http://www.ntu.edu.sg/home/assourin.

Vitali Zagorodnov received the B.S. and M.S. degrees in electrical engineering from the Moscow Institute of Physics and Technology, Moscow, Russia, in 1995 and 1997, respectively, and the Ph.D. degree in electrical engineering from Princeton University, Princeton, NJ, in 2003. During his Ph.D. studies, he investigated various problems in image registration, streaming, and segmentation. He is currently an Assistant Professor with the School of Computer Engineering, Nanyang Technological University, Singapore. His research interests include image processing and computer vision, with a focus on structural and functional brain imaging.