Model Transduction for Triangle Meshes

Wu HY, Pan CH, Zha HB et al. Model transduction for triangle meshes. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 25(3): 584-595 May 2010

Model Transduction for Triangle Meshes

Huai-Yu Wu(1,2), Chun-Hong Pan(2), Hong-Bin Zha(1), and Song-De Ma(2)

(1) Key Laboratory of Machine Perception (MOE), Peking University, Beijing 100871, China
(2) National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

E-mail: wuhy@cis.pku.edu.cn; chpan@nlpr.ia.ac.cn; zha@cis.pku.edu.cn; songde.ma@mail.ia.ac.cn

Received December 1, 2009; revised March 5, 2010.

Abstract  This paper proposes a novel method, called model transduction, to directly transfer pose between different meshes, without the need to build skeleton configurations for the meshes. Unlike previous retargetting methods, such as deformation transfer, model transduction does not require a reference source mesh to obtain the source deformation, and thus avoids the unsatisfying results that arise when the source and target have different reference poses. Moreover, we show two other applications of the model transduction method: pose correction after various mesh editing operations, and skeleton-free deformation animation based on 3D Mocap (motion capture) data. Model transduction is based on two ingredients: model deformation and model correspondence. Specifically, based on the mean-value manifold operator, our mesh deformation method produces visually pleasing results under large-angle rotations or large-scale translations of handles. We then propose a novel scheme for shape-preserving correspondence between manifold meshes. Our method fits nicely into a unified framework, where the same type of operator is applied in all phases. The resulting quadratic formulation can be minimized efficiently by solving a sparse linear system.
Experimental results show that model transduction can successfully transfer both complex skeletal structures and subtle skin deformations.

Keywords  retargetting, mesh deformation, mean-value manifold operator, cross-parameterization, model transduction

1 Introduction

With the significant increase in the amount of 3D data produced by artists, scanning devices, or vision-based capture and reconstruction, reusing existing 3D data has recently become popular in computer modeling and animation. Current 3D data retargetting techniques mainly fall into two categories: motion retargetting and deformation retargetting.

Gleicher et al. [1] present a technique for adapting the motion of one articulated figure to another figure with identical structure but different segment lengths. Recently, [2-4] extended this work by using a hierarchical displacement technique, a physically-based motion transformation method, and a data-driven model approach, respectively. Besides retargetting from one articulated figure to another, the motion series of an articulated figure can also be retargetted to a triangular mesh by using so-called skinning techniques, such as skeleton subspace deformation (SSD) [5], pose space deformation (PSD) [6], and EigenSkin [7].

Inspired by the motion retargetting techniques that focus on skeleton-based articulated body motions, [8-9] present deformation retargetting techniques for triangular meshes. Given a reference source mesh S and a deformed source mesh S', the source deformation is first computed by vertex displacements [9] or local affine transformations [8]. Then, the source deformation from S to S' is applied onto a different target mesh T to generate the deformed target mesh T'.

In this paper, different from motion retargetting and deformation retargetting techniques, a novel skeleton-free pose retargetting technique, model transduction, is proposed.
This method not only can directly retarget pose between different models without building skeleton configurations, but can also be used for pose correction of 3D models after various editing operations, for skeleton-free deformation animation based on 3D Mocap (motion capture) data, etc. This method is a natural extension of the deformation transfer work [8]. Another contribution of this paper is the mean-value manifold operator, which is used for shape-preserving mesh editing and shape-preserving mesh correspondence.

Regular Paper

This work is supported by the National Natural Science Foundation of China under Grant Nos. 60903060 and 60675012, the National High-Tech Research and Development 863 Program of China under Grant No. 2009AA012104, and the China Postdoctoral Science Foundation under Grant No. 20080440258.

©2010 Springer Science + Business Media, LLC & Science Press, China

Below, we give a brief overview of work related to ours.

• Deformation Transfer and Model Transduction

Deformation transfer [8-11] applies a source mesh's deformation onto a different target mesh. In order to generate the deformed target model T', one must provide all three models: the reference source model S, the deformed source model S', and the reference target model T, as shown in Fig.1(a). Note that in deformation transfer, S is indispensable for obtaining the source model's deformation to be transferred to T. One limitation of deformation transfer is that the source and target reference models must have the same kinematic reference pose, since the reference target model T reproduces the change in shape induced by the source deformation between S and S'. Only when the two reference poses are the same can this kind of transferring operation be valid. Another drawback is that if the source deformation itself has artifacts, the deformed target model T' generated from it can hardly be satisfying.

Fig.1. (a) Deformation transfer [8] cannot produce satisfying results when the source and target have different reference poses. (b) Using model transduction, a lion model successfully imitates the pose of a cat model even if the reference source mesh is absent.

Now, let us consider a more challenging problem: with the reference source mesh S absent (is S essentially redundant?), how do we get the deformed target mesh T' which acts like the deformed source mesh S' while looking like the reference target mesh T? We show that a feasible solution to this problem is the model transduction method introduced in this paper (see Fig.1(b), Fig.2, Fig.12). This method is a natural extension of the deformation transfer work [8].
However, [8] does not explicitly consider the preservation of surface details, and it requires a reference pose for the source mesh's deformation. Our method addresses both issues.

In analogy with human perception, the transferring process is like first inducing knowledge from observations and then deducing it for a new example. Can we make a shortcut and go from an existing example directly to a new example? Such a shortcut is called "transduction" [12]. Similarly, by using the model transduction method, we can directly produce the deformed target model T' without needing the reference source model S.

• Cross-Parameterization

Establishing cross-parameterization (or consistent correspondence, inter-surface mapping) [8,13-17] between different shapes is a fundamental task in a vast number of applications. Most existing cross-parameterization methods belong to indirect schemes, i.e., an intermediate parameterization domain is required. The differences among algorithms lie in the choice of the intermediate domain, such as the plane [14], the sphere [13,18], the cylinder, triangle patches [15], or quadrangle patches. However, for indirect schemes, the difficulty of compatibly constructing well-shaped patch layouts makes it hard to balance efficiency and robustness. Tricky topological operations are required to deal with genus, intersections, blocking, cyclic orders, branch mismatch, etc. Furthermore, besides the drawback of discontinuity when transiting inter-patch boundaries, the errors of sub-mappings can be amplified through the mapping composition, so the final inter-surface mapping may have large errors in places. Instead of building intermediate domains, cross-parameterization can also be constructed directly, i.e., using the target mesh as the common domain, thus avoiding an explicit intermediate parameterization.
In [17], by smoothing local affine transformations, Allen et al. use a template fitting technique to directly construct cross-parameterization for a set of human models. Sumner et al. [8] propose a similar algorithm to build a correspondence map between meshes. However, direct schemes so far do not explicitly take the shape-preservation property into account, and thus introduce large approximation errors when the input models have significantly different geometries.

In this paper, we apply a novel shape-preserving operator, the mean-value manifold operator, to directly construct a shape-preserving cross-parameterization without needing to build intermediate domains and partitions for the input meshes, which effectively avoids error amplification, discontinuity along partition boundaries, and large approximation distortion.

• Differential Mesh Deformation

Shape deformation [19-30] has various applications in computer modeling, simulation, and animation. Recently, differential information serving as local intrinsic feature descriptors (e.g., Laplacian coordinates or gradient fields) has been used for mesh processing, especially

for detail-preserving mesh editing. However, Laplacian differential coordinates are the average difference vectors of adjacent vertices, which are not rotation-invariant and must somehow be transformed by heuristic methods (or the user's adjustment) [10,20-21,23] to match the desired new orientations; otherwise the details in the deformed mesh are distorted. Though some rotation-invariant representations [22,31-35] have been proposed, they usually involve complicated and time-consuming nonlinear optimizations.

In this paper we adopt the novel mean-value manifold operator to construct a minimal-magnitude (close to zero) vector field. With the nice properties (fair encoding weights and small ratios) of this mean-value shape-preserving representation, our editing system is efficient and stable even when the handles undergo large-angle transformations or are moved rapidly by the user.

The rest of the paper is structured as follows. We first describe the overall flowchart of the model transduction method in Section 2. Then, in Sections 3 and 4, we concretely describe the two basic ingredients of model transduction: model correspondence and model deformation. Section 5 demonstrates more applications of model transduction. Section 6 presents the experimental results. We conclude the paper in Section 7.

2 Model Transduction

Model transduction is designed to directly transfer pose between different meshes without the need to build skeleton configurations for the meshes. Unlike deformation transfer [8], our method does not require an extra reference source mesh to obtain the source deformation, as shown in Fig.1.

Fig.2. The flowchart of the model transduction method. (a) Matching mesh. (b) Deformed source mesh. (c) Reference target mesh. (d) Pose "mesh".
(e) Deformed target mesh.

Fig.2 describes the overall flowchart of model transduction, which is composed of three sequential steps. First, we establish the consistent correspondence between the deformed source mesh S' and the reference target mesh T by using the cross-parameterization method proposed in Section 3. Thus, we get a compatible matching mesh ~T having identical connectivity with T and the same geometry as S'. Then, T's triangles are rotated and translated to appropriate positions on ~T to generate the pose "mesh" T_p, which extracts rigid transformation components from the mesh T and retains the overall pose information of the mesh S'. Finally, T_p's triangles are pieced together to generate the final conforming mesh T', by solving an optimization process subject to a series of energy constraints (including the detail-preserving deformation constraint proposed in Section 4).

In the following subsections, we describe each step of model transduction in detail.

2.1 Step One: Generating the Matching Mesh

In the first step, we need to establish a correspondence between the source mesh S' and the target mesh T. We adopt the shape-preserving cross-parameterization method (to be proposed in Section 3) to generate the compatible matching mesh ~T. Note that our correspondence method is a kind of iterated closest point algorithm. To accelerate the closest-point matching, we adopt the ANN library [36] for both exact and approximate nearest neighbor searching, which performs quite efficiently.

2.2 Step Two: Obtaining the Pose "Mesh"

In the second step, our aim is to obtain a non-conforming "mesh" T_p to be used for pose representation (see Fig.2(d)). This non-conforming "mesh" is generated by first detaching all the triangles of the reference target mesh T without distorting them, and then reattaching each triangle onto its corresponding triangle on the matching mesh ~T.
In order to achieve a rigid transformation (i.e., without distorting the triangle), we first compute an affine transformation for each triangle, and then extract the translation and rotation components from the affine transformation.

This step can be described as follows. First, let D_k be a triangle in the reference target mesh T, and ~D_k be D_k's corresponding triangle in the matching mesh ~T generated in Step One. Q_k is a 3x3 affine matrix [17,37] which describes the local affine transformation from D_k to ~D_k. Then, we adopt the singular value decomposition (SVD) method to extract the rotation part R_k and the shearing-scaling part S_k from the affine matrix Q_k:

    Q_k = A_k D_k B_k = (A_k B_k)(B_k^T D_k B_k) = R_k S_k    (1)

where A_k, B_k, and R_k are all rotation matrices; D_k = diag(lambda_k1, lambda_k2, lambda_k3) is the diagonal matrix resulting from the SVD, and lambda_k1, lambda_k2, lambda_k3 are the singular values of Q_k.

For each pair of corresponding triangles D_k and ~D_k, we rotate the triangle D_k with the matrix R_k, and then translate it onto the triangle ~D_k to produce the triangle D_k^p, called the pose triangle:

    v_{k_i}^p = R_k (v_{k_i} - c_k) + ~c_k,    i = 1, 2, 3    (2)

where c_k (~c_k) is the centroid of the triangle D_k (~D_k), and v_{k_i}^p is v_{k_i}'s corresponding vertex on D_k^p.

Thus we get a set of disconnected pose triangles D_k^p, which constitute the pose "mesh" T_p.

2.3 Step Three: Translating Pose Triangles with the Detail-Preserving Constraint and the Smoothness Constraint

In Step Two, we generated T_p by rotating and translating the triangles of T to appropriate positions. Together, the triangles of T_p represent the pose of S' to be retargetted onto the final result T'. However, these triangles are disconnected and scattered, and therefore do not form a real "mesh".
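The rigid reattachment of Step Two (eqs. (1)-(2)) can be sketched in a few lines of numpy. The 3x3 frame built from two edge vectors plus the unit normal is our assumed construction for the local affine map Q_k (the paper cites [17,37] for this step); the SVD-based extraction of the rotation part follows (1):

```python
import numpy as np

def frame(tri):
    """3x3 frame of a triangle: two edge vectors plus the unit normal
    (an assumed construction for the local affine map, cf. [17,37])."""
    e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
    n = np.cross(e1, e2)
    return np.column_stack([e1, e2, n / np.linalg.norm(n)])

def pose_triangle(tri, tri_match):
    """Rotate `tri` (a triangle D_k of T) onto `tri_match` (its match on ~T),
    keeping only the rotation part R_k of the affine map Q_k, eqs. (1)-(2)."""
    Q = frame(tri_match) @ np.linalg.inv(frame(tri))  # local affine map Q_k
    A, _, Bt = np.linalg.svd(Q)                       # Q_k = A_k D_k B_k
    if np.linalg.det(A @ Bt) < 0:                     # guard against reflections
        A[:, -1] = -A[:, -1]
    R = A @ Bt                                        # rotation part R_k
    c, c_match = tri.mean(axis=0), tri_match.mean(axis=0)
    return (tri - c) @ R.T + c_match                  # eq. (2)
```

Applying `pose_triangle` to every triangle of T would yield the disconnected pose triangles D_k^p that constitute T_p.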
So, in Step Three, in order to piece these triangles together to form the final mesh, we translate adjacent triangles close to one another, and meanwhile impose both the detail-preserving constraint and the smoothness constraint on adjacent triangles to form the final mesh T'.

Conceptually, what we want to solve is the following error minimization problem:

    E(V') = sum_{k=1}^{|T|} sum_{i=1}^{3} || Q_k delta_{k_i} - delta'_{k_i} ||^2
          + w_s sum_{k=1}^{|T|} sum_{j in adj(k)} || Q_k - Q_j ||_F^2
          + w_t sum_{k=1}^{|T|} sum_{(i,j) in {(1,2),(2,3)}} || v'_{k_i} - v^p_{k_i} - v'_{k_j} + v^p_{k_j} ||^2    (3)

where k is the index of a triangle of the mesh and i is the index of a vertex of this triangle; delta_{k_i} (delta'_{k_i}) is the detail-encoded differential coordinate (see Section 4) of the i-th vertex of mesh T (T')'s k-th triangle D_k (D'_k); || . ||_F is the Frobenius norm; adj(k) is the set of facet neighbors of triangle k; and w_s = 0.001 and w_t = 5 are the term weights.

The first term requires that the shape details be preserved after the transformations. The second term requires that the change in transformations between adjacent triangles be smooth. The last term enforces translational motions on the non-conforming triangles. Specifically, we want to minimize the displacement difference between the three vertices of each triangle in T' and the three vertices of the corresponding triangle in T_p. That is to say, under ideal conditions we should have:

    v'_{k_1} - v^p_{k_1} = v'_{k_2} - v^p_{k_2} = v'_{k_3} - v^p_{k_3}    (4)

where {v'_{k_1}, v'_{k_2}, v'_{k_3}} is the triangle D'_k on the mesh T', and {v^p_{k_1}, v^p_{k_2}, v^p_{k_3}} is the triangle D^p_k on the pose "mesh" T_p.

As a whole, the minimization problem (3) amounts to the solution of a sparse linear system, and it returns optimal vertex positions. The final results of model transduction are shown in Figs.
1, 2, and 12.

3 Shape-Preserving Cross-Parameterization

Now we describe the first ingredient of the model transduction method, shape-preserving cross-parameterization, which builds the consistent correspondence (inter-surface mapping) between the source mesh S and the target mesh T. Note that the symbol S in this section is equivalent to the symbol S' in Fig.2. The resulting compatible mesh ~T will have identical topology (i.e., identical numbers of vertices and triangles) with the target mesh T and similar geometry to the source mesh S.

Our cross-parameterization technique is based on an energy minimization framework, which contains three terms: a mean-value energy term E_m, a global constraint term E_g, and a data fitting term E_d. In the following, we formulate the three terms, respectively.

The first term is the mean-value energy term E_m.

Fig.3. The example of mapping a lion model (a) to a cat. Compared with the Laplacian representation (b), the mean-value manifold operator obtains a shape-preserving correspondence (c).
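Quadratic energies such as (3), and (9) below, reduce to sparse linear least-squares problems of the form min ||A V - b||^2, solved per coordinate through the normal equations. A minimal scipy sketch follows; the toy matrix and weights here are illustrative, not the paper's actual system:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def minimize_quadratic(A, b):
    """Minimize ||A V - b||^2 via the normal equations A^T A V = A^T b,
    one solve per coordinate (x, y, z)."""
    AtA = (A.T @ A).tocsc()
    return spsolve(AtA, A.T @ b)

# Toy system: two soft positional constraints plus one smoothness row
# (weight 2) penalizing the difference of the two unknowns.
A = sp.csr_matrix([[1.0, 0.0],
                   [0.0, 1.0],
                   [2.0, -2.0]])
b = np.array([1.0, 3.0, 0.0])
V = minimize_quadratic(A, b)
print(V)  # between the targets (1, 3), pulled together by the smoothness row
```

In practice the factorization of A^T A would be computed once and reused across solves, matching the paper's emphasis on fast sparse solving.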

Fig.4. With a high-quality cross-parameterization between meshes, a woman's face surface is gradually transformed into a man's face surface simply by linear interpolation.

Initially, we consider the Laplacian energy. As will be described in Section 4, the Laplacian coordinate of a vertex v_i is the average difference vector from its adjacent vertices to this vertex. Setting this vector to zero at each vertex builds a smoothness energy, i.e., it smoothly distributes each vertex as close as possible to the barycenter of its immediate neighbors. In matrix form, the formulation for all the vertices can be written as:

    E_l = || L V ||^2    (5)

where V = [v_1, v_2, ..., v_{N(V_T)}]^T, and L is the Laplacian coefficient matrix of the target mesh T, i.e., L_mn = -delta_{m=n} + (1/d_i) * delta_{(m,n) in E_T}, with delta the indicator (Dirac delta) function.

However, as shown in Fig.3(b), this representation cannot reflect the geometric properties of the target mesh. In the following, we try to incorporate information about the original shape, for instance encoding the size, angle, and orientation of the local surface shape. Here, we adopt the mean-value manifold operator to construct the matrix:

    E_m = || M V ||^2    (6)

where M is the mean-value manifold matrix (see Section 4) of the target mesh T.

As shown in Fig.3(c), compared with the Laplacian matrix L, the mean-value shape-preserving representation successfully captures the shape information of the target mesh, such as the thread-like stripes on the legs and belly of the lion model.

The second term E_g concerns the global constraint vertices.
A few pairs of markers are specified by the user on both input meshes; these serve as global constraints, i.e., a priori knowledge, to initialize the correspondence. The global constraint term E_g is defined as:

    E_g = sum_{i in G} | v_i - g_i |^2    (7)

where G is the index set of global constraint vertices, and g_i is the position of the corresponding marker on the source mesh.

Fig.5. The initial compatible mesh (a) may not align well with the target mesh (c) in the boundary region if only a few marker pairs are provided by the user. We detect important feature vertices on the object mesh and then project them onto (a) to generate more global constraints. Thus, the profiles of the compatible mesh will not collapse, as shown in (b).

Note that besides the user-specified markers, once an initially aligned compatible mesh has been obtained using those initial markers, the important feature vertices (e.g., high-curvature vertices) of the target mesh can first be detected automatically by a sorting algorithm and then projected onto the initially aligned compatible mesh to act as extra global constraints for the finer fitting. Take Fig.5 for instance, which shows the correspondence result for the source and target models in Fig.4. The user specified only 29 marker pairs on the two models, and the boundary of the models has few markers. Therefore, the initial compatible mesh (Fig.5(a)) may not align well with the object mesh (Fig.5(c)) in the boundary region.

To overcome this, we detect salient feature vertices on the object mesh based on curvature. Thus, important features of the object mesh are obtained automatically, e.g., the nose tip, the eye corners, and the boundary of the face model. Then, we map the salient vertices of the object mesh onto the initially aligned compatible mesh to ensure sufficient matches.
First, for each salient vertex, we choose the closest vertex on the compatible mesh as its counterpart; thus extra global constraints are generated (701 salient points are detected for the models in Fig.3, and 946 salient points for the models in Fig.4). Similar to the foregoing global constraint term, we also formulate the salient constraint as a quadratic energy function. Once the compatible mesh approximates the object mesh at the salient vertices (e.g., the boundary vertices), the profiles of the compatible mesh will not collapse, as shown in Fig.5(b).
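The closest-vertex matching used for the salient constraints (and for the data term below) can be sketched with a k-d tree; scipy's cKDTree stands in here for the ANN library [36] that the paper actually uses:

```python
import numpy as np
from scipy.spatial import cKDTree

def closest_vertex_pairs(salient_pts, mesh_verts):
    """For each salient point of the object mesh, return the index of and
    distance to its nearest vertex on the compatible mesh."""
    tree = cKDTree(mesh_verts)          # built once, queried many times
    dists, idx = tree.query(salient_pts)
    return idx, dists

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]])
salient = np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.1]])
idx, dists = closest_vertex_pairs(salient, verts)
print(idx)  # -> [1 2]
```

Each returned pair (salient point, nearest vertex) would then contribute one quadratic term, in the same form as E_g.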

The third term is the data fitting term E_d, which fits T to the source mesh S as closely as possible. E_d is formulated as:

    E_d = sum_{i in D} | v_i - d_i |^2    (8)

where D is the index set of the vertices in T except the global constraint vertices, and d_i is the closest valid point on the source mesh to vertex v_i.

Our complete objective function E is the weighted sum of the three error functions:

    min E(V) = E_m + w_g E_g + w_d E_d    (9)

where w_g and w_d are weights. We solve the objective function in two phases. First, we ignore the data fitting term E_d by using weights w_g = 0.3, w_d = 0, and obtain an initial mapping result. Then, we increase w_d at each iteration and update ~T's vertices. In our experiments, increasing w_d step by step from 0.001 to 0.01 generates good results. Each time the objective function is minimized, ~T is updated from its previous position and more closely approximates S. Note that our correspondence system is actually designed to deform the target mesh T into the source mesh S to produce the compatible mesh ~T, thus implicitly guaranteeing that ~T has identical connectivity with T.

4 Mesh Deformation with the Mean-Value Manifold Operator

We now describe the other ingredient of model transduction, detail-preserving model deformation, in detail. Our transduction method is based on the differential framework for mesh deformation. As mentioned above, the Laplacian coordinate zeta_i of a vertex v_i is the average difference vector of its adjacent vertices. Though translation-invariant, these vector representations are not rotation-invariant, and they must somehow be transformed locally to match the desired new orientations. Furthermore, the uniform Laplacian operator only reflects the topology of the neighboring vertices, and does not capture the local parameterization information (such as local size, angle, and orientation) of the mesh.
We first review the common differential representation, the Laplacian coordinate. The Laplacian linear operator [20] provides a differential representation of the mesh and allows efficient conversion between absolute and intrinsic representations by solving a sparse linear system. As shown in Fig.6(a), the Laplacian differential coordinate zeta_i of vertex v_i is the difference between v_i and the average of its neighbors:

    v_i - (1/d_i) sum_{j in N(i)} v_j = zeta_i = (zeta_i^{(x)}, zeta_i^{(y)}, zeta_i^{(z)})    (10)

where N(i) = {j | (i, j) in E} are the edge neighbors, and d_i = |N(i)| is the valence of the vertex, i.e., the number of edges that emanate from it. Note that instead of uniform weights, geometric discretizations of the Laplacian have better approximation qualities, such as the cotangent weights proposed by Pinkall and Polthier [38].

Fig.6. The illustration of Laplacian coordinates, mean-value coordinates, and the mean-value manifold operator.

The mean-value coordinate [14], a generalization of the barycentric coordinate, is derived from an application of the mean value theorem for harmonic functions. Let v_i and its neighboring vertices {v_j | j in N(i)} be points in the plane. The vertices {v_j | j in N(i)} form a star-shaped polygon with v_i in its kernel. The mean-value weights of v_i are defined by:

    sum_{j in N(i)} w_ij (v_j - v_i) = 0    (11)

    w_ij = (tan(alpha_ij / 2) + tan(beta_ij / 2)) / || v_j - v_i ||    (12)

where w_ij is the mean-value coefficient, and alpha_ij and beta_ij are the angles shown in Fig.6(a). The weights are guaranteed to be positive, and have the nice property of depending continuously and smoothly on the vertices [39].

The right-hand side of (11) is just the constant zero vector, i.e., a "weakened" vector. In other words, when v_i is flattened into the plane of its 1-ring, the mean-value coordinate vector vanishes, i.e., becomes zero.
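In the planar star-shaped case, the weights (12) make the weighted edge vectors of (11) sum to zero exactly. A small numpy check, assuming the ring is ordered around the center and the center lies in its kernel:

```python
import numpy as np

def angle(a, b):
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    return np.arccos(np.clip(a @ b, -1.0, 1.0))

def mean_value_weights(center, ring):
    """Mean-value weights w_ij of eq. (12): alpha and beta are the angles
    at `center` between the edge to ring[j] and the edges to its two
    neighbors in the ordered ring."""
    n = len(ring)
    w = np.zeros(n)
    for j in range(n):
        e = ring[j] - center
        alpha = angle(e, ring[(j + 1) % n] - center)
        beta = angle(ring[(j - 1) % n] - center, e)
        w[j] = (np.tan(alpha / 2) + np.tan(beta / 2)) / np.linalg.norm(e)
    return w

center = np.array([0.1, 0.2])
ring = np.array([[1.0, 0.0], [0.3, 0.9], [-0.8, 0.5],
                 [-0.6, -0.7], [0.5, -1.0]])
w = mean_value_weights(center, ring)
residual = (w[:, None] * (ring - center)).sum(axis=0)
print(np.linalg.norm(residual))  # ~0: eq. (11) holds in the planar case
```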
However, (11) is strictly satisfied only in the star-shaped planar case. For the common 2-manifold case (e.g., a 3D triangle mesh), the right-hand side is not guaranteed to be zero; that is, it may become a non-zero vector. To deal with this, we present a novel mean-value shape-preserving representation, called the mean-value manifold operator, to obtain a "weakened" (close to zero) vector field which is more suitable for manifold mesh operations. The mean-value manifold operator can in some sense be seen as the LLE [40] (Locally Linear

Embedding) of the original differential representation.

As shown in (13), similar to the mean-value weights in the 2D star-shaped case, it is desirable for the mean-value manifold representation {M_ij | j in N(i)} in 3D to satisfy the following requirements: (a) the weights are positive, because negative weights may lead to "foldover" in the mapping [14] and therefore undesired convergence performance [41], especially for detailed and highly irregular meshes; (b) the distribution of relative magnitudes is fair, since in the bad case of min{M_ij} << max{M_ij}, min{M_ij} becomes negligible; (c) sigma_i / sum_{j in N(i)} |M_ij| is small enough to make the vector field as "weak" as possible (at best zero), since one of our main goals is to reduce the magnitude of the differential coordinate vector. All these conditions effectively avoid possible degeneration or disappearance of triangles during various complex mesh operations, especially for irregular meshes.

    sum_{j in N(i)} M_ij (v_j - v_i) = sigma_i    (13)

where M is the mean-value manifold operator.

As illustrated in Fig.6(b), we now describe this formulation in detail.
Specifically, starting from a vertex v_i, we look for a new point v'_i related to v_i along the normal direction n_i:

    v'_i - v_i = lambda_i n_i    (14)

where lambda_i is a factor that decides the length of v_i v'_i.

Since the mean curvature flow direction xi_i lies exactly in the linear space spanned by the normals of the incident triangles, we set the normal direction n_i = xi_i / |xi_i|:

    xi_i = sum_{j in N(i)} w^xi_ij (v_j - v_i)    (15)

    w^xi_ij = cot(theta_ij) + cot(gamma_ij)    (16)

where w^xi_ij is the mean curvature flow coefficient [38,42], and theta_ij and gamma_ij are the angles shown in Fig.6(b).

Finally, we encode the new point v'_i with mean-value weights:

    sum_{j in N(i)} w_ij (v_j - v'_i) = sigma_i    (17)

where w_ij is the mean-value coefficient in (12).

Thus, vertex v_i's mean-value manifold representation can be rewritten as:

    sum_{j in N(i)} ( w_ij - lambda_i (sum_{k in N(i)} w_ik) w^xi_ij ) (v_j - v_i)
        = sum_{j in N(i)} M_ij (v_j - v_i) = sigma_i    (18)

where M_ij is the mean-value manifold operator, and the variable lambda_i has been scaled by the normalization factor |xi_i| of the normal component. Note that in the 2D planar case, the mean-value manifold operator M_ij reduces to the mean-value coordinate w_ij.

Fig.7. The mesh editing results for various models: the dinosaur model, the feline model, the armadillo model, and the lion model. (a) illustrates the metaphor. The red region indicates fixed global constraints. The blue region indicates the manipulation handle.

Fig.8. The input model (a) has irregular sampling quality (leftmost column). The cot-Laplacian iterative editing framework produces a poor deformation result, as shown in (c) [41]. In contrast, our mean-value manifold operator achieves a satisfying result.

From (18), it can be seen that we need to find a suitable single-variate parameter lambda_i that satisfies the above constraint conditions. In fact, this can be achieved easily by a dynamic search along the normal direction. Note that lambda_i can be positive, zero, or negative.

Because it is just a single-variate search, this optimization process is very fast.
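One possible reading of this single-variate search (our sketch, not the authors' code): scan candidate values of lambda_i along the normal, keep only candidates whose weights M_ij from (18) stay positive (requirement (a)), and pick the one with the smallest residual sigma_i from (13). The search range and grid below are arbitrary choices.

```python
import numpy as np

def angle(a, b):
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    return np.arccos(np.clip(a @ b, -1.0, 1.0))

def mean_value_manifold(vi, ring):
    """Search lambda_i so that M_ij = w_ij - lambda_i (sum_k w_ik) w^xi_ij
    stays positive while the residual sigma_i of eq. (13) is minimized."""
    n = len(ring)
    w, wxi = np.zeros(n), np.zeros(n)
    for j in range(n):
        p, q = ring[(j - 1) % n], ring[(j + 1) % n]
        e = ring[j] - vi
        # mean-value weights, eq. (12)
        w[j] = (np.tan(angle(e, q - vi) / 2) +
                np.tan(angle(p - vi, e) / 2)) / np.linalg.norm(e)
        # cotangent weights, eq. (16): angles opposite edge (vi, ring[j])
        wxi[j] = (1.0 / np.tan(angle(vi - p, ring[j] - p)) +
                  1.0 / np.tan(angle(vi - q, ring[j] - q)))
    edges = ring - vi
    xi = (wxi[:, None] * edges).sum(axis=0)    # mean curvature flow, eq. (15)
    s0 = (w[:, None] * edges).sum(axis=0)      # plain mean-value residual
    W, norm_xi = w.sum(), np.linalg.norm(xi)
    best_lam, best_res = 0.0, np.linalg.norm(s0)
    for lam in np.linspace(-1.0, 1.0, 4001):   # dynamic search along the normal
        M = w - lam / norm_xi * W * wxi        # candidate weights, eq. (18)
        if np.any(M <= 0):                     # requirement (a): positivity
            continue
        res = np.linalg.norm(s0 - lam * W * xi / norm_xi)
        if res < best_res:
            best_lam, best_res = lam, res
    return best_lam, best_res, np.linalg.norm(s0)

vi = np.array([0.0, 0.0, 0.25])                # a vertex lifted off its ring
ring = np.array([[1.0, 0.0, 0.05], [0.5, 0.9, -0.1], [-0.4, 0.8, 0.08],
                 [-1.1, 0.1, -0.05], [-0.5, -0.9, 0.1], [0.6, -0.8, -0.04]])
lam, res, res0 = mean_value_manifold(vi, ring)
print(res, "<=", res0)  # sigma_i is never worse than the plain mean-value vector
```

In the paper this search runs once per vertex on the original mesh, and its result is fixed and reused throughout the iterative editing described below.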
In our tests, compared with the Laplacian coordinates, the magnitude of the mean-value manifold vector sigma_i is usually "weakened" (reduced) to about 1/10.

Now, we describe the mesh editing algorithm based on the mean-value manifold representation. Mathematically, the differential reconstruction of the resulting mesh can be formulated as the following energy minimization:

    E(V) = || M V - sigma(V) ||^2 + sum_{i in C_m} w^2 || v_i - c_i ||^2    (19)

where M is the differential operator matrix (M_ij) (n x n, with n the number of vertices of the mesh) constructed from the original mesh before editing; sigma(V) are the differential coordinate vectors; and C_m is the set of global constraints.

Solving the quadratic minimization problem in (19) results in a sparse linear system:

    A V = [ M ; w (I_{m x m} | Theta) ] V = [ sigma_d(V) ; w C_d ] = b    (20)

where d in {x, y, z} and Theta is a zero block padding the constraint rows. Note that this system has full rank and thus has a unique solution in the least-squares sense, which can be obtained as V = (A^T A)^{-1} A^T b.

Then, similar to [41], we further perform a simple and fast iteration process for better results by automatically adjusting the weakened vector field (our editing method is therefore an iterative linear system). The iteration process pre-computes the factorization of the system matrix A once, and then iteratively performs efficient updating of the differential coordinates during editing. Note that during the iterative process M is fixed, and therefore A is also fixed. That is, we only need to solve for lambda_i on the original mesh once and keep it fixed.

Each iteration includes two steps: updating the vertex positions and updating the differential coordinates. In the first step, we use the current differential coordinates to compute the vertex positions. That is, we enforce the handle constraints and update the vertex positions so that the rotation-invariant information (e.g., the local parameterization) encoded in M stays similar to that of the original mesh. In the second step, we update the differential coordinates to match the current deformed mesh.
In order to preserve the original feature sizes, we keep the magnitudes of the differential coordinates unchanged by performing normalization. To conclude, the two-step iteration minimizes the change of rotation-invariant information (e.g., parameterization distortion) while keeping the rotation-variant features (which have been weakened by the mean-value manifold operator) similar to those of the original mesh.

The iterations stop when the maximum ratio of the changes in vertex positions between two successive iterations falls below a certain threshold. Usually very few iterations (t < 5 in our experiments, compared with t ≈ 20 in [41]) suffice to converge and achieve visually satisfying deformed results. From our analysis, the level of nonlinearity of a vertex depends on the ratio of the magnitude of its differential coordinate to the support size of its one-ring neighbors. The iterative process is faster and more stable (see Fig.8) when the vertices have small ratios and optimized (e.g., positive) weights. Though existing weighting schemes (e.g., the cotangent Laplacian) address detail preservation by taking the irregularity of the sampling into account, the level of nonlinearity still cannot be well controlled; as a result, they may not be robust under some complex conditions. Our mean-value manifold editing method, on the other hand, is designed to put the detail information of the local surface into the rotation-invariant component as much as possible, while the remaining rotation-variant features are weakened (to zero or near-zero) as much as possible by the mean-value manifold operator.
For those near-zero features, we further keep the magnitudes of the differential coordinates unchanged in order to preserve the original feature sizes.

5 More Applications of Model Transduction

Besides retargetting pose between different meshes, the model transduction method has further applications in shape modeling and processing, such as pose correction after various mesh editing operations, and skeleton-free deformation animation based on 3D Mocap data.

5.1 Additional Application I: Pose Correction of 3D Models after Various Editing Operations

We first describe how to implement pose correction for a 3D model after various editing operations. The motivation originates from the fact that, although various mesh editing methods have been presented, to our knowledge few techniques have been proposed to correct existing unsatisfactory results produced by those deformation methods. In this subsection, we introduce a general refinement approach that requires no knowledge of the actual method used to edit the original shape. Our pose correction method can be taken as an effective post-processing step toward a better deformation result.

The model transduction approach can be applied to pose correction straightforwardly. Fig.9(b) shows an initially deformed result generated by the IK (inverse kinematics) skeletal technique in MAYA. It can be noticed that some geometric details of the triceratops model are lost and visual artifacts can be found, especially in the leg parts. Nevertheless, the mesh in Fig.9(b) still contains the overall pose information (relative to the original mesh in Fig.9(a)). Note that various mesh editing methods usually do not change the topology of the mesh.

592 J. Comput. Sci. & Technol., May 2010, Vol.25, No.3

Fig.9. (a) The original triceratops model. (b) An initially deformed result generated by the IK (inverse kinematics) skeletal technique in MAYA; some geometric details of the triceratops model are lost and some artifacts can be found, especially in the leg parts. With our approach, these artifacts are successfully corrected and the satisfactory result (d) is obtained. (c) The intermediate result.

Since the topology of the mesh is unchanged, there naturally exists a consistent correspondence between the original and deformed meshes. Thus, similar to the transduction process, an intermediate nonconforming "mesh" is first generated by extracting the rigid components from the original mesh and mapping them onto the initially deformed mesh. This nonconforming mesh is then pieced together to form a 2-manifold mesh while satisfying both the translating constraint and the differential constraint. As a result, the corrupted mesh is successfully corrected and a better deformation result is achieved (as shown in Fig.9(d)).

5.2 Additional Application II: Skeleton-Free Deformation Animation with 3D Mocap Data

Another application of our transduction method is skeleton-free deformation animation based on 3D Mocap data. By using our skeleton-free skinning technique, the overall pose of marker points from motion capture (Mocap) is used to directly drive a 3D avatar model, without needing to build skeleton structures for these points. Moreover, the computational complexity of our technique is independent of the number of markers, as the markers just serve as the space constraints of the solution system in a least-squares sense.

Fig.10. Our stereo-vision motion capture system.

Although the skeleton structure contains abundant pose information and provides intuitive controls [5-7], defining and manipulating a skeleton structure for a 3D model, which is usually represented by a triangular mesh, is not a trivial task [43]. In the motion retargetting scenario, it is usually cumbersome to construct the skeleton structure for a large number of marker points. Moreover, the number of markers used in Mocap often varies across scenes, and so does the skeleton structure built from those markers. In addition, many 3D objects do not have obvious skeleton structures, or their metamorphoses cannot be described in terms of a skeleton.

The skeleton-free retargetting method can effectively overcome the above drawbacks. In the following four steps, we discuss how to use our transduction technique to drive a 3D mesh model without needing to build skeleton structures for the Mocap data.

1) Given a mesh model and the Mocap data, the user first builds a correspondence between the markers of the Mocap data and the counterpart vertices on the mesh. Note that these feature markers are typically placed on the subject at anthropometric landmarks, such as the shoulders, elbows and wrists. The correspondence provides the landmark positions for a subset of the model's vertices in each frame of the motion sequence. Therefore, we adopt the markers' positions as the spatial constraints of our mesh deformation system.

2) Then we generate an initial result using some mesh deformation technique. Here, we adopt the mean-value manifold deformation method proposed in Section 4.
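Steps 1) and 2) can be sketched as a per-frame constrained solve. This is a toy illustration with assumed inputs: a 4-vertex chain stands in for the avatar mesh, `marker_to_vertex` and the frame data are hypothetical, and a uniform Laplacian with a dense least-squares solve stands in for the mean-value manifold deformation of Section 4.

```python
import numpy as np

# Hypothetical marker-driven loop: each Mocap frame supplies marker
# positions that act as the spatial constraints of the least-squares
# deformation solve (toy chain, uniform Laplacian as stand-in operator).
n = 4
V0 = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
M = np.zeros((n, n))
for i in range(n):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
    M[i, i] = 1.0
    for j in nbrs:
        M[i, j] = -1.0 / len(nbrs)
sigma = M @ V0                                   # rest differential coords

marker_to_vertex = {"shoulder": 0, "wrist": 3}   # step 1: user-built mapping
frames = [                                       # hypothetical frame data
    {"shoulder": [0.0, 0.0], "wrist": [3.0, 0.5]},
    {"shoulder": [0.0, 0.0], "wrist": [2.8, 1.2]},
]

w = 10.0
animation = []
for frame in frames:                             # step 2: initial solve
    idx = [marker_to_vertex[m] for m in frame]
    rows = np.zeros((len(idx), n))
    for k, i in enumerate(idx):
        rows[k, i] = 1.0
    A = np.vstack([M, w * rows])
    b = np.vstack([sigma, w * np.array([frame[m] for m in frame])])
    animation.append(np.linalg.lstsq(A, b, rcond=None)[0])
```

Because the markers enter only as extra constraint rows, the cost of each solve is indeed independent of how many markers a scene provides.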
Note that in the case of the overall deformation of human motion, the user needs neither to specify the desired region of interest (ROI) nor to adjust the transformation of the handle frame to the transformation of the handle position as in [21].

Note that artifacts may appear in the initially deformed mesh M̃ because of incorrect vertex/marker correspondences, large deformations, or exaggerated movements. So in the following steps, we need to adjust the initial result, which has already captured the overall pose information.

3) In the third phase, we generate an intermediate "mesh" M_temp, which plays the same role as the pose mesh (Fig.2(d)) in model transduction.

4) Having obtained the intermediate mesh M_temp, we piece together the triangles in M_temp according to the differential constraint and the translating constraint. The final 2-manifold mesh M′ is produced.

Thus, after the above four steps, the final driven result is generated. Fig.11 shows an example in which the identical Mocap data is retargetted to different

characters, a woman and a man, respectively.

Fig.11. The identical Mocap data (a) is retargetted to two different 3D characters, a woman (b) and a man (c).

6 Results and Discussion

The model transduction results are demonstrated in Figs. 1, 2, and 12. It can be seen that even though the reference source mesh is absent, our method still successfully retargets both the gross skeletal structure and subtle skin deformations, and is effective for large deformations. Fig.1(a) and Fig.1(b) give a comparison between deformation transfer and our method, transferring a cat's pose to a lion. As discussed in Section 1, the limitation of deformation transfer is that the source and target reference meshes must have the same kinematic reference pose, since the reference target mesh T reproduces the change in shape induced by the source deformation between S and S′. Only when the two reference poses are similar can this kind of transferring operation be valid. Our method successfully overcomes this drawback. Another advantage of model transduction is that, because surface detail preservation is handled explicitly (see (3)), model transduction can achieve better results than deformation transfer when visual artifacts already exist on the source deformed model.

As for the limitations of our method, similar to deformation transfer [8,10], model transduction is currently designed for the case where there is a similar semantic correspondence between the source and target meshes. Our method may not yield convincing results for models with very different semantics (e.g., one animal with a tail versus another animal without a tail, or very short legs versus very long legs).

Besides pose correction after various mesh editing operations (Fig.9), our transduction method is also used for skeleton-free deformation animation based on 3D Mocap data. The motion data are firstly obtained by our stereo-vision motion capture system (Fig.10), and then formatted into TRC point-format files. The sample rate of our Mocap system is up to 100 Hz. This high sampling rate is advisable when fast motions are captured. Then the motion capture data are input into our 3D skeleton-free deformation animation program. We do not construct the skeleton structure for the Mocap data. In Fig.11, the identical Mocap data is retargetted to different characters, a woman and a man, respectively.

Fig.12. With the model transduction method, (a) an old man model directly imitates the expression of a young man model, and (b) a muscular man model directly imitates the fetus model's pose.

We also test our model correspondence method and model deformation method, respectively. First, from Fig.3, it can be seen that our mean-value manifold cross-parameterization scheme successfully establishes a shape-preserving correspondence between input surfaces, such as the thread-like stripes on the lion's legs and belly. In this example, eighteen markers are provided by the user to serve as the global constraints. Fig.4 shows a few snapshots from the morphing sequence of two models (a woman's face and a man's face). Thanks to the well-established cross-parameterization, the changes of shape are gradual and a natural, visually appealing result is obtained.

Then, we showcase the detail-preserving mesh editing results in Fig.7.
Meanwhile, Fig.7(a) illustrates the editing metaphor. In our system, point handles (Fig.7(a)) are supported to provide a simple interface, so that there is no need for the user to specify the orientations (local frames) of the handles. That is, the local orientation at a point handle is automatically decided by the system. Owing to the nice properties of the mean-value manifold operator, our editing system is effective and stable, and visually pleasing deformation results are achieved.

Our transduction method is numerically efficient, because the solution to the optimization problem can be obtained by quickly solving a sparse linear system, and

the linear system is separable in the three coordinates of the vertices, which reduces the system's scale to 1/3. We first compute the factorization of the normal equations and then find the solution by back-substitution. With a sparse LU decomposition solver [44], for example, meshes of 5K/14K/20K vertices require only 0.281/0.829/1.407 seconds for factorization and 0.008/0.016/0.032 seconds for back-substitution on an Intel P4/3.0 GHz. Moreover, for very large models, running the algorithm first on a simplified model and then using the hierarchical correspondence between the simplified mesh and the original mesh can significantly reduce the running times.

7 Conclusions

This paper presents an effective method for pose transfer between triangle meshes. Our approach is purely mesh-based. Using model transduction, the pose information is obtained from the source model and directly retargetted to the target model, without the need to build skeleton configurations or to extract the source deformation with an extra reference source mesh. Experimental results show that our method can successfully transfer both complex skeletal structures and subtle skin deformations. We also demonstrate two further applications of the model transduction method: pose correction after various mesh editing operations, and skeleton-free deformation animation based on 3D Mocap data. Furthermore, we propose the novel mean-value manifold operator to perform shape-preserving cross-parameterization and mesh deformation as the components of model transduction. The results show that our methods are effective and efficient for common applications.

Acknowledgment We would like to thank Dr. Yong Wang for his valuable advice.

References

[1] Gleicher M. Retargeting motion to new characters. In Proc. SIGGRAPH, Orlando, USA, July 19-24, 1998, pp.33-42.
[2] Lee J, Shin S Y. A hierarchical approach to interactive motion editing for human-like figures.
In Proc. SIGGRAPH, Los Angeles, USA, Aug. 8-13, 1999, pp.39-48.
[3] Popovic Z, Witkin A P. Physically based motion transformation. In Proc. SIGGRAPH, Los Angeles, USA, Aug. 8-13, 1999, pp.11-20.
[4] Park S I, Hodgins J K. Capturing and animating skin deformation in human motion. ACM Transactions on Graphics, 2006, 25(3): 881-889.
[5] Magnenat-Thalmann N, Laperrière R, Thalmann D. Joint-dependent local deformations for hand animation and object grasping. In Proc. Graphics Interface, Edmonton, Canada, June 6-10, 1988, pp.26-33.
[6] Lewis J P, Cordner M, Fong N. Pose space deformations: A unified approach to shape interpolation and skeleton-driven deformation. In Proc. SIGGRAPH, New Orleans, USA, July 23-28, 2000, pp.165-172.
[7] Kry P G, James D L, Pai D K. EigenSkin: Real time large deformation character skinning in hardware. In Proc. ACM SIGGRAPH Symposium on Computer Animation, San Antonio, USA, July 21-22, 2002, pp.153-160.
[8] Sumner R W, Popović J. Deformation transfer for triangle meshes. ACM Trans. Graphics, 2004, 23(3): 399-405.
[9] Noh J, Neumann U. Expression cloning. In Proc. SIGGRAPH, Los Angeles, USA, Aug. 12-17, 2001, pp.277-288.
[10] Zayer R, Rössl C, Karni Z, Seidel H P. Harmonic guidance for surface deformation. Eurographics, 2005, 24(3): 601-609.
[11] Shi X, Zhou K, Tong Y, Desbrun M, Bao H, Guo B. Mesh puppetry: Cascading optimization of mesh deformation with inverse kinematics. ACM Trans. Graph., 2007, 26(3): 81.
[12] Vapnik V N. The Nature of Statistical Learning Theory. New York: Springer-Verlag, 2000.
[13] Praun E, Hoppe H. Spherical parametrization and remeshing. In Proc. SIGGRAPH, San Diego, USA, July 27-31, 2003, pp.340-349.
[14] Floater M S. Mean value coordinates. Computer Aided Geometric Design, 2003, 20(1): 19-27.
[15] Kraevoy V, Sheffer A. Cross-parameterization and compatible remeshing of 3D models. ACM Trans. Graphics, 2004, 23(3): 861-869.
[16] Wu H Y, Pan C, Yang Q, Ma S.
Consistent correspondence between arbitrary manifold surfaces. In Proc. ICCV, Rio de Janeiro, Brazil, Oct. 14-20, 2007.
[17] Allen B, Curless B, Popović Z. The space of human body shapes: Reconstruction and parameterization from range scans. In Proc. SIGGRAPH, San Diego, USA, July 27-31, 2003, pp.587-594.
[18] Nielson G M, Zhang L Y, Lee K, Huang A. Spherical parameterization of marching cubes isosurfaces based upon nearest neighbor coordinates. Journal of Computer Science and Technology, 2009, 24(1): 30-38.
[19] Sederberg T W, Parry S R. Free-form deformation of solid geometric models. In Proc. SIGGRAPH, Dallas, USA, Aug. 18-22, 1986, pp.151-160.
[20] Sorkine O, Lipman Y, Cohen-Or D, Alexa M, Rössl C, Seidel H P. Laplacian surface editing. In Proc. Eurographics Symposium on Geometry Processing, Nice, France, July 8-10, 2004, pp.179-188.
[21] Yu Y, Zhou K, Xu D, Shi X, Bao H, Guo B, Shum H Y. Mesh editing with Poisson-based gradient field manipulation. ACM Trans. Graphics, 2004, 23(3): 644-651.
[22] Sheffer A, Kraevoy V. Pyramid coordinates for morphing and deformation. In Proc. 3DPVT, Thessaloniki, Greece, Sept. 6-9, 2004, pp.68-75.
[23] Lipman Y, Sorkine O, Levin D, Cohen-Or D. Linear rotation-invariant coordinates for meshes. In Proc. SIGGRAPH, Los Angeles, USA, July 31-Aug. 4, 2005, pp.479-487.
[24] Angelidis A, Wyvill G, Cani M P. Sweepers: Swept deformation defined by gesture. Graphical Models, 2006, 68(1): 2-14.
[25] Botsch M, Pauly M, Rössl C, Bischoff S, Kobbelt L. Geometric modeling based on triangle meshes. In SIGGRAPH Courses, Boston, USA, July 30-Aug. 8, 2006.
[26] Sifakis E, Shinar T, Irving G, Fedkiw R. Hybrid simulation of deformable solids. In Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, USA, Aug. 2-4, 2007, pp.81-90.
[27] Mezger J, Thomaszewski B, Pabst S, Straßer W. Interactive physically-based shape editing. In Proc.
ACM Symposium on Solid and Physical Modeling, New York, USA, June 2-4, 2008, pp.79-89.
[28] Xu W W, Zhou K. Gradient domain mesh deformation — A survey. Journal of Computer Science and Technology, 2009, 24(1): 6-18.

[29] Zhao Y, Liu X G, Peng Q S, Bao H J. Rigidity constraints for large mesh deformation. Journal of Computer Science and Technology, 2009, 24(1): 47-55.
[30] Cohen-Or D. Space deformations, surface deformations and the opportunities in-between. Journal of Computer Science and Technology, 2009, 24(1): 2-5.
[31] Huang J, Shi X, Liu X, Zhou K, Wei L Y, Teng S H, Bao H, Guo B, Shum H Y. Subspace gradient domain mesh deformation. ACM Transactions on Graphics, 2006, 25(3): 1126-1134.
[32] Botsch M, Pauly M, Gross M, Kobbelt L. PriMo: Coupled prisms for intuitive surface modeling. In Proc. Eurographics Symposium on Geometry Processing, Cagliari, Italy, June 26-28, 2006, pp.11-20.
[33] Sorkine O, Alexa M. As-rigid-as-possible surface modeling. In Proc. Eurographics Symposium on Geometry Processing, Barcelona, Spain, July 4-6, 2007, pp.109-116.
[34] Kraevoy V, Sheffer A. Mean-value geometry encoding. International Journal of Shape Modeling, 2006, 12(1): 29-46.
[35] Lipman Y, Cohen-Or D, Gal R, Levin D. Volume and shape preservation via moving frame manipulation. ACM Transactions on Graphics, 2007, 26(1): 5.
[36] Mount D M, Arya S. ANN: A library for approximate nearest neighbor searching, version 1.1.1. Aug. 4, 2006, http://www.cs.umd.edu/∼mount/ANN/.
[37] Botsch M, Sorkine O. On linear variational surface deformation methods. IEEE Transactions on Visualization and Computer Graphics, 2007, 14(1): 213-230.
[38] Pinkall U, Polthier K. Computing discrete minimal surfaces and their conjugates. Experimental Mathematics, 1993, 2(1): 15-36.
[39] Hormann K, Floater M S. Mean value coordinates for arbitrary planar polygons. ACM Transactions on Graphics, 2006, 25(4): 1424-1441.
[40] Roweis S, Saul L. Nonlinear dimensionality reduction by locally linear embedding. Science, 2000, 290: 2323-2326.
[41] Au O K C, Tai C L, Liu L, Fu H. Dual Laplacian editing for meshes.
IEEE Transactions on Visualization and Computer Graphics, 2006, 12(3): 386-395.
[42] Desbrun M, Meyer M, Schröder P, Barr A H. Implicit fairing of irregular meshes using diffusion and curvature flow. In Proc. SIGGRAPH, Los Angeles, USA, Aug. 8-13, 1999, pp.317-324.
[43] Au O K C, Tai C L, Chu H K, Cohen-Or D, Lee T Y. Skeleton extraction by mesh contraction. ACM Trans. Graphics, 2008, 27(3): 44.
[44] Toledo S. TAUCS: A library of sparse linear solvers, version 2.2. Tel Aviv University, Sept. 4, 2003, http://www.tau.ac.il/∼stoledo/taucs.

Huai-Yu Wu is now working at Peking University, Key Laboratory of Machine Perception (MOE), School of Electronics Engineering and Computer Science. He received the Ph.D. degree in computer science from the Institute of Automation, Chinese Academy of Sciences, Beijing, in 2008. He received the B.E. and M.E. degrees from the Beijing University of Aeronautics and Astronautics in 2000 and 2003, respectively. His current research interests fall in the fields of geometric modeling and processing, interactive computer graphics, and computer vision.

Chun-Hong Pan received his B.S. degree in automatic control from Tsinghua University, Beijing, China, in 1987, his M.S. degree in 1990, and his Ph.D. degree in pattern recognition and intelligent systems from the Institute of Automation, Chinese Academy of Sciences, in 2000. Now he is a professor at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. His research interests lie in computer vision, pattern recognition, and remote sensing.

Hong-Bin Zha received the B.E. degree in electrical engineering from the Hefei University of Technology, China, in 1983, and the M.S. and Ph.D. degrees in electrical engineering from Kyushu University, Japan, in 1987 and 1990, respectively. After working as a research associate at Kyushu Institute of Technology, he joined Kyushu University in 1991 as an associate professor.
He was also a visiting professor in the Centre for Vision, Speech, and Signal Processing, Surrey University, United Kingdom, in 1999. Since 2000, he has been a professor at the State Key Laboratory of Machine Perception, Peking University, China. His research interests include computer vision, digital geometry processing, and robotics. He has published more than 160 technical publications in journals, books, and international conference proceedings. He received the Franklin V. Taylor Award from the IEEE Systems, Man, and Cybernetics Society in 1999.

Song-De Ma is a professor at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. He was the president of the Institute of Automation, Chinese Academy of Sciences, in 1996-2000. He was the former vice-minister of the Ministry of Science and Technology (MOST) of China in 2000-2006. His research interests include computer vision, computer graphics, pattern recognition, and robotics.