
J Sign Process Syst
DOI 10.1007/s11265-008-0182-x

Using Articulated Models for Tracking Multiple C. elegans in Physical Contact

Kuang-Man Huang & Pamela Cosman & William R. Schafer

Received: 19 December 2007 / Revised: 17 March 2008 / Accepted: 3 April 2008
© 2008 Springer Science + Business Media, LLC. Manufactured in The United States

Abstract  We present a method for tracking and distinguishing multiple C. elegans in a video sequence, including when they are in physical contact with one another. The worms are modeled with an articulated model composed of rectangular blocks, arranged in a deformable configuration represented by spring-like connections between adjacent parts. Dynamic programming is applied to reduce the computational complexity of the matching process. Our method makes it possible to identify two worms correctly before and after they touch each other, and to find the body poses for further feature extraction. All joint points in our model can also be considered to be pseudo skeleton points of the worm body. This solves the problem that a previously presented morphological skeleton-based reversal detection algorithm fails when two worms touch each other. The algorithm has many applications in the study of physical interactions between C. elegans.

Keywords  Articulated model . C. elegans . Computer vision . Dynamic programming . Image processing . Model matching

K.-M. Huang (*) : P. Cosman
Department of Electrical and Computer Engineering,
University of California at San Diego,
La Jolla, CA 92093-0407, USA
e-mail: houston21@hotmail.com

W. R. Schafer
Cell Biology Division,
MRC Laboratory of Molecular Biology at Cambridge,
Cambridge CB2 0QH, UK

1 Introduction

The nematode Caenorhabditis elegans is widely used for studies of nervous system function and behavior. The C. elegans nervous system has a small number of neurons, all of which have been individually identified and characterized. Moreover, the ease of genetic manipulation in these animals makes it straightforward to isolate mutant strains with abnormalities in behavior and rapidly identify the mutant gene by molecular cloning. However, in order to rigorously study the relationship between genes and behavior in C. elegans, precise quantitative assays for behaviors such as locomotion, feeding and egg-laying are required.
Because some of these behaviors occur over long time scales that are incompatible with real-time scoring by a human observer, automated systems consisting of a tracking microscope and image processing software have been developed and used to follow and analyze animals. Some of these systems [1–4] perform tracking of individual worms at high magnification, while others have been designed to track multiple worms at lower magnification [5, 6].

Both types of system currently in use have disadvantages for the collection of behavioral data. Single-worm systems can provide a considerable amount of information about each animal that is recorded, but since statistically significant characterization of any worm type requires the analysis of multiple animals, collecting data one animal at a time is often frustratingly slow. On the other hand, multiple-worm recordings do not typically provide as much information as single-worm recordings due to their lower magnification. In addition, in most existing multi-worm systems, any time two individuals touch, segmentation of separate animals is difficult and so their individual identities are lost by the system. When the animals separate, the system is unable to determine the correspondence between individuals before and after touching. This is the problem we solve in this paper. The inability to separately segment and track individual animals when they touch seriously limits the ability of a multi-worm system to characterize the behavior of an individual in a population over time. Moreover, some behaviors of significant interest to researchers, such as mating and social feeding [6, 7], by their very nature involve physical interaction between animals. For any automated system to be useful in characterizing these behaviors, it is essential that the position and body posture of a worm can be followed during and after physical contact with another animal.

Here we describe a new method for tracking multiple C. elegans that makes it possible to accurately resolve the individual body postures of two worms in physical contact with one another by using a modeling algorithm. Model matching algorithms can be roughly divided into two categories [8, 9]. The first class is "contour based", which represents objects in terms of their boundaries. One popular approach is called active contours or "snakes", which deform elastically to fit the contour of the target structure [10]. Another method uses combinations of trigonometric functions with variable coefficients to represent objects [11, 12]. The second class consists of "appearance-based" approaches. In this case a model is used to simulate the complete appearance (shape, color, texture, etc.) of the target object in the image. Bajcsy and Kovacic [13] describe a volume model that deforms elastically to cover the object region. In [14], a model composed of a collection of parts is used to represent objects in terms of a constellation of local features. All parts in this model are constrained with respect to a central coordinate system. There have been other part-based modeling algorithms. Fischler and Elschlager [15] introduce an articulated model with all parts arranged in a deformable configuration which is represented by spring-like connections between pairs of parts. This articulated model has recently been used for tracking an individual person in videos [16, 17]. In [18], this method is further improved with an efficient algorithm for finding the best match of a person and a car in images. A number of methods to track a 3D human figure using articulated models have also been proposed. [19] uses a 3D articulated model combined with a modified particle filter to recover full articulated human body motion. [20] tracks a 3D human body with 2D contours using information from multiple cameras with different viewpoints.
[21] presents an algorithm which projects 3D motion of a figure onto an image plane instead of using multiple cameras.

Multiple object tracking has been an intensive area of research due to its many applications [22–24]. However, traditional methods fail when the objects are in close proximity or present occlusions. A snake, for example, can be used to track individual deformable moving binary objects. But when two binary objects touch each other, the boundary between them is not represented in the image data, and so a snake algorithm would likely fail without usable image information at the boundary. Tracking multiple objects separately in videos is achievable by using appearance-based models. Some methods track and separate people with occlusions by using cues such as appearance (color and texture of clothing, etc.) from frame to frame [25–28]. Some approaches are able to track multiple rigid objects in simple interactions. In [29], Khan uses particle filtering combined with a pairwise penalty function, which depends only on the number of pixels that overlap between the appearance templates associated with two targets, to track multiple ants. In [30], Qu uses a "magnetic potential" and an "inertia potential" to solve the "error merge" and "false object labeling" problems after severe occlusion of targets.

Although there have been numerous algorithms for multiple-object tracking, additional issues arise when tracking C. elegans worms. The worm tracking problem differs from previous tracking experiments in several respects: (1) the worms are not rigid bodies; they are highly deformable, (2) the actual worm body is transparent, and (3) the worm moves almost entirely in a 2D plane. The fact that the worm body is transparent means that color information is completely unavailable and even grayscale can be unreliable. Therefore we cannot use color or texture to easily distinguish the animals from the background of transparent bacteria layers. To solve this technical problem, the illumination of the microscope is chosen to make the worm body look dark and the background look bright in grayscale images. This means the grayscale images are very nearly binary, and in fact we convert them to binary as a pre-processing step. While the binary nature of the images makes it relatively easy to compute body posture features for single-worm videos, it is a difficult problem to track and distinguish touching deformable binary blobs when there is more than one object in the scene. The fact that the worm moves almost entirely in a 2D plane also makes this tracking problem somewhat different from many prior studies.
Although portions of a worm's body can cross over/under its own body or the body of another worm, the worm's body is sufficiently flat that such crossings do not involve appreciable curvature up and out of the plane of the agar plate. Thus the projection of the worm's body onto the plane of the plate has a roughly constant length.

These three significant differences make the worm-tracking problem quite unique compared to other multiple-object tracking problems, such as those involving people and cars. Prior research on tracking multiple C. elegans is limited to [31] and [32], of which the latter is a preliminary version of the research presented here. In [31], a deformable worm model and several motion patterns are used to define an overall locomotory space of C. elegans and to describe its general dynamic movements. Using a combination of "predicted energy barrier" and multiple-hypothesis tracking, multiple touching worms can be identified and separated with a high success rate. The algorithm presented in [31] focuses on the crawling mechanism, which means sudden position-shifts of the worm bodies caused by the swimming mechanism of the worm or even the movement of the stage may lead to loss of a track. Our work differs from [31] in that the algorithm presented here can accommodate both crawling and swimming mechanisms, as well as other causes of sudden shifts such as stage movement. Our work also differs in that we introduce various pixel-based, feature-based, and human-observation-based methods for evaluating the accuracy of the body matching algorithm, and we evaluate some features (angle change rate, reversals) extracted from the matched body positions.

In this paper, we also choose an appearance-based method to match the bodies of C. elegans. We combine a part-based articulated model [18, 33] with a dynamic programming algorithm to determine the correct location of the individual worm bodies. Our work accurately resolves the individual body postures of two worms in physical contact with one another, identifies them correctly before and after they touch each other, and can still maintain track of the worms when their bodies have non-crawling displacement. Furthermore, we solve the problem that the skeleton-based reversal detection algorithm in [34] fails when two worms touch each other because of the difficulty of obtaining morphological skeletons. The basic algorithms described here appeared in abbreviated form in a preliminary paper [32]. This longer version includes an expanded exposition of the algorithm, as well as completely new results (in Tables 2 and 4) and expansions to the earlier results presented in Tables 1 and 3. In particular, we now show that a parameter such as angle change rate (related to body curvature) can be extracted from the model, verifying that our algorithm accurately simulates worm body poses; we also examine the rate of movement before and after touching, and we extract parameters during the time the animals touch (reversal rate, duration of touching). Our algorithm shows good performance in the analysis of real worm videos, and should have many applications in the study of C. elegans behavior.

2 Materials and Methods

2.1 Strains and Culture Methods

Routine culturing of C. elegans was performed as described [35]. All worms analyzed in these experiments were young adults; fourth-stage larvae were picked the evening before the experiment and tracked the following morning. Exactly two experimental animals were placed on a plate and they were allowed to acclimate for 5 minutes before their behavior was analyzed. Plates for tracking experiments were prepared fresh the day of the experiment; a single drop of a saturated LB (Luria broth) culture of E. coli strain OP50 was spotted onto a fresh NGM (nematode growth medium) agar plate and allowed to dry for 30 minutes before use.

Table 1  Comparison results between automatically and manually generated models.

File name                              005   006   007   008   011   012   015   016   017   018   020
Automatically generated model against manually generated model
  Non-touching (%)                     82.7  77.8  81.1  74.7  77.8  79.6  74.1  75.3  79.8  72.1  82.4
                                       77.5  76.8  81.3  80.1  80.7  81.1  69.2  79.2  81.1  74.9  67.7
  (number of frames)                   (78)  (25)  (45)  (5)   (104) (38)  (113) (46)  (139) (144) (56)
  Touching (%)                         80.2  73.3  76.6  76.0  78.2  76.0  76.6  77.8  79.4  80.2  79.4
                                       73.8  75.3  78.4  77.2  78.7  79.2  78.4  80.7  78.6  81.9  77.7
  (number of frames)                   (122) (176) (155) (144) (55)  (94)  (64)  (51)  (61)  (55)  (143)
Automatically generated model (predicted positive; %)
                                       89.5  92.4  87.9  91.2  92.0  90.5  94.7  85.0  90.9  84.2  92.1
                                       86.2  89.3  88.5  91.5  87.5  86.5  80.5  90.9  90.2  85.4  86.4
Automatically generated model (true positive; %)
                                       83.3  82.8  89.8  75.3  68.3  76.9  70.6  77.7  82.3  83.3  83.1
                                       77.9  78.3  88.2  82.4  83.5  83.0  74.5  80.4  81.7  80.1  73.7
Manually generated model (predicted positive; %)
                                       87.1  86.2  80.9  82.4  92.4  88.4  88.2  86.6  84.9  80.7  87.5
                                       86.2  88.8  85.2  85.9  85.0  84.9  83.8  89.3  87.3  89.1  87.1
Manually generated model (true positive; %)
                                       81.8  75.5  83.9  68.8  67.6  73.9  68.0  78.5  76.8  78.6  80.1
                                       76.6  76.7  85.1  77.2  80.0  80.4  72.0  75.9  78.3  76.9  71.3

The correct percentages between the automatically generated model and the manually generated model for the two worms are listed in rows 1–3 and 4–6. Predicted positive values and true positive rates for the two worms are listed in rows 7–8 and 9–10 (automatically generated model) and rows 11–12 and 13–14 (manually generated model). Values in parentheses are the numbers of image frames.


The worms used in these experiments were npr-1(ky13) mutants. Unlike the laboratory standard N2 strain, which is a solitary feeder tending to disperse on encountering bacterial food, npr-1(ky13) mutants are social feeders that strongly aggregate together, thus providing an opportunity to study touching behavior.

2.2 Acquisition of Image Data

C. elegans locomotion was tracked with a Zeiss Stemi 2000-C microscope mounted with a Cohu high-performance CCD video camera essentially as described [4]. The microscope was outfitted for brightfield illumination from a 12 V 20 W halogen bulb reflected from a flat mirror positioned at an angle of approximately 45°. To record the locomotion of animals, an image frame of the animals was captured every 0.125 s (8 Hz) with a resolution of 640×480 pixels and then saved as AVI video files. The microscope was fixed to a magnification of 2.5× during observation. In order to focus on the goal of identifying the body postures of two worms in physical contact, each video used in this experiment contains exactly two animals. For each video, the recording is initiated when the worms are separated. The recording continues until they touch and then move apart. The length of every video is different, ranging from 30 to 178 s, because the time needed for each pair of worms to aggregate and separate is different. All acquired images are grayscale with intensity levels from 0 to 255 and contain exactly two animals of low intensity (dark) surrounded by bright background. The images are sometimes cluttered with other objects such as worm eggs, blobs of food, or irregularities in the surface of the agar. In order to save a great deal of computation and make further processing easier, local thresholding was applied to the grayscale images by using a 5×5 moving window. The center pixel inside the moving window was assigned to 1 if the mean value of the window was less than 70% of the background pixel value or the standard deviation was larger than 30% of the mean value. Otherwise, the center pixel was assigned to 0 as background [4].
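For illustration, a minimal Matlab sketch of this thresholding rule follows. It is our own sketch rather than the released code of [4]; the function name, the loop-based implementation, and the way the background estimate bgVal is supplied are assumptions.

function B = localThreshold(I, bgVal)
% Minimal sketch of the 5x5 local thresholding described above.
% I: grayscale image in [0, 255]; bgVal: estimated background intensity
% (an assumption -- the paper does not say how it is obtained per frame).
    I = double(I);
    B = zeros(size(I));
    [rows, cols] = size(I);
    for r = 3:rows-2
        for c = 3:cols-2
            win = I(r-2:r+2, c-2:c+2);      % 5x5 moving window
            m = mean(win(:));
            s = std(win(:));
            % center pixel is foreground if the window is dark relative to
            % the background, or has high variance relative to its mean
            if m < 0.7*bgVal || s > 0.3*m
                B(r, c) = 1;
            end
        end
    end
end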
2.3 Worm Model

Because the C. elegans worm body is more or less cylindrical, in this paper we model its projection as being composed of N rectangular parts with length L and width W in the ratio 2:1. To generate a more accurate and robust worm body model, the parameters N, L and W are learned from the image data. They can differ for different input videos. For a given video, let l and w be the average length and width of the worm body calculated from all non-touching frames. These values are calculated automatically using the method described in [4]. Typical values of l and w were approximately 95 and 6 pixels respectively. Then we set W = 0.9w, L = 2W, and N = round(l/L). In this experiment, W ranged from 4 to 8 pixels (L ranged from 8 to 16 pixels) and N ranged from six to nine parts. The position of each part in the image can be defined by the triple (x, y, θ), which specifies the coordinates of the center and the orientation of the part (Fig. 1a). Adjacent parts are connected by two joint points (Fig. 1b), which may coincide (Fig. 1c) but also might be chosen to not be coincident. When (x, y, θ) is determined for each of the N rectangular parts composing the worm body model, we refer to this as a worm body pose.

Figure 1  (a) One rectangular part and its parameters: L and W are the length and width of the rectangle and (x, y, θ) specifies the coordinates of the center and the orientation of the part; (b) two parts of the worm model and their joint points; (c) two parts of the worm model with the joint points coinciding.

We seek to find the best match of the worm model to the actual binary worm image data. The concept of best match incorporates both how well the rectangular parts fit the image pixel data, and also how well the rectangular parts fit worm body anatomy. By "worm body anatomy" we mean how well the parts fit with each other into a smooth worm body (for example, adjacent parts should not have large gaps between their joint points) [15, 18, 33], as will be discussed in the next paragraphs.

We begin by considering how well a rectangular part fits the image pixel data and deterministically examining the set of K most plausible pixel positions from the object (K can vary based on the different image resolutions used in different experiments). These are the positions which have the lowest match cost for placing our rectangular part. The match cost m(I, p_i) of a part p_i with 12 different possible orientation angles (15°, 30°, 45°, …, 180°) at every possible integer pixel position (x, y) can be computed by convolving the binary worm image I with a convolution kernel composed of a "match" rectangle at the corresponding orientation angle (with the same size as our rectangular part) embedded in a larger "no match" rectangle [18, 33]. The entries of this convolution kernel are defined by the following equation:

k(x, y) = 1,  if (x, y) ∈ No Match
k(x, y) = −(WL/S) exp(−(2x²/W² + 2y²/L²)),  if (x, y) ∈ Match

where S = Σ_{(x,y)∈Match} exp(−(2x²/W² + 2y²/L²)), W and L are the width and length of a rectangular part, and (x, y) are the coordinates relative to the center of the kernel (−W/2 ≤ x ≤ W/2, −L/2 ≤ y ≤ L/2). In this convolution kernel, points close to the y axis (x = 0) have larger weights. Despite the fact that we set W = 0.9w, which means that the rectangular model part has a width that is only 90% of the average width of the animal in non-touching frames, in some images it is possible that the original worm body width is slightly smaller than the width W of the rectangular part. By using this kernel, pixel positions close to the middle of the body will have a relatively smaller match cost than positions close to the edge of the body, which means the likelihood for a part to be placed along the middle line is larger than for other possible positions. By exhaustively evaluating the match cost for each integer pixel position and each of the 12 orientation angles, we generate a list of the K most plausible positions for individual rectangular body parts.

For each image frame, as will be discussed in Section 2.3.2, we generate a list of M plausible body poses, from which we would like to choose the best ones for each of the two worms. These M poses are the ones which have the lowest values of the match cost plus the deformation cost. The deformation cost measures how well each worm body pose agrees with worm body anatomy. The pairwise deformation cost is defined as follows:

d(p_m, p_n) = W_x |x_{m,n} − x_{n,m}| + W_y |y_{m,n} − y_{n,m}| + W_θ |θ_m − θ_n|    (1)

where |·| denotes absolute value, and x_{m,n}, x_{n,m}, y_{m,n} and y_{n,m} are the xy coordinates of the joint points between adjacent parts p_m and p_n (Fig. 1b). The angle θ_m is the orientation angle in degrees of the part p_m. W_x, W_y and W_θ are weights for the cost associated with a horizontal (x-direction) offset between joint points of adjacent parts, a vertical (y-direction) offset between joint points of adjacent parts, and a difference in the orientation angle between the two parts.
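A Matlab sketch of these two costs is given below. It is a minimal reading of the equations above, not the authors' released code; the two-pixel "no match" border and the use of imrotate to realize the 12 orientations are our assumptions.

function m = matchCost(I, W, L, thetaDeg)
% Match cost of Section 2.3 at every integer pixel position, for one
% orientation angle. I is the binary worm image (worm = 1).
    border = 2;                              % assumed size of the "no match" rim
    k = ones(L + 2*border, W + 2*border);    % "no match" entries cost +1
    [X, Y] = meshgrid(-(W-1)/2:(W-1)/2, -(L-1)/2:(L-1)/2);
    g = exp(-(2*X.^2/W^2 + 2*Y.^2/L^2));     % larger weight near x = 0
    S = sum(g(:));
    k(border+1:border+L, border+1:border+W) = -(W*L/S)*g;  % "match" entries
    k = imrotate(k, thetaDeg);               % one of 15, 30, ..., 180 degrees
    m = conv2(double(I), k, 'same');         % low values = plausible placements
end

function d = deformCost(jmn, jnm, thm, thn, Wx, Wy, Wt)
% Pairwise deformation cost of Eq. 1: jmn and jnm are the [x y] joint
% points of adjacent parts p_m and p_n; thm, thn their orientation angles.
    d = Wx*abs(jmn(1) - jnm(1)) + Wy*abs(jmn(2) - jnm(2)) + Wt*abs(thm - thn);
end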
The deformation cost attains its minimum value of 0 when two parts have the same orientation angle and the joint points between them coincide. In the next two sections, we show how to generate the list of M plausible body poses, which is decided by how well the model matches the object in the image and how well it lowers the deformation cost. To make finding the best match computationally efficient, we use a dynamic programming algorithm.

2.3.1 Dynamic Programming

Dynamic programming is a class of methods for solving sequential decision problems with a compositional cost structure, in which the decisions at one stage become the conditions governing the succeeding stages [36, 37]. We try to minimize the cost function defined by both the match quality of the model and the restrictions between pairs of parts [15, 38]. The model contains N rectangular parts. We suppose the general cost function of each part p_i can be expressed by the following equation:

E_i(p_i) = d(p_{i−1}, p_i) + m(I, p_i) + E_{i−1}(p_{i−1})   for i = 1 to N−1    (2)

where p_i is defined by (x_i, y_i, θ_i) as previously described, d(p_{i−1}, p_i) defined in Eq. 1 measures how much the adjacent parts p_{i−1} and p_i contribute to the deformation cost, and m(I, p_i) measures how well the part p_i matches the image I at its current position.

2.3.2 Worm Body Poses Sampling

To generate a list of body poses, we begin by taking the K/3 (x_0, y_0, θ_0) triples with the lowest match cost from the previously generated list of K positions to place our first part p_0, which is one of the two end parts (it could be either the head or the tail). The reason why the lowest match cost for placement of one single part will correspond almost surely to one of the two ends is that background pixels on three sides of the central "match" region will be correctly included in the "no match" region of the kernel, whereas for a typical part in the middle of the worm, background pixels on only two sides of the central "match" region will appropriately correspond with the "no match" region of the kernel. Using only the one-third best positions of all K positions for part p_0 slightly reduces complexity without significantly increasing the chance of generating bad worm body poses. After the part p_0, all K positions will be possible candidates for placing parts p_1 to p_{N−1}.

The cost function E_0(p_0) of each part p_0 is just the match cost at the position, because p_0 is the first part of the model. If E_0(p_0) of each p_0 is known and we define p*_{i−1}(p_i) to be the best position for the part p_{i−1} as a function of the position of the next part p_i, then the best position of the part p_0 in the input image I is:

p*_0(p_1) = arg min_{p_0} (d(p_0, p_1) + m(I, p_1) + E_0(p_0))    (3)


That is, the best position of the part p_0 can be decided given the position of its next part p_1. The minimum cost function E*_1(p_1) of the given part p_1 (minimized over all possible values of p_0) becomes:

E*_1(p_1) = min_{p_0} (d(p_0, p_1) + m(I, p_1) + E_0(p_0))    (4)

Based on the same concept, the best location of the part p_{i−1} (1 < i ≤ N−1) can be decided given the position of its next part p_i, and the minimum cost functions E*_i(p_i) follow from the same recursion.
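A minimal Matlab sketch of this recursion, under the assumption that the K candidate placements and their match costs have already been computed, might look as follows. Here pairCost is a stand-in for the deformation cost d of Eq. 1 evaluated between two candidate placements (an assumed helper, e.g. built on deformCost above), and returning a single lowest-cost chain rather than M sampled poses is a simplification.

function pose = bestChain(cand, mc, N)
% Dynamic programming over parts (Eqs. 2-4). cand is a K-by-3 list of
% candidate (x, y, theta) placements; mc(k) is the match cost of the
% k-th candidate.
    K = size(cand, 1);
    E = mc(:);                       % E_0(p_0): match cost only
    back = zeros(K, N);              % backpointers, i.e. p*_{i-1}(p_i)
    for i = 2:N
        Enew = inf(K, 1);
        for k = 1:K                  % placement of part p_i
            for j = 1:K              % placement of part p_{i-1}
                c = pairCost(cand(j,:), cand(k,:)) + mc(k) + E(j);
                if c < Enew(k)       % Eq. 4: minimize over p_{i-1}
                    Enew(k) = c;
                    back(k, i) = j;
                end
            end
        end
        E = Enew;
    end
    [~, k] = min(E);                 % best final part, then trace back
    pose = zeros(N, 3);
    for i = N:-1:1
        pose(i, :) = cand(k, :);
        if i > 1
            k = back(k, i);
        end
    end
end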


Figure 3  Block diagram of the multi-worm tracking algorithm. (Frames where the worms are separated: generate a list of 1000 body poses; find the best sequence for the first worm; remove the first worm from the images; find the best sequence for the second worm. Close/touching frames: erode the image to remove non-touching parts; subtract the eroded image from the original binary image; generate a list of 1000 body poses; find the 10 best body poses for the primary worm; for each primary worm pose, find the best body pose for the secondary worm; choose the body poses that minimize the cost function. A 5×5 Gaussian smoothing (σ = 1.4) is applied to the images.)

…overlapping with the non-filled area. If the two worms are only close to each other without touching or crossing each other, the new image will be just equal to the original binary image, because there will be no object in the eroded image.

The dynamic programming approach can make the process of finding the best sequence more computationally efficient, and therefore we use it for the portion of the video where the worms do not touch, which is the majority of the video and also the portion of the video where the body-fitting is easier. However, for touching/close frames, it is a more difficult problem to simulate the two worm body poses correctly. When worms touch, the binary foreground blob can be much larger than a typical single worm body, and therefore choosing good body poses in a frame requires not only (as in Section 2.3.2) the match cost between image blob pixels and model pixels and the deformation cost of deforming the model, but must also rely on the overlapping cost of the two worm bodies (a cost which doesn't enter into the case where the two worms are far apart), as well as the distance between the current possible pose and the previous poses (a cost which only entered into the dynamic programming in the second step, as we try to find the sequence of poses).

For this reason, we modify our approach. For each touching/close frame, one of the two worms will be chosen as the primary worm. As discussed in Section 2.3.2, we have a set of M plausible body poses for the frame, obtained using the dynamic programming approach to minimize the match cost and deformation cost. From this set, first the best H body poses are chosen to be candidates for the primary worm, based on both the match cost of the whole body pose and the distance between the pose in the current frame and the pose in the previous frame.
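The erosion and subtraction steps of Fig. 3 can be sketched in Matlab as follows. The disk-shaped structuring element and its radius are assumptions; the paper does not specify them.

% Sketch of the pre-processing for close/touching frames (Fig. 3).
B   = origBinary;                     % binary frame containing the blob
Ero = imerode(B, strel('disk', 4));   % thin single-worm regions vanish
NewB = B & ~Ero;                      % subtract eroded image from original
% If the worms are close but not touching, Ero is empty and NewB equals B,
% matching the behavior described above.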
For each of these H candidates, we find the best body pose for the secondary worm from those M poses to fill the remainder of the object, based on the overlapping cost. The overlapping cost is defined to be the sum of two terms: (1) the number of object pixels covered by both the primary worm body and the secondary worm body, and (2) the number of object pixels not covered by any worm body. Then, from those H sets, we choose the best single set of body poses for the two worms, which is the set that achieves the minimum value of the overlapping cost plus the Euclidean distance between the current pose and the previous pose for both worms. To avoid the whole result being dominated by only one worm, the assignment of the primary worm and the secondary worm alternates from one frame to the next.

As a final step for all frames, in both the separated portion and the close/touching portion of the video, we apply a two-dimensional Gaussian filter to smooth the final results of every frame. The block diagram of the whole multi-worm match process is shown in Fig. 3 and the pseudo code of the main program is given in the Appendix.

For the reversal detection discussed later, after the best match configurations of both worms are decided, we can manually assign one of the two end parts to be the head/tail part by using the original images as references. The manual assignment does not need to be done on each frame. It is done only once per video.

3 Results

The experiments were performed using Matlab on a 2.33 GHz Pentium-IV desktop computer. The algorithm was run on 29 different videos which contained 10,579 frames in total; each sequence varies from 131 to 498 frames. K, M, and H were set to 3,000, 1,000, and 10 respectively, where K is the number of sampled pixel positions, M is the number of sampled worm body poses for each image, and H is the number of candidate body poses for each worm in each frame, as previously described. The weights W_x, W_y, and W_θ in Eq. 1 were chosen in the ratio 4:4:3 based on limited subjective evaluation over a set of values. The choice of the relative weight W_θ should allow the worm body pose to have appropriate bending while still agreeing with the worm body anatomy, which can be verified by human observers. Binary images are obtained from the original images by using the thresholding algorithm from [4]. Some pictorial results are shown in Fig. 4. Images a, b and c are frames 13, 111, 134 extracted from the first video, images d, e and f are frames 86, 143, 206 from the second video, and images g, h, and i are frames 16, 63, 91 from the third video (with two worms crossing each other). In each image, the left side shows the original grayscale image and the right side shows the matching result. These examples illustrate the ability to identify two worms correctly before and after they touch each other. Their body poses are simulated and clearly distinguished during the time the two animals are touching.

Fig. 4  Nine images from three videos show the best matching configuration. Images a–c are frames 13, 111, 134 extracted from the first video, images d–f are frames 86, 143, 206 from the second video, and images g–i are frames 16, 63, 91 from the third video (with two worms crossing each other). In each image pair, the left side shows the original grayscale image and the right side shows the matching result.

First, we evaluate how well the algorithm performs on the three major goals: (1) to find the best body poses for both worms for further extraction of motion-related features, (2) to identify two worms correctly before and after they touch each other in every video, and (3) to detect reversals even when morphological skeletons are not available due to the two worms touching.

3.1 Good Estimated Pose and Motion-Related Features

In [31], tracking accuracy is evaluated using the "editing extent," which is defined as the distance between the automatically and manually detected head locations, normalized by the worm length, for those worms that are considered incorrectly segmented by the human observer. In order to emphasize that our algorithm can simulate the highly deformable nature of the worm body, we use pixel-based, feature-based and human-observer-based methods to evaluate the match quality of our algorithm. We begin the evaluation of the algorithm by comparing how well it does against a manual fit of the body model frame by frame. The algorithm was tested on 1,913 images (including 793 non-touching frames and 1,120 touching frames) randomly chosen from 11 videos with a different pair of worms in each video.

Figure 5  (a) Eight joint points chosen manually for the seven parts in this frame; (b) body pose built from manually selected joint points.

Figure 6  An example of the comparison between the automatically generated model and the manually generated model. The black area is covered by both models; the gray area is the difference between these two models.


Given an original image and the number of parts N in it, we first chose N+1 joint points (including the two end points) manually in every frame by clicking with the mouse on the image (Fig. 5a). These points are then used to compute x, y and θ of each part and to build worm body poses (Fig. 5b). This is followed by Gaussian smoothing. We compare the results from our algorithm to these manually built body poses and evaluate the accuracy by computing the correct percentage defined by the following equation:

correct percentage = N_AM / N_A

where N_AM is defined as the number of object pixels covered by both the automatically generated model and the manually generated model, and N_A is defined as the number of pixels covered by the automatically generated model. A higher value for this percentage indicates that the body poses decided by the algorithm agree more with the manually generated body poses (Fig. 6).

For frames without touching, because the two worms are separated and each worm can be easily extracted to calculate its area, we also compare our matching results against both worm bodies in the original images. We compute two different scores:

(1) predicted positive value = N_MO / N_M    and    (2) true positive rate = N_MO / N_O

where N_MO is the number of pixels covered by both the model (either manually or automatically generated) and the worm body in the original image, N_M is the number of pixels covered by the model, and N_O is the number of pixels covered by the worm body in the original binary image. The predicted positive value is an indication of the probability that a given pixel which our model says is part of the worm body actually is part of the worm body. The true positive rate tells us the percentage of the actual worm body covered by our model. The predicted positive value will decrease and the true positive rate will increase if we use models with larger sizes (for example, if we choose model part width W = average body width w instead of W = 0.9w).

All results are listed in Table 1. We notice that the correct percentages between the automatically generated model and the manually generated model all range from 72% to 83%, except in two videos (015 and 020). In these two videos, the result is better for touching frames than for non-touching frames. That is because the background is not very clear, due to unevenness in the bacteria layer or crawl tracks left by the worms in these videos, which may cause the size of the binary worm body in some images to be abnormally larger than its usual size and the predicted positive values to be very low. Predicted positive values are also over 85% and true positive rates are higher than 70% for almost all frames.
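These three scores are straightforward to compute from binary masks; a short Matlab sketch (with mask variable names of our choosing) is:

% A: automatically generated model mask; Mn: manually generated model
% mask; Mod: whichever model is being scored; O: worm body in the
% original binary image. All are logical arrays of the same size.
correctPercentage = nnz(A & Mn) / nnz(A);    % N_AM / N_A
predictedPositive = nnz(Mod & O) / nnz(Mod); % N_MO / N_M
truePositiveRate  = nnz(Mod & O) / nnz(O);   % N_MO / N_O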
In order to reduce the possibility of two worms overlapping in our results for touching frames, the width W of the model is always chosen to be 90% of the actual average body width as calculated from non-touching frames. For this reason, the model tends to be covered by the whole worm body, which will cause the predicted positive value to be generally larger than the true positive rate. The results of the comparison between manually generated models and original images are also listed in the last four rows of this table, which clearly shows that our automatic matching algorithm outperforms human observers.

The angle change rate is also computed from both the manually generated model and the model generated by our algorithm. The results are shown in Table 2. The angle change rate is defined in [1, 4] as the ratio of the average angle difference between every pair of consecutive segments connected by skeleton points. The angle change rate is an important feature characterizing body postures of mutants, and was shown in [1], for example, to be significant in distinguishing between goa-1(n1134) mutants and wild-type. In this paper, we use the joint points of the parts as our skeleton points to calculate the angle change rate. From Table 2, we see that the difference percentages of average angle change rate between our algorithm and manually generated models are lower than 10% for non-touching (separated) frames and 20% for touching (close) frames in most of the videos.

Table 2  Angle change rate verification results.

File name                        005   006   007   008   011   012   015   016   017   018   020
Automatically generated model
  Separated                      32.6  33.7  37.9  37.0  29.8  29.2  31.7  33.1  32.3  36.1  36.6
                                 33.0  30.6  31.0  39.0  30.9  35.1  34.9  29.3  32.4  34.3  41.3
  Close                          30.0  33.7  36.2  32.4  27.6  30.6  25.0  30.0  30.2  28.8  38.0
                                 31.4  30.3  31.8  33.6  33.6  32.1  34.3  24.8  29.3  32.3  35.4
Manually generated model
  Separated                      33.5  34.5  40.7  34.6  33.3  30.7  31.9  33.5  34.0  37.5  36.1
                                 36.2  32.7  32.5  40.5  29.8  34.3  38.9  24.7  34.6  34.2  43.4
  Close                          30.4  33.6  41.0  34.2  28.6  31.0  29.9  30.9  32.5  29.4  42.2
                                 30.6  29.4  32.3  34.4  30.8  34.3  31.2  24.5  33.3  31.7  38.6
Difference percentage
  Separated (%)                  2.6   2.2   6.9   6.8   10.4  4.9   0.7   1.0   5.0   3.8   1.2
                                 8.6   6.5   4.6   3.7   3.8   2.5   10.4  18.5  6.3   0.2   4.8
  Close (%)                      10.4  2.6   0.6   1.2   16.4  0.9   6.9   8.3   4.4   27.6  14.4
                                 18.2  11.4  0.6   17.8  3.4   0.2   24.7  0.9   3.8   7.9   12.3

The two numbers for each parameter are the angle change rates for the two worms.
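For reference, a sketch of how an angle change rate can be computed from the N+1 joint points of a fitted pose is given below. The exact normalization used in [1, 4] may differ; this averaged absolute angle difference should be read as an approximation.

function r = angleChangeRate(joints)
% joints: (N+1)-by-2 array of [x y] joint/pseudo-skeleton points.
% Averages the absolute orientation difference between consecutive
% segments; the precise definition in [1, 4] may normalize differently.
    seg = diff(joints);                          % N segment vectors
    ang = atan2(seg(:,2), seg(:,1)) * 180/pi;    % segment orientations (deg)
    d = abs(diff(ang));
    d = min(d, 360 - d);                         % wrap differences to [0, 180]
    r = mean(d);
end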


Table 3  Identification results and reversal verification results.

Number of tested videos                              29
Number of correct identifications                    26
Number of wrong identifications                      3
                                                     Touching     Overall
Number of reversals correctly detected with
results from our algorithm and the method in [22]    57 (98.3%)   86 (96.6%)
Number of wrong detections                           6 (9.5%)     7 (7.5%)
Number of reversals missed                           1 (1.7%)     3 (3.4%)

3.2 Correct Identification

Both worms in all videos were also tracked by a human observer to see if our method can correctly identify the worms separately after they touch. From Table 3, we see that our program can identify both worms correctly in 26 of 29 videos.

3.3 Reversals

C. elegans usually moves in a sinusoidal wave. When a worm is touched or presented with a toxic chemical stimulus, it will switch the direction of the wave, causing the animal to instantaneously crawl backward instead of forward. This backward movement is defined as a reversal. In [34], we used two skeleton points near the two ends as our reference points to decide if the worm was moving forward or backward. However, this reversal detection algorithm requires the skeleton points of each worm body, which cannot be obtained using the method in [39] when two worms touch each other. By using our modeling algorithm, after all parameters of all parts are obtained, all joint points can be considered to be pseudo skeleton points. We can use the two joint points nearest the two ends as our reference points to detect reversals. Table 3 shows that 86 reversals were correctly detected by our automatic algorithm. Of these, 57 occurred during the close/touching portion of the video. In addition, the automatic algorithm incorrectly declared seven events to be reversals (six of them were in the close/touching portion of the video). Only three actual reversals were missed (of which one was in the close/touching portion of the video). So our algorithm has a high rate of correct detections while maintaining a low rate of false alarms and false dismissals.
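One plausible realization of this reference-point test is sketched below. The noise threshold and the projection onto the head-to-tail axis are our assumptions, since the precise decision rule is the one given in [34] and is not reproduced here.

% headNow/headPrev: the joint point nearest the head end in the current
% and previous frames; tailPrev: the joint point nearest the tail end in
% the previous frame; minMove is an assumed noise threshold in pixels.
v      = headNow - headPrev;            % displacement of the head reference point
axisHT = headPrev - tailPrev;           % tail-to-head body direction
isBackward = norm(v) > minMove && dot(v, axisHT) < 0;
% A run of consecutive backward frames would then be declared a reversal.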
3.4 Analyzing Physical Contact

As described in Section 2.3.3, our algorithm can automatically locate those portions of the video where two worms are close to or touching each other. We can therefore calculate the length of time they are in physical contact. We also examine the average speed and rate of reversals from the video portions before and after the two worms touch. The results are shown in Table 4. All worms were npr-1(ky13) mutants. In Table 4, we show the data for six individual videos as well as the average values over all 29 videos in the last column. Row 2 shows the length of time the worms touch each other. The average speed and the rate of reversals before touching events are in rows 3–4 and 5–6. The average speed and the rate of reversals after touching events are in rows 7–8 and 9–10. One can observe that the animals move faster after physical contact than before.

Table 4  Experimental results from six individual videos as well as average values over all 29 videos.

File name                                 001   002   003   004   005   006   Average
Time length of two worms touching (s)     17.4  21.6  19.8  10.6  16.4  27.0  22.14
Ave speed before touching (pixel/s)       5.7   6.0   9.2   8.0   7.8   12.1  8.21
                                          6.8   3.9   8.0   10.6  6.8   5.8   8.11
Rate of reversals before touching (1/s)   0.17  0.14  0     0.19  0.06  0     0.07
                                          0.11  0.09  0.05  0.19  0     0.04  0.07
Ave speed after touching (pixel/s)        7.6   7.3   13.2  8.6   9.2   7.0   10.33
                                          9.9   11.3  12.7  7.1   10.4  9.7   10.55
Rate of reversals after touching (1/s)    0     0     0     0     0.12  0.04  0.05
                                          0     0     0     0     0.06  0     0.04

Row 2 shows the length of time the worms touch each other. The average speed and the rate of reversals before touching events are in rows 3–4 and 5–6. The average speed and the rate of reversals after touching events are in rows 7–8 and 9–10. Other than the time length of touching, the parameters listed have two numbers for each video because they represent results for the two worms in the video.


4 Conclusion

This paper presents a method that combines articulated models and dynamic programming for simulating the body poses of C. elegans in multi-worm videos. The models are composed of a number of rectangular parts arranged in a deformable configuration. For each video, we begin by using a dynamic programming algorithm to generate many worm body poses in every frame. For those portions of the video where the two worms are clearly separated, a dynamic programming algorithm is used again to find the best sequences over time for both worms. For those portions of the video where the two worms are close to or touching each other, we find the best match configuration for the two worms based on the Euclidean distances between pairs of body poses in adjacent image frames.

There are several contributions in this paper. First, the presented method allows us to identify two worms correctly before and after they touch each other in 90% of our videos. Second, we can use these models to accurately resolve the individual body postures of two worms in physical contact with one another. We note that this tracking algorithm for two worms is fully automated and requires no human annotation at any point (including no need for human annotation of the first frame). When this algorithm was combined with a previously described algorithm for detection of reversals, human annotation was required to identify head and tail once per video, which was needed to identify reversals. However, the tracking algorithm itself involves no human intervention, so it is suitable for analyzing large numbers of videos where human annotation would be tedious. We also showed that reversal behaviors of multiple worms can be accurately detected by using our model even when their bodies are in physical contact with one another.

This algorithm will provide many applications towards characterizing physical interactions between animals. Previous research on automated analysis of C. elegans videos has shown that large numbers of biologically relevant features can be automatically extracted. These features include, for example, body length and width, average speed, curvature, depth of body bends, and frequency and duration of reversals.
These features and others have been shown to beimportant <strong>in</strong> classify<strong>in</strong>g and characteriz<strong>in</strong>g many differentmutant stra<strong>in</strong>s [4]. <strong>Us<strong>in</strong>g</strong> the algorithm described <strong>in</strong> thispaper, the body poses of two worms can be identified, so thevarious features extracted <strong>in</strong> prior research <strong>for</strong> s<strong>in</strong>gle-wormvideos can now be extended to videos with two worms. Inthis paper, we exam<strong>in</strong>ed two of these features (angle changerate and reversals) <strong>in</strong> these two-worm videos. In addition,new features characteriz<strong>in</strong>g the <strong>in</strong>teractions themselves suchas average duration of bodily contact can be extracted acrossdifferent mutant stra<strong>in</strong>s and different environmental conditions.We <strong>in</strong>tend to characterize how <strong>in</strong>teractions betweenanimals affect various features of body movement andposture <strong>in</strong> future work.Acknowledgment The authors would like to thank the CaenorhabditisGenetics Center <strong>for</strong> stra<strong>in</strong>s, and K. Quast <strong>for</strong> data collection.Appendix: Pseudo Code <strong>for</strong> the Multi-Worm <strong>Track<strong>in</strong>g</strong>AlgorithmThis appendix provides pseudo code <strong>for</strong> the multi-wormtrack<strong>in</strong>g algorithm. The worm video data and source code<strong>in</strong> Matlab <strong>for</strong> our algorithms are available from the web site[40].READ l—learned length of worm bodyREAD w—learned width of worm bodyCOMPUTE W—width of rectangle as 0.9×wCOMPUTE L—length of rectangle as 2×WCOMPUTE N—number of rectangles as round(l/L)SET Index1 to 1—the <strong>in</strong>dices of subsections where twoworms are separatedSET Index2 to 0—the <strong>in</strong>dices of subsections where twoworms are close or touch<strong>in</strong>gPseudo code <strong>for</strong> divid<strong>in</strong>g the whole video <strong>in</strong>to twosubsections (close/touch<strong>in</strong>g and non-touch<strong>in</strong>g)FOR each image frame <strong>in</strong> the videoCOMPUTE the number of wormsIF the number of worms is equal to 2COMPUTE match cost at each <strong>in</strong>teger pixel positionOBTAIN K <strong>in</strong>teger pixel positions with the lowest matchcostOBTAIN M worm body poses with the lowest matchcost plus de<strong>for</strong>mation cost.COMPUTE Dc—the distance between two centroids <strong>in</strong>the current image frameREAD Dp—the distance between two centroids <strong>in</strong> theprevious image frameIF Dc ≧ l and Dp ≧ lADD the image frame to subsection of Index1ELSE IF Dc ≦ l and Dp ≧ lINCREMENT <strong>in</strong>dex2ADD the image frame to subsection of Index2ELSE IF Dc ≧ l and Dp ≦ lINCREMENT <strong>in</strong>dex1ADD the image frame to subsection of Index1END IF


Pseudo code for finding the best sequences within one non-touching subsection:

    FOR subsection = 1 to Index1
        (Finding the best sequence for one worm)
        FOR each image in the subsection
            IF the current image is the first image in the subsection
                FOR each worm body pose in the current image
                    SET the accumulated cost function to 0
                END FOR
            ELSE
                FOR each worm body pose in the current image
                    FOR each possible worm body pose in the previous image
                        READ the accumulated cost function of the worm body pose in the previous image
                        COMPUTE the Euclidean distance d between the current and the previous body poses
                        COMPUTE C = d + the match cost of the current body pose + the accumulated cost function of the previous body pose
                    END FOR
                    OBTAIN the minimum of C and the corresponding body pose in the previous image
                    SAVE the accumulated cost function of the current worm body pose as min(C)
                    SAVE the corresponding worm body pose in the previous image
                END FOR
            END IF
        END FOR
        OBTAIN the minimum overall accumulated cost function and the corresponding worm body pose in the last image
        OBTAIN the corresponding body poses in the previous image frames
        (Finding the best sequence for the second worm)
        OBTAIN the image sequence with the first worm removed
        OBTAIN the best image sequence for the second worm using the above method
    END FOR

Pseudo code for finding the best sequences within one close/touching subsection:

    FOR subsection = 1 to Index2
        FOR each image in the subsection
            SET the primary worm differently from the previous image frame
            OBTAIN the 10 best worm body poses for the primary worm
            FOR each primary worm body pose
                OBTAIN the best worm body pose for the secondary worm
            END FOR
            OBTAIN the best set of two worm body poses
        END FOR
    END FOR

    APPLY Gaussian smoothing
    SHOW the new video
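The accumulated-cost recursion above is a standard Viterbi-style dynamic program, and the sketch below shows one way to realize it in MATLAB for a single worm. It is our illustration under simplifying assumptions, not the released code of [40]: a fixed number M of candidate poses per frame, with poses{f}{j} holding the j-th candidate pose of frame f as an N-by-2 joint matrix and matchCost{f}(j) holding its match cost.

    % Illustrative sketch of the temporal dynamic program for one worm in a
    % non-touching subsection (names are ours, not those of the released code).
    function best = bestPoseSequence(poses, matchCost)
        F = numel(poses);
        M = numel(poses{1});
        acc = zeros(M, 1);               % accumulated cost is zero in the first frame
        back = zeros(M, F);              % back-pointers to the previous frame's pose
        for f = 2:F
            newAcc = inf(M, 1);
            for j = 1:M                  % candidate pose in the current frame
                for i = 1:M              % candidate pose in the previous frame
                    d = sum(sqrt(sum((poses{f}{j} - poses{f-1}{i}).^2, 2)));
                    C = d + matchCost{f}(j) + acc(i);
                    if C < newAcc(j)
                        newAcc(j) = C;
                        back(j, f) = i;
                    end
                end
            end
            acc = newAcc;
        end
        [~, j] = min(acc);               % best pose in the last frame
        best = zeros(F, 1);
        best(F) = j;
        for f = F:-1:2                   % trace the chosen sequence backwards
            best(f-1) = back(best(f), f);
        end
    end

As in the pseudo code, the routine is then rerun on the image sequence with the first worm removed to recover the best sequence for the second worm.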
References

1. Baek, J., Cosman, P., Feng, Z., Silver, J., & Schafer, W. R. (2002). Using machine vision to analyze and classify Caenorhabditis elegans behavioral phenotypes quantitatively. Journal of Neuroscience Methods, 118, 9–21.
2. Cronin, C. J., Mendel, J. E., Mukhtar, S., Kim, Y. M., Stirbl, R. C., Bruck, J., et al. (2005). An automated system for measuring parameters of nematode sinusoidal movement. BMC Genetics, 6, 5.
3. Feng, Z., Cronin, C. J., Wittig Jr., J. H., Sternberg, P. W., & Schafer, W. R. (2004). An imaging system for standardized quantitative analysis of C. elegans behavior. BMC Bioinformatics, 5, 115.
4. Geng, W., Cosman, P., Berry, C., Feng, Z., & Schafer, W. R. (2004). Automatic tracking, feature extraction and classification of C. elegans phenotypes. IEEE Transactions on Biomedical Engineering, 51, 1811–1820.
5. Dhawan, R., Dusenbery, D. B., & Williams, P. L. (1999). Comparison of lethality, reproduction, and behavior as toxicological endpoints in the nematode Caenorhabditis elegans. Journal of Toxicology and Environmental Health Part A, 58(7), 451–462.
6. de Bono, M., Tobin, D. M., Davis, M. W., Avery, L., & Bargmann, C. I. (2002). Social feeding in Caenorhabditis elegans is induced by neurons that detect aversive stimuli. Nature, 419(6910), 899–903.
7. Liu, K. S., & Sternberg, P. W. (1995). Sensory regulation of male mating behavior in Caenorhabditis elegans. Neuron, 14(1), 79–89.
8. McInerney, T., & Terzopoulos, D. (1996). Deformable models in medical image analysis: a survey. Medical Image Analysis, 1(2), 91–108.
9. Cootes, T. F., & Taylor, C. J. (2001). Statistical models of appearance for medical image analysis and computer vision. Proc. SPIE Medical Imaging, 4322, 236–248.


10. Kass, M., Witkin, A., & Terzopoulos, D. (1987). Active contour models. International Journal of Computer Vision, 1(4), 321–331.
11. Scott, G. L. (1987). The alternative snake—And other animals. 3rd Alvey Vision Conference. Cambridge, England, pp. 341–347.
12. Staib, L. H., & Duncan, J. S. (1992). Boundary finding with parametrically deformable models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(11), 1061–1075.
13. Bajcsy, R., & Kovacic, S. (1989). Multiresolution elastic matching. Computer Vision, Graphics, and Image Processing, 46, 1–21.
14. Burl, M. C., Weber, M., & Perona, P. (1998). A probabilistic approach to object recognition using local photometry and global geometry. European Conference on Computer Vision. Freiburg, Germany, June, pp. 628–641.
15. Fischler, M. A., & Elschlager, R. A. (1973). The representation and matching of pictorial structures. IEEE Transactions on Computers, 22(1), 67–92.
16. Hogg, D. (1983). Model based vision: A program to see a walking person. Image and Vision Computing, 1(1), 5–20.
17. Rohr, K. (1993). Incremental recognition of pedestrians from image sequences. IEEE Conference on Computer Vision and Pattern Recognition. New York, USA, June, pp. 9–13.
18. Felzenszwalb, P. F., & Huttenlocher, D. P. (2000). Efficient matching of pictorial structures. IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head, USA, June, pp. 66–73.
19. Deutscher, J., Blake, A., & Reid, I. (2000). Articulated body motion capture by annealed particle filtering. Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2, 126–133.
20. Kakadiaris, L., & Metaxas, D. (2000). Model-based estimation of 3D human motion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), 1453–1459.
21. Sidenbladh, H., Black, M. J., & Fleet, D. J. (2000). Stochastic tracking of 3D human figures using 2D image motion. In D. Vernon (Ed.), European Conference on Computer Vision (pp. 702–718). Dublin, Ireland: Springer LNCS 1843 (June).
22. Bar-Shalom, Y., Fortmann, T., & Scheffe, M. (1980). Joint probabilistic data association for multiple targets in clutter. Proceedings of the Conference on Information Sciences and Systems.
23. Isard, M., & MacCormick, J. (2001). BraMBLe: A Bayesian Multiple-Blob Tracker. Proceedings of the International Conference on Computer Vision, pp. 34–41.
24. MacCormick, J., & Blake, A. (2000). A probabilistic exclusion principle for tracking multiple objects. International Journal of Computer Vision, 39(1), 57–71.
25. Ioffe, S., & Forsyth, D. A. (2001). Human tracking with mixtures of trees. International Conference on Computer Vision, Vancouver, Canada, vol. 1, pp. 690–695 (July).
26. Mori, G., & Malik, J. (2002). Estimating human body configurations using shape context matching. European Conference on Computer Vision, Copenhagen, Denmark, vol. 3, pp. 666–680 (May).
27. Sullivan, J., & Carlsson, S. (2002). Recognizing and tracking human action. European Conference on Computer Vision, Copenhagen, Denmark, vol. 1, pp. 629–644 (May).
28. Mittal, A., & Davis, L. (2003). M2 tracker: A multi-view approach to segmenting and tracking people in a cluttered scene. International Journal of Computer Vision, 51(3), 189–203.
29. Khan, Z., Balch, T., & Dellaert, F. (2005). MCMC-based particle filtering for tracking a variable number of interacting targets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(11), 1805–1819.
30. Qu, W., Schonfeld, D., & Mohamed, M. (2007). Real-time interactively distributed multi-object tracking using a magnetic-inertia potential model. IEEE Transactions on Multimedia, 9(3), 511–519.
31. Roussel, N., Morton, C. A., Finger, F. P., & Roysam, B. (2007). A computational model for C. elegans locomotory behavior: application to multiworm tracking. IEEE Transactions on Biomedical Engineering, 54(10), 1786–1797.
32. Huang, K., Cosman, P., & Schafer, W. R. (2007). Automated tracking of multiple C. elegans with articulated models. IEEE International Symposium on Biomedical Imaging, pp. 1240–1243.
33. Felzenszwalb, P. F., & Huttenlocher, D. P. (2005). Pictorial structures for object recognition. International Journal of Computer Vision, 61(1), 55–79.
34. Huang, K., Cosman, P., & Schafer, W. R. (2006). Machine vision based detection of omega bends and reversals in C. elegans. Journal of Neuroscience Methods, 158, 323–336.
35. Brenner, S. (1974). The genetics of Caenorhabditis elegans. Genetics, 77, 71–94.
36. Amini, A., Weymouth, T., & Jain, R. (1990). Using dynamic programming for solving variational problems in vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(9), 855–867.
37. Angel, E., & Bellman, R. (1972). Dynamic programming and partial differential equations. Orlando, FL: Academic.
38. Lowe, D. G. (1991). Fitting parameterized three-dimensional models to images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(5), 441–450.
39. Gonzalez, R., & Woods, R. (2002). Digital image processing (2nd ed.). NJ: Prentice Hall.
40. University website: http://www.code.ucsd.edu/~pcosman/C_elegans.html.

Dr. Kuang-Man Huang was born in Taiwan in 1977. He received the B.S. degree with an award in Electrophysics from National Chiao-Tung University, Hsin-Chu, Taiwan, in 1999, and the M.S. and Ph.D. degrees in Electrical and Computer Engineering from the University of California at San Diego in 2003 and 2008, respectively.
He was a research assistant with the Cosman Lab on Information Coding as well as with the Schafer Lab in the Center for Molecular Genetics, UCSD, from 2003 to 2008.


Pamela Cosman obtained her B.S. with Honor in Electrical Engineering from the California Institute of Technology in 1987, and her M.S. and Ph.D. in Electrical Engineering from Stanford University in 1989 and 1993, respectively. She was an NSF postdoctoral fellow at Stanford University and a Visiting Professor at the University of Minnesota during 1993–1995. In 1995, she joined the faculty of the Department of Electrical and Computer Engineering at the University of California, San Diego, where she is currently a Professor and Director of the Center for Wireless Communications. Her research interests are in the areas of image and video compression and processing. Dr. Cosman is the recipient of the ECE Departmental Graduate Teaching Award (1996), a Career Award from the National Science Foundation (1996–1999), and a Powell Faculty Fellowship (1997–1998). She was a guest editor of the June 2000 special issue of the IEEE Journal on Selected Areas in Communications on 'Error-resilient image and video coding,' and was the Technical Program Chair of the 1998 Information Theory Workshop in San Diego. She was an associate editor of the IEEE Communications Letters (1998–2001), and an associate editor of the IEEE Signal Processing Letters (2001–2005). She was a senior editor (2003–2005) and is now the Editor-in-Chief of the IEEE Journal on Selected Areas in Communications. She is a member of Tau Beta Pi and Sigma Xi, and a Fellow of the IEEE. Her web page address is http://www.code.ucsd.edu/cosman/.

William Schafer received the A.B. degree in biology (summa cum laude) from Harvard University in 1986, and the Ph.D. degree in biochemistry from the University of California, Berkeley in 1991. From 1992–1995 he was a Life Sciences Research Foundation postdoctoral fellow at the University of California, San Francisco. From 1995–2006 he was on the faculty of the Division of Biological Sciences at the University of California San Diego. He is currently Programme Leader in Cell Biology at the Medical Research Council Laboratory of Molecular Biology in Cambridge. He has been a Visiting Fellow at Clare Hall, Cambridge (2004–2005), a Visiting Professor in the Department of Physics at the Ecole Normale Superieure in Paris, and is currently a Bye-fellow at Downing College, Cambridge. His research has focused on the molecular and cellular mechanisms underlying neural circuit function, and on the quantitative analysis of behavioral phenotypes. Dr. Schafer has received a Presidential Early Career Award for Scientists and Engineers, a Klingenstein Fellowship in Neuroscience, a Beckman Young Investigator Award, and a Sloan Fellowship in Neuroscience.
