
Proceedings of the First International Conference on Medical Imaging and Telemedicine
August 16-19, 2005, Wuyi Palace, China
MIT 2005
Editors: X. W. Gao, C. Tully, Q. Lin, S. Thom, H. Müller
ISBN: 1-85924-252-9


Proceedings of the First International Conference on Medical Imaging and Telemedicine (MIT 2005), August 16-19, 2005, Wuyi Mountain, China

Sponsors:
‣ European Commission (EU ICT)
‣ Fuzhou University, P.R. China
‣ Middlesex University, United Kingdom

Editors: Xiaohong W. Gao, Colin Tully, Qiang Lin, Simon Thom, Henning Müller

General Co-Chairs
Prof. Chiang Lin, Fuzhou University, China
Prof. Colin Tully, Middlesex University, UK
Dr. Simon Thom, St. Mary Hospital, Imperial College, UK

Organising Chair
Dr. Xiaohong Gao, Middlesex University, UK

Publication Co-Chairs
Dr. Henning Müller, University & Hospitals of Geneva, Switzerland
Prof. Paolo Inchingolo, Universita' di Trieste, Italy

International Programme Committee
Dr. Anil Abharath, Imperial College London, UK
Dr. Ray Adam, Middlesex University, UK
Dr. Daniel Alexander, University College London, UK
Dr. Cheik Oumar Bagayoko, Telemedicine Network in Africa (RAFT), Geneva
Dr. Richard Bayford, Middlesex University, UK
Prof. Hans Blickman, Dept. of Radiology, UMC, Netherlands
Prof. Davide Caramella, Universita' di Pisa, Italy
Dr. Adrian Carpenter, University of Cambridge, UK
Prof. Jyh-Cheng Chen, National Yang-Ming University, Taiwan
Dr. John Clark, University of Cambridge, UK
Dr. Jane Courtney, Dublin Institute of Technology, Dublin
Dr. Uwe Engelmann, German Cancer Research Centre Heidelberg, Germany
Dr. Carl Evens, Middlesex University, UK
Dr. Tim Fryer, University of Cambridge, UK
Prof. Alastair Gale, University of Derby, UK
Dr. Stylianos Hatzipanagos, King's College London, UK
Dr. Sean He, University of Technology, Sydney, Australia
Dr. Guowei Hong, University of Derby, UK
Prof. H.K. Huang, University of California San Francisco, USA
Dr. Tianzi Jiang, Institute of Automation, Chinese Academy of Sciences (CAS), China
Prof. Theodore Kalamboukis, Athens University of Economics and Business, Greece
Dr. Thomas Lehmann, Aachen University, Germany
Prof. Yen-Chun Lin, National Taiwan University of Science and Technology, Taiwan
Prof. Ronnie Ming Luo, University of Leeds, UK
Prof. Shuqian Luo, Capital University of Medical Sciences, China
Prof. Yuko Murayama, Iwate Prefectural University, Japan
Dr. Karim Nawaz, Pak Human Resources Development Organization, Pakistan
Dr. Aswini Kumar Pati, Indira Gandhi Integral Education Centre, India
Prof. Maria Petrou, University of Surrey, UK
Dr. Robert Pleass, Middlesex University, UK
Dr. Lubov Podladchikova, Rostov State University, Russia
Dr. Hanna Pohjonen, Consultancy of healthcare information systems, Finland
Dr. Guoping Qiu, University of Nottingham, UK
Dr. Rebecca Randell, Middlesex University, UK
Dr. G.S.V. RadhaKrishnaRao, Multimedia University, Malaysia
Prof. Norman Revell, Middlesex University, UK
Mrs. Janet Rix, Middlesex University, UK
Dr. Egils Stumbris, Riga Municipal Agency "Telemedicine Centre", Latvia
Prof. Yin Leng Theng, Nanyang Technological University, Singapore
Prof. Shoji Tominaga, Osaka Electro-Communication University, Japan
Dr. Federico Turkheimer, Charing Cross Hospital, UK
Dr. Tingkai Wang, London Metropolitan University, UK
Dr. John Xin, The Hong Kong Polytechnic University, China
Prof. Chu-Sing Yang, National Sun Yat-Sen University, Taiwan
Dr. Huiru (Jane) Zheng, University of Ulster, UK

ISBN: 1-85924-252-9
Copyright © 2005 Middlesex University Press, London, United Kingdom, www.mupress.co.uk.




TABLE OF CONTENTS - MIT 2005

MIT 2005 Conference Programme  v

INVITED TALKS

Medical Image Retrieval and the medGIFT Project  2
  Henning Müller, Antoine Geissbuhler
Achieving Accurate Colour Reproduction For Medical Imaging Applications  8
  M. R. Luo

MEDICAL IMAGE ANALYSIS - TECHNIQUES AND APPLICATIONS

Time-30: Atlas-Based Fuzzy Connectedness For Automatic Segmentation Of Abdominal Organs  14
  Yongxin Zhou, Jing Bai
Time-17: Skeletonizing and Automatically Detecting the Bifurcation of Blood Vessels in Digital Subtraction Angiography  20
  Haomiao Shui, Qin Li, Jian Yang, Songyuan Tang, Shoujun Zhou, Yue Liu, Yongtian Wang
Time-6: Data Classification Using Bayesian Conditional Random Fields  24
  Jiahua Wu, Huiyu Zhou, and Jia He
Time-15: Detection of Heart Movement Manner Based on Hexagonal Image Structure  28
  Xiangjian He, Wenjing Jia, Qiang Wu and Tom Hintz
Time-25: The Extraction and Visualization of the ROI of PET Brain Functional Images  32
  Xiao-peng Han, Shu-qian Luo
Time-53: Application of Fuzzy Rough Set in Medical Image Registration  36
  Jianming Wei, Jianguo Zhang, Baohua Wang
Time-50: Registration of 4D Brain EIT Data to Standard Anatomical Brain  40
  Yan Zhang, Peter J. Passmore, Richard H. Bayford
Time-5: Application of Geodesic Active Contours to the Segmentation of Lesions on PET (Positron Emission Tomography) Images  45
  Xiaohong W. Gao, Dawit H. Birhane

MEDICAL IMAGE DATABASES - PACS


Time-12: Norway PACS  52
  Roald Bergstrom
Time-2: Lung CT segmentation for content-based image retrieval using the insight toolkit (ITK)  57
  Joris Heuberger, Antoine Geissbühler, Henning Müller
Time-11: Augmented Medical Image Management for Integrated Healthcare Solutions  63
  Thomas M. Lehmann, Henning Müller, Qi Tian, Nikolas P. Galatsanos, Daniel Mlynek
Time-54: Problem Specific Detailed Structured Reporting: Tailoring Radiology Interpretation and Reporting to Clinical Need  68
  Mansoor Fatehi, Dariush Saedi, Saeed Zand
Time-59: Content Based Retrieval Of PET Images Via Localized Anatomical Texture Measurements and Mean Activity Levels  70
  Stephen Batty, Xiaohong Gao, John Clark, Tim Fryer

MEDICAL IMAGING EQUIPMENT - TECHNIQUES AND APPLICATIONS

Time-42: Development of a Portable ECG Monitor Based on TMS320VC5402  76
  Lin Qi, Ging Bai, Yonghong Zhang, Chenguang Liu
Time-55: Colour Reproduction for Tele-imaging Systems  79
  Ping He, Xiaohong Gao
Time-35: A Processing To B-Mode Ultrasonic Image For Noninvasive Detecting Temperature In Hyperthermia  85
  Shuicai Wu, Xinying Ren, Yanping Bai, Youjun Liu, Yi Zeng
Time-41: The Soft Tissues Deformation Simulation based on Graphic Processing Unit  87
  Yu Chen, Peng Lin, Bo-liang Wang, Xiu-ying Xu
Time-45: Dynamic Spectrum: A Novel Method for Non-invasive Blood Components Measurement  92
  Y Wang, G Li, L Lin, X X Li, Y L Liu
Time-47: Fuzzy Enhancement about 2D-Omnidirectional M-mode Echocardiography  95
  Hua Zhang, Qiang Lin
Time-37: The Research of Omnidirectional M-mode Echocardiography System  98
  Qiang Lin, Wenji Wu
Time-29: Analysis of Colour Images for Tissue Classification and Healing Assessment in Wound Care  102
  H. Zheng, D. Patterson, M. Galushka, L. Bradley


TELE-MEDICINE, TECHNIQUES AND APPLICATIONS

Time-3: Telemedicine Network in French-speaking Africa (RAFT)  108
  Cheickh Oumar Bagayoko, Henning Müller, Antoine Geissbühler
Time-4: Digitally Integrated Teleradiology Network in Hamburg, Germany  114
  W. Auffermann, B. Warncke, J. Stettin
Time-43: Long-Range Fetal Heart Rate Monitor In China  119
  Xicheng Xie
Time-52: A physiology multi-parameter tele-monitoring system based on Internet  125
  Fangfang Du, Song Zhang, Yanping Bai, Shuicai Wu
Time-56: Wireless Communication Technologies in Mobile Telemedicine  128
  Min Chen, Jianmei Lei, Chenglin Peng, Xingming Guo
Time-14: Smart Bundle Management Layer for Optimum Management of Co-existing Telemedicine Traffic Streams under Varying Channel Conditions in Heterogeneous Networks  131
  F. Shaikh, A. Lasebae and G. Mapp

VISUALIZATION: SOFTWARE AND HARDWARE

Time-38: Parametric 3D Visualization of Human Crystalline Lens based on OpenGL  139
  Xiuying Xu, Zhuo Liu, Boliang Wang, Yu Chen, Hao Yang
Time-8: 3D Navigation of CT Virtual Endoscopy in Non-invasive Diagnostic Detection  143
  Xiaomian Xie, Duchun Tao, Siping Chen, Yalei Bi
Time-9: Automatic Correction of MinIP in CT Diagnostic Applications  147
  Duchun Tao, Xiaomian Xie, Yalei Bi, Siping Chen
Time-10: Evaluation of Quantitative Capability of the Iterative and Analytic Reconstructions for a Cone-Beam Micro Computed Tomography System  151
  Ho-Shiang Chueh, Wen-Kai Tasi, Hsiao-Mei Fu, Jyh-Cheng Chen
Time-20: An Approach to True-Color Volume Data Preparation for Virtual Eye  156
  Peng Lin, Yu Chen, Bo-liang Wang
Time-16: Imaging Biological Micro-tissues and Organs Using Phase-contrast X-ray Imaging Technique  160
  Hongxia Yin, Bo Liu, Xin Gao, Hang Shu, Xiulai Gao, Peiping Zhu, Shuqian Luo
Time-34: A Novel Micro-Tomography Algorithm For X-Ray Diffraction Enhanced Imaging  166
  Xin Gao, Shuqian Luo, Bo Liu, Maolin Xu, Hongxia Yin, Hang Shu, Xiulai Gao, Peiping Zhu
Time-57: Intervertebral Disc Biomechanical Analysis Using The Finite Element Modeling Based On Medical Image  172
  Zheng Wang, Haiyun Li

COMPUTER APPLICATIONS ON MEDICAL INFORMATION

Time-21: Analytic Modeling and Simulating of the Human Crystalline Lens with Finite Element Method  177
  Zhuo Liu, Boliang Wang, Xiuying Xu, Ying Ju
Time-26: An approach to the relevance between variations in hormone secretion and the incidence of Hyperplasia of Mammary Glands and Mammary Cancer  182
  C Chen, Y Lin
Time-48: A Method for Extracting Movement Track of Sequential Gray-Points which Represent the Cardiac Structure in Omnidirectional M-mode Echocardiography  194
  Wenji Wu, Qiang Lin
Time-46: Research Movement Information Of Special Goal Research On A Sequence Image Of Heart's B-Ultrasonic  199
  Xiu Zhi Yang, Qiang Lin
Time-28: Assessment Of Atrial Septal Motion With LEJ-1 Omnidirectional M-Mode Echocardiography  203
  Wei Guo, Li-hong Lu, Bin Chen
Time-33: Interpretation of the Interaction of Endocrine Hormone Axial Systems and Mammary Gland Tumor  207
  Chengqi Chen
Time-49: A Method for abstracting dynamic information of Sequential Images: Rebuilding of Gray (Position)~time Function on Arbitrary Direction Lines  214
  Qiang Lin, Wei Li
Time-44: A Study On Metallic Micro-Capsulation For Immunoisolation In Transplanting Cells And Tissues  219
  Gang Li, Minjng Zhan, Lu Stephen, Hualei Cui
Time-23: The Research of Dynamic Change and Relativity of Hormone Secretion Breast Tumor  223
  Chengqi Chen
Time-40: Auto data and Semantic Integration System  227
  Liqin Huang
Time-58: Three-dimensional Skin Surface Texture Editing Based on Self-similarity  231
  Junyu Dong, Lin Qi, Guojiang Chen, Jiahua Wu

Author Index  235


MIT 2005 Programme, 16-19 August 2005

August 16 (Day 1)
2:00pm  Registration, Reception at Wuyi Palace
8:00pm  Dinner

August 17 (Day 2)
9:00   Opening Ceremony, Speeches: Qiang Lin, Simon Thom, Henning Müller, Xiaohong Gao
9:30   Keynote Speech: Prof. Paolo Inchingolo, Italy (Main Conference Room)
10:30  Tea Break
11:00  Parallel Sessions

Session I: Medical Imaging: Techniques and Applications (Part I)
Chair: Prof. Simon Thom
11:00  Time-30: Atlas-based fuzzy connectedness for automatic segmentation of abdominal organs
       Yongxin Zhou, Jing Bai
11:20  Time-17: Skeletonizing and Automatically Detecting the Bifurcation of Blood Vessels in Digital Subtraction Angiography
       Shui Haomiao, Li Qin, Yang Jian, Tang Songyuan, Zhou Shoujun, Liu Yue, Wang Yongtian
11:40  Time-6: Data classification using Bayesian conditional random fields
       Jiahua Wu, Huiyu Zhou, and Jia He
12:00  Time-15: Detection of Heart Movement Manner Based on Hexagonal Image Structure
       Xiangjian He, Wenjing Jia, Qiang Wu and Tom Hintz
12:20  Time-25: The Extraction and Visualization of the ROI of PET Brain Functional Images
       Han Xiao-peng, Luo Shu-qian

Session II (parallel): Medical Image Databases-PACS
Chair: Dr. John Clark
Time-12: Norway PACS
       Roald Bergstrom
Time-2: Lung CT segmentation for content-based image retrieval using the insight toolkit (ITK)
       Joris Heuberger, Antoine Geissbühler, Henning Müller
Time-11: Augmented Medical Image Management for Integrated Healthcare Solutions
       Thomas M. Lehmann, Henning Müller, Qi Tian, Nikolas P. Galatsanos, Daniel Mlynek
Time-54: Problem Specific Detailed Structured Reporting: Tailoring Radiology Interpretation and Reporting to Clinical Need
       Mansoor Fatehi, Dariush Saedi, Saeed Zand
Time-59: Content based retrieval of PET images via localized anatomical texture measurements and mean activity levels
       Stephen Batty, Xiaohong Gao, John Clark, Tim Fryer

13:00  Lunch
14:30  Keynote Speech: Prof. Ronnier Luo (UK)
15:30  Tea Break


16:00  Parallel Sessions

Session III: Medical Imaging: Techniques and Applications (Part II)
Chair: Prof. Simon Thom
16:00  Time-53: Application of Fuzzy Rough Set in Medical Image Registration
       Wei Jianming, Zhang Jianguo, Wang Baohua
16:20  Time-50: Registration of 4D Brain EIT data to Standard Anatomical Brain
       Yan Zhang, Peter J. Passmore, Richard H. Bayford
16:40  Time-5: Application of Geodesic Active Contours to the segmentation of lesions on PET (Positron Emission Tomography) images
       Xiaohong W. Gao, Dawit H. Birhane, J. Clark

Session IV (parallel): Computer Applications on Medical Information (Part I)
Chair: Dr. Thomas M. Lehmann
Time-21: Analytic Modeling and Simulating of the Human Crystalline Lens with Finite Element Method
       Zhuo Liu, Boliang Wang, Xiuying Xu, Ying Ju
Time-26: An approach to the relevance between variations in hormone secretion and the incidence of Hyperplasia of Mammary Glands and Mammary Cancer
       C Chen, Y Lin
Time-48: A Method for Extracting Movement Track of Sequential Gray-Points Which Represent the Cardiac Structure in Omnidirectional M-mode Echocardiography
       Wu Wenji, Lin Qiang

17:00-18:00  Panel Session: The Way Forward
18:30-20:00  Dinner Banquet

August 18 (Day 3)
9:00   Keynote Speech: Dr. Henning Muller, Switzerland
10:00  Tea Break

Session V: Medical Imaging Equipment: Techniques and Applications
Chair: Prof. Paolo Inchingolo
10:30  Time-42: Development of a Portable ECG Monitor Based on TMS320VC5402
       Qi Lin, Bai Ging, Zhang Yonghong, Liu Chenguang
10:50  Time-55: Colour Reproduction for Tele-imaging Systems
       Ping He, Xiaohong Gao
11:10  Time-35: A processing to B-mode ultrasonic image for noninvasive detecting temperature in hyperthermia
       Wu Shuicai, Ren Xinying, Bai Yanping, Liu Youjun, Zeng Yi
11:30  Time-41: The Soft Tissues Deformation Simulation based on Graphic Processing Unit
       Chen Yu, Lin Peng, Wang Bo-liang, Xu Xiu-ying
11:50  Time-45: Dynamic Spectrum: A Novel Method for Non-invasive Blood Components Measurement
       Y. Wang, G. Li, L. Lin, X.X. Li, Y.L. Liu
12:10  Time-47: Fuzzy Enhancement about 2D-Omnidirectional M-mode Echocardiography
       Zhang Hua, Lin Qiang
12:30  Time-37: The Research of Omnidirectional M-mode Echocardiography System
       Lin Qiang, Wu Wenji

Session VI (parallel): Tele-medicine, Techniques and Applications
Chair: Dr. Henning Muller
Time-3: Telemedicine Network in French-speaking Africa (RAFT)
       Cheickh Oumar Bagayoko, Henning Müller, Antoine Geissbühler
Time-4: Digitally Integrated Teleradiology Network in Hamburg, Germany
       W. Auffermann, B. Warncke, J. Stettin
Time-43: Long-Range Fetal Heart Rate Monitor In China
       Xicheng Xie
Time-52: A physiology multi-parameter tele-monitoring system based on Internet
       Du Fangfang, Zhang Song, Bai Yanping, Wu Shuicai
Time-56: Wireless Communication Technologies in Mobile Telemedicine
       Chen Min, Lei Jianmei, Guo Xingming
Time-29: Analysis of Colour Images for Tissue Classification and Healing Assessment in Wound Care
       H. Zheng, D. Patterson, M. Galushka, L. Bradley

13:00-14:30  Lunch

Session VII: Visualization: Software and Hardware
Chair: Prof. Jyh-Cheng Chen (Taiwan)
14:30  Time-38: Parametric 3D Visualization of Human Crystalline Lens based on OpenGL
       Xu Xiuying, Liu Zhuo, Wang Boliang, Chen Yu, Yang Hao
15:10  Time-8: 3D Navigation of CT Virtual Endoscopy in Non-invasive Diagnostic Detection
       Xie Xiaomian, Tao Duchun, Chen Siping, Bi Yalei
15:30  Time-9: Automatic Correction of MinIP in CT Diagnostic Applications
       Tao Duchun, Xie Xiaomian, Bi Yalei, Chen Siping
15:50-16:30  Tea Break
16:30  Time-10: Evaluation of Quantitative Capability of the Iterative and Analytic Reconstructions for a Cone-Beam Micro Computed Tomography System
       Ho-Shiang Chueh, Wen-Kai Tasi, Hsiao-Mei Fu, Jyh-Cheng Chen
16:50  Time-20: An Approach to True-Color Volume Data Preparation for Virtual Eye
       Lin Peng, Chen Yu, Wang Bo-liang
17:10  Time-16: Imaging Biological Micro-tissues and Organs Using Phase-contrast X-ray Imaging Technique
       Hongxia Yin, Bo Liu, Xin Gao, Hang Shu, Xiulai Gao, Peiping Zhu, Shuqian Luo
17:30  Time-34: A Novel Micro-tomography Algorithm for X-ray Diffraction Enhanced Imaging
       Xin Gao, Shuqian Luo, Bo Liu, Maolin Xu, Hongxia Yin, Hang Shu, Xiulai Gao, Peiping Zhu
17:50  Time-57: Intervertebral Disc Biomechanical Analysis Using the Finite Element Modeling Based on Medical Image
       Zheng Wang, Haiyun Li

Session VIII (parallel): Computer Applications on Medical Information (Part II)
Chair: Dr. John Clark
Time-46: Movement information of special goal research on a sequence image of heart's B-ultrasonic
       Yang Xiu Zhi, Lin Qiang
Time-28: Assessment of atrial septal motion with LEJ-1 omnidirectional M-mode echocardiography
       Guo Wei, Lu Li-hong, Chen Bin
Time-33: Interpretation of the Interaction of Endocrine Hormone Axial Systems and Mammary Gland Tumor
       Chen Chengqi
Time-49: A Method for abstracting dynamic information of Sequential Images: Rebuilding of Gray (Position)~time Function on Arbitrary Direction Lines
       Lin Qiang, Li Wei
Time-44: A Study on Metallic Micro-Capsulation for Immunoisolation in Transplanting Cells and Tissues
       Li Gang, Zhan Minjng, Cui Hualei, Stephen Lu
Time-23: The Research of Dynamic Change and Relativity of Hormone Secretion Breast Tumor
       Chen Chengqi
Time-40: Auto Data and Semantic Integration System
       Huang Liqin
Time-58: Three-dimensional Skin Surface Texture Editing Based on Self-similarity
       Junyu Dong, Lin Qi, Guojiang Chen, Jiahua Wu

18:30-20:30  Dinner Banquet

August 19 (Day 4)
Tour of Wuyi Mountain

August 20 (Day 5)
Departure


Proceedings of International Conference on Medical Imaging and Telemedicine (MIT 2005), August 16-19, Wuyi Mountain, China

Invited Talks


The medGIFT project on medical image retrieval

Henning Müller, Christian Lovis, Antoine Geissbühler
University and Hospitals of Geneva, Service of Medical Informatics, 24 Rue Micheli-du-Crest, Geneva, Switzerland
Email: henning.mueller@sim.hcuge.ch

Abstract: Medical images are an essential part of diagnostics and treatment planning. The variety and the amount of images produced are rising constantly. Digital radiology has also brought new possibilities for the use of medical images in several contexts. In fields such as evidence-based medicine or case-based reasoning, medical image data can play a prominent role if tools are available to ease access to images and the accompanying textual data. Retrieval algorithms need to meet the information need of the users at a certain time: the right information needs to be accessible to the right persons at the right time.
The medGIFT project described in this paper includes several axes around the retrieval of medical images from a variety of databases and image kinds, as well as for several applications. The framework is based around the open-source image retrieval tool GIFT (GNU Image Finding Tool) and adds tools to this environment to create a system adapted to the domain-specific needs of medical image retrieval. These tools include the pre-processing of images for better retrieval, through the extraction of the main object or even through segmentation in specialised fields such as lung image retrieval. The combination and integration of GIFT with text retrieval tools such as Lucene and easyIR are further applications. Another strong point of GIFT is the creation of an infrastructure for image retrieval evaluation: the ImageCLEFmed benchmark is a result of the project, and its outcome does not only help locally but is accessible to many research groups on all continents. These axes and the goals behind current developments are described in this paper.

I. INTRODUCTION

Production and availability of digital images are rising in all domains, and as a consequence the retrieval of images by visual means has been one of the most active research areas in the fields of image processing and information retrieval over the past ten to fifteen years [1-3]. The goal is most often to retrieve images based on the visual content only, to allow navigation even in poorly annotated or non-annotated databases. Most systems use simple low-level features such as image layout, shape, color, and texture [4]. Newer systems add segmentation in often limited domains and try to match visual features and keywords to attach semantic meaning to images [5]. Still, it becomes clear that visual features can only satisfy part of the information need of users. Text is still the method of choice for most queries, especially as a starting point, whereas visual browsing can be important to refine the first results found or to express specific needs ("Show me chest x-rays looking similar to tuberculosis but with a different diagnosis").
In the medical domain, the need to index and retrieve images was identified early [6-9], and a variety of applications has been developed for general image classification [10] as well as for aiding diagnostics [11].
Unfortunately, most of the projects are rather distant from clinical routine [12], and unrealistic assumptions are made, such as the indexation of an entire PACS [13] (note: the Geneva radiology department currently produces over 30,000 images a day and has millions stored in the PACS). Overviews of applications in the medical image retrieval domain can be found in [14, 15].
Many of the problems of image retrieval in the medical domain are linked to the distance between medical divisions and the computer science departments in which most systems are developed. Thus, little is often known about the use of images in clinical settings, and few of the applications work on realistically sized databases or are integrated and usable in a clinical context. Another problem is the lack of evaluation of research prototypes. Often, extremely small datasets are used, and settings are made to fit the system rather than the other way around. Evaluation of several systems on the same datasets had not been performed before the ImageCLEF initiative. The medGIFT project tries to tackle these problems and to develop an open-source framework of reusable components for a variety of medical applications, to foster resource sharing and avoid costly redevelopment. A survey has been conducted to find out real user needs, and an evaluation resource has been created in the framework of the ImageCLEF retrieval campaign, so that research groups can compare their algorithms on the same datasets and on realistic topics. The different axes of these developments are described in the following sections.

II. AN IMAGE RETRIEVAL FRAMEWORK

MedGIFT is strongly based on the GNU Image Finding Tool (GIFT) as its central piece.
Main developments are on the integration of various new components around GIFT to create a domain-specific search and navigation tool.

A. GIFT/MRML

GIFT (http://www.gnu.org/software/gift/) is the outcome of the Viper project (http://viper.unige.ch/) of the University of Geneva [16]. It is a retrieval engine and encompassing framework for the retrieval of images by their visual content only. Several simple scripts allow indexing entire directory trees, executing queries through a command-line tool, and generating inverted files. The visual features used are meant for color photography and include a simple color histogram as well as color blocks in various areas of the images and at several scales. The most interesting part of GIFT is its use of techniques well known from text retrieval. The features are quantised into bins so that their distribution corresponds almost to the distribution


of words in texts. Then, frequency-based weights similar to typical tf/idf weightings are used [17]. To allow for efficient feature access, an inverted file structure is used, and pruning methods are implemented [18]. This allows interactive querying with response times under one second on normal Pentium IV desktops, even if the database is larger than 50,000 images. The quantisation means that the feature space is extremely large, with over 80,000 possible features; usually, an image contains between 1,000 and 2,000 features. As GIFT uses ImageMagick to convert images, medical DICOM images can also be indexed without any changes to the code.
To separate the actual query engine from the user interface, the Multimedia Retrieval Markup Language (MRML, http://www.mrml.net/) was developed. This query language is based on direct communication between the search engine and the interface via sockets, and it eases a variety of applications such as meta-search engines, as well as the integration of a retrieval tool into a variety of environments and applications. The entire communication is based on the XML standard, which allows for quick development of tools. MRML also serves as a language to store log files of user interaction; this information can be used to improve query performance through long-term learning from user behaviour [19]. The main goal of the framework is to avoid redeveloping an entire system: the base components can be reused, and work is needed only on the parts that require changes.

B. User interfaces

As medGIFT is a domain-specific search tool, the user interface has different requirements from those of other domains. One
Oneimportant part is the display of not only thumbnail imagesfor the browsing but also the text of the diagnosis. Whereasa holiday picture might bear enough information without text,for medical images this text is absolutely necessary. For furtheranalysis much more than just a few keywords are neededbecause the images themselves out of the context do not seemto be extremely useful. Thus, our interface is integrated witha medical case database developed at the University Hospitalsof Geneva called Casimage 4 [20]. Most teaching files suchas Casimage or myPACS 5 have similar simple interfaces. Thismeans that a number of images are stored together with anordered description of a case. On the other h<strong>and</strong>, not muchcontrol is being performed on the quality of the text enteredwhich results in records of extremely varying quality withseveral being empty <strong>and</strong> other containing spelling errors <strong>and</strong>non–st<strong>and</strong>ard abbreviations.Figure 1 shows a typical web interface after a querywas executed. The query results are displayed ordered bytheir visual similarity to the query, with a similarity scoreshown underneath the images. The diagnosis is also shownunderneath the images. A click on the image links with thecase database system <strong>and</strong> allows to access the full-size images.Images are addressed via URL <strong>and</strong> it is thus possible tosubmit any accessible URL directly as query. Images will be3 http://www.mrml.net/4 http://www.casimage.com/5 http://www.mypacs.net/Fig. 1. A screen shot of a typical web interface for medical image retrievalsystem allowing query by example(s) with the diagnosis underneath the image.downloaded, features extracted for the query, <strong>and</strong> a thumbnailwill be stored locally for display in the interface. The samething occurs for images from a local disk that can be submitteddirectly. 
This system allows easy access to a closed image database for basically all applications in the hospital.

C. Features, weightings, mix of visual and textual retrieval

For ease of processing, all images are first converted to 256x256 pixels. Then, GIFT relies on four main groups of features for retrieval:
• global color features in the form of a color histogram in HSV space (Hue=18, Saturation=3, Value=3, Gray=4);
• local color features in the form of the mode color of blocks of various sizes in various regions, obtained by successively dividing the image into four equally-sized regions;
• global texture features in the form of a Gabor filter histogram using four directions and three scales, with the filter responses quantised into 10 bins;
• local Gabor filter responses.
Gabor filter responses have often shown good performance for texture characterisation [21]. Equally, the HSV color space has proven to be closer to human perception than spaces such as RGB, and it is still easy to calculate [22]. For the medical domain, grey levels and textures are more important than the color features that perform best on stock photography. Thus, the medGIFT system uses several configurations of Gabor filters and a higher number of grey levels. Surprisingly small numbers of grey levels (8-16) lead to the best retrieval results.
Two different weightings are used for the four feature groups. The two global histogram features are weighted according to a simple histogram intersection [23]. The two block feature groups, which represent around 80% of the features, are weighted according to a simple tf/idf weighting:

  \text{feature\_weight}_j = \frac{1}{N} \sum_{i=1}^{N} \left( tf_{ij} \cdot R_i \right) \cdot \log_2\!\left(\frac{1}{cf_j}\right) \quad (1)
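As an illustration, Eq. (1), together with the score summation of Eq. (2) below, can be transcribed almost directly into code. This is a sketch of the formulas with invented toy numbers, not the GIFT implementation; the symbols tf, cf, and R are those defined in the text.

```python
import math

def feature_weight(tf_per_image, relevances, cf):
    """Eq. (1): average the relevance-weighted term frequencies of
    feature j over the N query images, scaled by log2(1/cf_j)."""
    n = len(tf_per_image)
    avg = sum(tf * r for tf, r in zip(tf_per_image, relevances)) / n
    return avg * math.log2(1.0 / cf)

def score(candidate_features, weights):
    """Eq. (2): sum the weights of the features a candidate image contains."""
    return sum(weights.get(j, 0.0) for j in candidate_features)

# Toy numbers: one query image (N=1) marked fully relevant (R=1.0); the
# feature appears with tf=0.5 and occurs in a quarter of the collection.
w = feature_weight([0.5], [1.0], 0.25)  # 0.5 * log2(4) = 1.0
```

Note how negative relevance values (the range is [-1, 1]) let images marked as counter-examples in relevance feedback pull a feature's weight down.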


where tf_{ij} is the term frequency of feature j in image i, cf_j is the collection frequency of feature j, q corresponds to a query with i = 1..N input images, and R_i is the relevance of input image i within the range [-1, 1].
A score is then assigned to a possible result image k for a query q containing features 1..j:

  \text{score}_{kq} = \sum_{j} \text{feature\_weight}_j \quad (2)

Scores are calculated for all four feature groups separately and then added in a normalised way, which leads to better results than a simple addition [18].
In connection with easyIR (http://lithwww.epfl.ch/~ruch/softs/softs.html), the combination of visual and textual features was attempted in the ImageCLEF 2004 competition [24]. The results were the best in the competition with relevance feedback and the second best for automatic retrieval. The visual and textual results are simply normalised separately and then added.

D. Image pre-treatment

Low-level image features have their problems in the effective retrieval of images, but other problems seem to be even more important for medical images. Normally, a medical image contains one distinct entity, as the images are taken with a very specific goal in mind and under consistently similar conditions. Problems are the varying machines and settings used, and the background that sometimes contains information on the image taken but can be regarded as noise with respect to visual retrieval.
1) Lung segmentation: High-resolution lung CT retrieval is one of the few domains that has been applied in a real clinical setting with success [25].
Still, all current solutions require the medical doctor to annotate the image before a classification of the tissue is made, and they concentrate on a very restricted number of pathologic tissue textures only. The first and most important question is actually whether the tissue is normal (healthy) or not. For this, it is important to concentrate retrieval on the lung tissue itself, which is a problem with existing solutions [26, 27]. We implemented and optimised the algorithm to work on JPEG as well as DICOM images [28]. The results are satisfying (see Figure 2; 80% in classes 1 and 2) and we could well index the resulting lung parts for further retrieval.

2) Object extraction: As many sorts of medical images are taken with the precise goal of representing a single object, the goal is to extract the object and remove all background unnecessary for retrieval [29]. Some typical images from our database are shown in Figure 3. The removal is mainly done through a removal of specific structures, followed by a low-pass (median) filter, thresholding, and a removal of small unconnected objects. After the object extraction phase much of the background is removed, and only very few images had too much removed. Figure 4 shows the results for three images. Some background structures were too big to be removed, but the goal was clearly to have as few images as possible with too much removed, and this was reached.

⁶ http://lithwww.epfl.ch/~ruch/softs/softs.html

Fig. 2. Four classes of lung segmentation: (a) good segmentation, (b) small parts missing, (c) large parts missing or fractured, (d) segmentation failed (right lung missing in this case).

Fig. 3. Images before the removal of logos and text.

The retrieval stage shows that subjectively the results get much better and much more focused, especially with the use of relevance feedback.
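The background-removal steps described above (median low-pass filter, thresholding, removal of small unconnected objects) can be sketched with SciPy; the threshold and minimum component size here are illustrative assumptions, and the initial removal of specific structures is omitted:

```python
import numpy as np
from scipy import ndimage

def extract_object(img, thresh=20, min_size=200):
    """Sketch of the object-extraction pipeline for a 2-D uint8 grey image.

    Returns a boolean mask of the retained object pixels.
    """
    smoothed = ndimage.median_filter(img, size=5)   # low-pass (median) filter
    mask = smoothed > thresh                        # drop dark background pixels
    labels, _ = ndimage.label(mask)                 # connected components
    sizes = np.bincount(labels.ravel())             # pixel count per component
    keep = sizes >= min_size                        # discard small components
    keep[0] = False                                 # label 0 is the background
    return keep[labels]
```

The median filter already wipes out thin burnt-in text, while the size test removes the remaining small logos and noise specks.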
Still, on the ImageCLEF 2004 dataset, the results were actually slightly worse. Part of this can be related to the fact that the system was not part of the pooling before the ground truthing, and the technique brings up unjudged but relevant images, which can influence results [30]. Another reason is the missing outline between the object and the background, which can well be detected by the Gabor filters. Adding a few lines of background might improve results.

III. IMAGE CLASSIFICATION / FEATURE CLASSIFICATION

Image classification is strongly related to image retrieval but takes into account learning data to classify images into several well-defined classes based on usually visual features [31]. In the ImageCLEF 2005 competition, a visual classification task based on the IRMA⁷ dataset was started. The dataset contains 9000 training images representing 57 classes. Then, 1000

⁷ http://www.irma-project.org/


Fig. 4. Images after the removal of logos and text.

images had to be classified correctly into these 57 classes. Due to considerable time constraints, no learning could be performed on the data for our submission. A simple nearest neighbour (NN) algorithm was used based on simple retrieval results with GIFT, adding the scores of the first N = 1, 5, 10 images and taking the class with the highest score as the result. Despite the fact that no training data was used, the classification rate of the best configuration was 79.4%, using 8 grey levels and 8 directions of the Gabor filters. Taking into account learning information on these classes in the way explained in [19] can strongly improve these results. Without learning, the GIFT system had the 6th best performance, with only 3 of 12 groups obtaining better results.

Another classification project has been started on the classification of lung CT textures into classes of visual observations [32]. In this project, the lung tissue is first segmented from the rest of the CT scan. Then, the tissue is separated into smaller blocks and each of the blocks is classified into one class of visual observation such as healthy tissue, micro nodules, macro nodules, emphysema, etc. The system works completely automatically, and the goal is to highlight abnormal regions in a lung CT automatically. The current system is based on a small set of learning data using 12 CT series and 112 regions annotated by a radiologist. The classification between healthy and pathologic tissue has an accuracy of over 80% with a nearest-neighbour strategy and over 90% using Support Vector Machines (SVM). Part of the errors can be explained by tissue not being annotated exactly.
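The score-summing nearest-neighbour rule used for the ImageCLEF classification runs can be sketched as follows (names and the example labels are illustrative):

```python
from collections import defaultdict

def classify_by_retrieval(neighbours, n=5):
    """Sum the retrieval scores of the first n returned images per class
    and return the class with the highest total, as described above.

    neighbours -- list of (score, class_label) pairs, best match first.
    """
    totals = defaultdict(float)
    for score, label in neighbours[:n]:
        totals[label] += score
    return max(totals, key=totals.get)
```

Because scores rather than plain counts are summed, a single very close match can outvote several weaker ones.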
This means that blocks annotated as emphysema are actually right next to an emphysema but are in fact healthy tissue (see Figure 5). For the classification into several classes of visual observations, another problem becomes apparent: the extremely unbalanced training data set. Only healthy tissue, emphysema and micro nodules have a sufficiently large percentage in the training data, and these classes perform much better than classes with only one or two example blocks. An overall classification quality of around 83% has been achieved. Some of the open source tools used for this include Weka⁸, the insight toolkit itk⁹ and svmlib.

IV. IMAGE RETRIEVAL EVALUATION

Much has been written on retrieval evaluation [33, 34], but most of the efforts, such as the Benchathlon¹⁰, did not result in systems being compared. There are only few articles on the use of images in a particular domain and how users would like to access and search for them.

A. Survey on image use in the medical field

Images are used in many domains in increasing quantities. Digital images have started to offer new search and usage paradigms, as they are accessible directly to the user and search can be performed without the need for perfect keywords through visual search. A few research groups have conducted surveys on the use of images by journalists [35] and in other domains such as libraries or cultural heritage institutions [1]. In the medical domain, to our knowledge, no study on image use and searching habits has been performed as of yet, only studies on general information retrieval [36]. Thus we initiated a survey of medical image users at two institutions, the Oregon Health and Science University (OHSU) and the Geneva University Hospitals, including over 30 persons.
Clinicians as well as researchers, lecturers, students and librarians were asked about their habits using images, the sorts of images concerned, and the tasks they search for. Another question concerned the search methods by which they wanted to access images to support their particular tasks. The results are planned to be published when the surveys are finished. First results suggest that there is a strong need to search for images from certified resources. Several people stated that they use Google to search for images for teaching or to illustrate articles, but sometimes have problems figuring out the copyright or the validity of the images. To support clinical use, the retrieval of similar cases was suggested to be extremely important, as well as the search for pathologies in the electronic patient record. The need to classify images by anatomic region and modality was also given as an example of a need. First results of this study have been used to create topics for the 2005 ImageCLEFmed competition.

B. ImageCLEFmed

ImageCLEF is part of the Cross Language Evaluation Forum (CLEF¹¹) that evaluates the retrieval of documents in

Fig. 5. An example of an annotated region of the emphysema class, where healthy tissue is marked as well.

⁸ http://www.cs.waikato.ac.nz/~ml/weka/
⁹ http://www.itk.org/
¹⁰ http://www.benchathlon.net/
¹¹ http://www.clef-campaign.org/


multilingual contexts. This means that the collections can be multilingual, or the query and the document collection are in different languages. In 2003, an image retrieval task called ImageCLEF¹² was added, using mainly grey scale images and English annotation, with query topics in several languages. In 2004, a visual task from the medical domain was added [37, 38] and participation increased from 4 groups in 2003 to 18 groups in 2004. The database of the medical task is the freely available database of the Casimage project, and the task was organised by the medGIFT group. The query consists of an image only, but text was available through automatic query expansion. The outcome is that visual features can enhance the quality of retrieval if used in combination with text.

In 2005, two medical¹³ tasks were organised within ImageCLEF: an image classification task (see Section III) and an image retrieval task based on a larger database containing over 50,000 images. Part of the database is the Casimage dataset, which contains almost 9,000 images of 2,000 cases [20, 37]. Images present in the data set include mostly radiology, but also photographs, powerpoint slides and illustrations. Cases are mainly in French, with around 20% being in English. We were also allowed to use PEIR¹⁴ (Pathology Education Instructional Resource) with annotation from the HEAL¹⁵ project (Health Education Assets Library, mainly pathology images [39]). This dataset contains over 33,000 images with English annotation in XML per image, and not per case as in Casimage.
The nuclear medicine database of MIR, the Mallinckrodt Institute of Radiology¹⁶ [40], was also made available to us for ImageCLEF. This dataset contains over 2,000 images, mainly from nuclear medicine, with annotations per case in English. Finally, the PathoPic¹⁷ collection (pathology images [41]) was included. It contains 9,000 images with an extensive annotation per image in German. Part of the German annotation is translated into English. The topics are based on the survey conducted and are closer to clinical reality than the topics of the 2004 task.

In 2005, over 30 groups registered for ImageCLEF and over 20 groups submitted results to one of the four tasks. The evaluation of the submissions is currently being performed.

V. CONCLUSIONS AND FUTURE IDEAS

In conclusion, it can be said that medGIFT is not a project on a single subject, but rather a project encompassing a variety of subjects around medical image retrieval, trying to develop a better understanding of medical imaging tasks and image use in the medical domain. Many of the sub-projects have just started and further results are expected. The goal is to use existing open source software and solutions wherever possible to keep development costs low. An integral part of the project is the creation of an evaluation framework for medical retrieval that is anchored in the ImageCLEFmed tasks.
¹² http://ir.shef.ac.uk/imageclef/
¹³ http://ir.ohsu.edu/image/
¹⁴ http://peir.path.uab.edu/
¹⁵ http://www.healcentral.com/
¹⁶ http://gamma.wustl.edu/home.html
¹⁷ http://alf3.urz.unibas.ch/pathopic/intro.htm

This project gives research groups without contact to a medical institution the possibility to work on real medical data and realistic tasks, with the goal of creating applications usable in a clinical environment. To develop these tasks, surveys and contact with medical practitioners are extremely important.

Although many of the currently developed prototypes are not usable in a clinical setting, much of the knowledge can be reused for these applications, and many ideas are actually evolving while developing these prototypes. One of the ideas for an easy integration of the image retrieval interface into existing clinical applications is the use of an algorithm that harvests images from a calling web page. This means that a simple box is added to a web page to call an interface that automatically copies the images from the calling page to a local directory, connects to a GIFT server, and then allows the user to choose among the harvested images those relevant for a query task.

Another automatic application is the use of a DICOM header control program. As DICOM headers often show a large number of errors [42], all images from the PACS can be checked against a reference dataset, and images where problems are suspected can be sorted out for a manual correction of the proposed new header information.

For the classification of lung tissue, several extensions are foreseen. Lung tissue in the same area can contain several diseases or visual observations, so this needs to be included in the classification, creating the need for classifiers for each diagnosis against all other diagnoses.
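The DICOM header control idea mentioned above could be sketched as a comparison of selected header fields against the sets of values observed in a trusted reference dataset; this is a hypothetical sketch, and a real implementation would read the headers with a DICOM toolkit rather than plain dicts:

```python
def check_headers(header, reference):
    """Flag header fields whose value does not occur in the reference set.

    header    -- dict of tag name -> value for one image
    reference -- dict of tag name -> set of plausible values from the
                 reference dataset; tags absent here are not checked
    """
    return [tag for tag, value in header.items()
            if tag in reference and value not in reference[tag]]
```

Images with a non-empty result list would then be queued for manual correction of the proposed new header information.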
This leads to SVMs, which also perform well on unbalanced datasets, as is the case for the lung CT blocks.

Much still needs to be done in the field of visual medical information management, and much needs to be learned about the needs of medical practitioners. Only if applications are useful and applicable in a real setting will they be used.

ACKNOWLEDGMENT

Part of this research was supported by the Swiss National Science Foundation with grant 632-066041.

REFERENCES

[1] P. G. B. Enser, “Pictorial information retrieval,” Journal of Documentation, vol. 51, no. 2, pp. 126–170, 1995.
[2] A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, “Content-based image retrieval at the end of the early years,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1349–1380, 2000.
[3] Y. Rui, T. S. Huang, M. Ortega, and S. Mehrotra, “Relevance feedback: A power tool for interactive content-based image retrieval,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, no. 5, pp. 644–655, September 1998 (Special Issue on Segmentation, Description, and Retrieval of Video Content). [Online]. Available: http://www-db.ics.uci.edu/pages/publications/1998/TR-MARS-98-10.ps
[4] M. Flickner, H. Sawhney, W. Niblack, J. Ashley, Q. Huang, B. Dom, M. Gorkani, J. Hafner, D. Lee, D. Petkovic, D. Steele, and P. Yanker, “Query by Image and Video Content: The QBIC system,” IEEE Computer, vol. 28, no. 9, pp. 23–32, September 1995.
[5] J. Jeon, V. Lavrenko, and R. Manmatha, “Automatic image annotation and retrieval using cross-media relevance models,” in International Conference of the Special Interest Group on Information Retrieval (SIGIR 2003), Toronto, Canada, August 2003.


[6] H. D. Tagare, C. Jaffe, and J. Duncan, “Medical image databases: A content-based retrieval approach,” Journal of the American Medical Informatics Association, vol. 4, no. 3, pp. 184–198, 1997.
[7] H. J. Lowe, I. Antipov, W. Hersh, and C. Arnott Smith, “Towards knowledge-based retrieval of medical images. The role of semantic indexing, image content representation and knowledge-based retrieval,” in Proceedings of the Annual Symposium of the American Society for Medical Informatics (AMIA), Nashville, TN, USA, October 1998, pp. 882–886.
[8] C. Traina Jr., A. J. M. Traina, R. R. dos Santos, and E. Y. Senzako, “A support system for content-based medical image retrieval in object oriented databases,” Journal of Medical Systems, vol. 21, no. 6, pp. 339–352, 1997.
[9] G. Bucci, S. Cagnoni, and R. De Domicinis, “Integrating content-based retrieval in a medical image reference database,” Computerized Medical Imaging and Graphics, vol. 20, no. 4, pp. 231–241, 1996.
[10] T. Lehmann, M. O. Güld, C. Thies, K. Spitzer, D. Keysers, H. Ney, M. Kohnen, H. Schubert, and B. B. Wein, “Content-based image retrieval in medical applications,” Methods of Information in Medicine, vol. 43, pp. 354–361, 2004.
[11] C.-R. Shyu, C. E. Brodley, A. C. Kak, A. Kosaka, A. M. Aisen, and L. S. Broderick, “ASSERT: A physician-in-the-loop content-based retrieval system for HRCT image databases,” Computer Vision and Image Understanding (special issue on content-based access for image and video libraries), vol. 75, no. 1/2, pp.
111–132, July/August 1999.
[12] M. R. Ogiela and R. Tadeusiewicz, “Semantic-oriented syntactic algorithms for content recognition and understanding of images in medical databases,” in Proceedings of the Second International Conference on Multimedia and Exposition (ICME 2001), Tokyo, Japan: IEEE Computer Society, August 2001, pp. 621–624.
[13] H. Qi and W. E. Snyder, “Content-based image retrieval in PACS,” Journal of Digital Imaging, vol. 12, no. 2, pp. 81–83, 1999.
[14] L. H. Y. Tang, R. Hanka, and H. H. S. Ip, “A review of intelligent content-based indexing and browsing of medical images,” Health Informatics Journal, vol. 5, pp. 40–49, 1999.
[15] H. Müller, N. Michoux, D. Bandon, and A. Geissbuhler, “A review of content-based image retrieval systems in medicine – clinical benefits and future directions,” International Journal of Medical Informatics, vol. 73, pp. 1–23, 2004.
[16] D. M. Squire, W. Müller, H. Müller, and T. Pun, “Content-based query of image databases: inspirations from text retrieval,” Pattern Recognition Letters (Selected Papers from the 11th Scandinavian Conference on Image Analysis, SCIA ’99), B. K. Ersboll and P. Johansen, Eds., vol. 21, no. 13–14, pp. 1193–1198, 2000.
[17] G. Salton and C. Buckley, “Term weighting approaches in automatic text retrieval,” Department of Computer Science, Cornell University, Ithaca, New York 14853-7501, Tech. Rep. 87-881, November 1987.
[18] H. Müller, D. M. Squire, W. Müller, and T. Pun, “Efficient access methods for content-based image retrieval with inverted files,” in Multimedia Storage and Archiving Systems IV (VV02), ser. SPIE Proceedings, S. Panchanathan, S.-F. Chang, and C.-C. J. Kuo, Eds., vol.
3846, Boston, Massachusetts, USA, September 20–22, 1999, pp. 461–472.
[19] H. Müller, D. M. Squire, and T. Pun, “Learning from user behavior in image retrieval: Application of the market basket analysis,” International Journal of Computer Vision, vol. 56, no. 1–2, pp. 65–77, 2004 (Special Issue on Content-Based Image Retrieval).
[20] A. Rosset, H. Müller, M. Martins, N. Dfouni, J.-P. Vallée, and O. Ratib, “Casimage project – a digital teaching files authoring environment,” Journal of Thoracic Imaging, vol. 19, no. 2, pp. 1–6, 2004.
[21] A. Jain and G. Healey, “A multiscale representation including opponent color features for texture recognition,” IEEE Transactions on Image Processing, vol. 7, no. 1, pp. 124–128, January 1998.
[22] J.-M. Geusebroek, R. van den Boogaard, A. W. M. Smeulders, and H. Geerts, “Color invariance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 12, pp. 1338–1350, 2001.
[23] M. J. Swain and D. H. Ballard, “Color indexing,” International Journal of Computer Vision, vol. 7, no. 1, pp. 11–32, 1991.
[24] H. Müller, A. Geissbuhler, and P. Ruch, “ImageCLEF 2004: Combining image and multi-lingual search for medical image retrieval,” in Cross Language Evaluation Forum (CLEF 2004), ser. Springer Lecture Notes in Computer Science (LNCS), Bath, England, 2005.
[25] A. M. Aisen, L. S. Broderick, H. Winer-Muram, C. E. Brodley, A. C. Kak, C. Pavlopoulou, J. Dy, C.-R. Shyu, and A. Marchiori, “Automated storage and retrieval of thin-section CT images to assist diagnosis: System description and preliminary assessment,” Radiology, vol. 228, pp. 265–270, 2003.
[26] S. Hu, E. A. Hoffman, and J. M.
Reinhardt, “Automatic lung segmentation for accurate quantitation of volumetric X-ray CT images,” IEEE Transactions on Medical Imaging, vol. 20, no. 6, pp. 490–498, 2001.
[27] G. J. Kemerink, R. J. S. Lamers, B. J. Pellis, K. H. H., and J. M. A. van Engelshoven, “On segmentation of lung parenchyma in quantitative computed tomography of the lung,” Medical Physics, vol. 25, no. 12, pp. 2432–2439, 1998.
[28] J. Heuberger, A. Geissbuhler, and H. Müller, “Lung CT segmentation for image retrieval,” in Medical Imaging and Telemedicine (MIT 2005), Wuyi Mountain, China, 2005.
[29] H. Müller, J. Heuberger, and A. Geissbuhler, “Logo and text removal for medical image retrieval,” in Springer Informatik aktuell: Proceedings of the Workshop Bildverarbeitung für die Medizin, Heidelberg, Germany, March 2005.
[30] J. Zobel, “How reliable are the results of large-scale information retrieval experiments?” in Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, W. B. Croft, A. Moffat, C. J. van Rijsbergen, R. Wilkinson, and J. Zobel, Eds., Melbourne, Australia: ACM Press, New York, August 1998, pp. 307–314.
[31] T. M. Lehmann, M. O. Güld, T. Deselaers, H. Schubert, K. Spitzer, H. Ney, and B. B. Wein, “Automatic categorization of medical images for content-based retrieval and data mining,” Computerized Medical Imaging and Graphics, vol. 29, pp. 143–155, 2005.
[32] H. Müller, S. Marquis, G. Cohen, and A. Geissbuhler, “Lung CT analysis and retrieval as a diagnostic aid,” in Medical Informatics Europe (MIE 2005), Geneva, Switzerland, 2005 (submitted).
[33] H. Müller, W.
Müller, D. M. Squire, S. Marchand-Maillet, and T. Pun, “Performance evaluation in content-based image retrieval: Overview and proposals,” Pattern Recognition Letters, vol. 22, no. 5, pp. 593–601, April 2001.
[34] N. J. Gunther and G. Beretta, “A benchmark for image retrieval using distributed systems over the Internet: BIRDS-I,” HP Labs, Palo Alto, Tech. Rep. HPL-2000-162, 2001.
[35] M. Markkula and E. Sormunen, “Searching for photos – journalists’ practices in pictorial IR,” in The Challenge of Image Retrieval, A Workshop and Symposium on Image Retrieval, ser. Electronic Workshops in Computing, J. P. Eakins, D. J. Harper, and J. Jose, Eds., Newcastle upon Tyne: The British Computer Society, 5–6 February 1998.
[36] W. R. Hersh and D. H. Hickam, “How well do physicians use electronic information retrieval systems?” Journal of the American Medical Association, vol. 280, no. 15, pp. 1347–1352, 1998.
[37] H. Müller, A. Rosset, A. Geissbuhler, and F. Terrier, “A reference data set for the evaluation of medical image retrieval systems,” Computerized Medical Imaging and Graphics, 2004 (to appear).
[38] P. Clough, M. Sanderson, and H. Müller, “A proposal for the CLEF cross language image retrieval track (ImageCLEF) 2004,” in The Challenge of Image and Video Retrieval (CIVR 2004), Dublin, Ireland: Springer LNCS, July 2004.
[39] C. S. Candler, S. H. Uijtdehaage, and S. E. Dennis, “Introducing HEAL: The health education assets library,” Academic Medicine, vol. 78, no. 3, pp. 249–253, 2003.
[40] J. W. Wallis, M. M. Miller, T. R. Miller, and T. H.
Vreeland, “An internet-based nuclear medicine teaching file,” Journal of Nuclear Medicine, vol. 36, no. 8, pp. 1520–1527, 1995.
[41] K. Glatz-Krieger, D. Glatz, M. Gysel, M. Dittler, and M. J. Mihatsch, “Webbasierte Lernwerkzeuge für die Pathologie – web-based learning tools for pathology,” Pathologe, vol. 24, pp. 394–399, 2003.
[42] M. O. Güld, M. Kohnen, D. Keysers, H. Schubert, B. B. Wein, J. Bredno, and T. M. Lehmann, “Quality of DICOM header information for image categorization,” in International Symposium on Medical Imaging, ser. SPIE Proceedings, vol. 4685, San Diego, CA, USA, February 2002, pp. 280–287.


Achieving Accurate Colour Reproduction For Medical Imaging Applications

M. R. Luo
Colour & Imaging Group
Department of Colour and Polymer Chemistry
University of Leeds, LS2 9JT, UK

Abstract – This paper begins by reviewing the present status of colour imaging in various medical applications. In order to make reliable diagnoses based on colour images, it is essential to achieve satisfactory cross-media colour reproduction between physical specimens and digital images displayed on different monitors and observed under disparate viewing conditions. To address this problem, a colour management framework for medical applications is proposed.

I. INTRODUCTION

Traditionally, medical images are greyscale. Various professional standards for assessing the quality of displays for viewing medical images have been developed, such as AAPM [1], SMPTE [2], and NEMA-DICOM (PS3) [3]. With advances in digital colour imaging devices and computational power, colour images are becoming increasingly used for medical applications. In addition, colour photographs traditionally used for pathology will soon be replaced by digital images, because there is a shortage of both the space and the personnel required for preserving photographic records. Nishibori [4] comprehensively reviewed the present status of colour imaging in various medical fields. Some important applications are given here. With the introduction of imaging to electronic gastrointestinal endoscopy, physicians are able to detect cancer, different stages of chronic gastritis, and gastric ulcers.
Much anatomical pathology is based on photography. Digital images have major advantages, such as reducing storage requirements for pathology, and also benefit rapidly developing telemedicine applications. Many laboratory information systems are enhanced by distributing digital images of urinalysis, haematology, microbiology, immunology, cytology, chromosome analysis and physiology. The use of digital images can significantly reduce a patient’s waiting time. In dermatology, macroscopic pathology of live lesions can be directly observed, which is essential for diagnosis. Accurate skin colour in an image is essential to interpret the characteristics of a lesion and the depth in the skin at which the lesion exists. A precise diagnosis can be conveyed via telemedicine when a patient is examined by distant physicians or medical consultants. In neurosurgery, the leading application of advanced medical technologies is minimally invasive surgery, which reduces the patient’s burden and the costs compared to conventional surgery. In this case, the imaging system consists of a video camera and a display. Images give fundamental visual information and so must have high colour fidelity, because erroneous colour appearance of blood and tissue may directly affect surgical decisions.

The success of the above applications relies on accurate colour reproduction of specimens by digital images displayed on different monitors and observed under different viewing conditions. To address these requirements, a colour management framework for medical applications is proposed below.

II. A COLOUR MANAGEMENT FRAMEWORK FOR MEDICAL IMAGING

The method needed to achieve satisfactory colour reproduction between physical specimens and displayed images involves the application of colour science, as recommended by the International Commission on Illumination (abbreviated as CIE from its French title


Commission Internationale de l'Eclairage) [5]. It provides a standard method for specifying colour using tristimulus values (XYZ), which define the amounts of reference red, green and blue lights respectively needed to match the colour in question. Various instruments, such as tele-spectroradiometers and tristimulus colorimeters [6], are manufactured for measuring colours.

Based on previous experience in the development of colour management systems for the surface colour industries [7], a colour management framework for medical applications is proposed, as shown in Fig. 1. It is aimed at making the images displayed on different monitors have the same colour appearance and closely match the original physical specimens. This will aid physicians in making reliable diagnoses. Fig. 1 shows the colour management framework, or workflow, from source to destination images. There are two types of source images: those generated by conventional three-primary imaging devices such as cameras, scanners and displays (where each pixel is expressed in terms of red, green and blue signals), and those captured by multispectral imaging systems, where each pixel is described in terms of spectral reflectance over the visible spectrum, approximately 400 to 700 nm.

The destination image (R’G’B’) in Fig. 1 is used for displays. It can be seen that the workflow in Fig. 1 relies on CIE tristimulus values (XYZ) and includes five processes. A brief account of each process is given below.

A. Device Characterisation Model

Each imaging device describes a colour in terms of digital signals (RGB in Fig. 1).
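As an illustration of device characterisation (Processes 1 and 5 in Fig. 1), a deliberately simplified forward model combines per-channel gamma linearisation with a 3x3 primary matrix; the sRGB/Rec. 709 RGB-to-XYZ matrix and the gamma value below are only stand-ins, since a real device profile is measured per monitor:

```python
import numpy as np

# Illustrative RGB-to-XYZ matrix (sRGB/Rec. 709 primaries, D65 white).
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def rgb_to_xyz(rgb, gamma=2.2):
    """Simplified forward characterisation: device RGB in [0, 1] -> CIE XYZ."""
    linear = np.asarray(rgb, dtype=float) ** gamma   # linearise each channel
    return M @ linear                                # mix via the primary matrix
```

The reverse model (Process 5) would invert the matrix and apply 1/gamma, mapping XYZ back to the RGB signals of a particular monitor.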
A well-known problem, device dependency, arises when the colour appearance of an image shown on two monitors appears different due to the different characteristics of the individual devices. Process 1 is the forward characterisation model, which transforms a device’s RGB signals to the standard tristimulus values (XYZ). Methods for characterising CRT displays, cameras and scanners can be found in references [8, 9] respectively. Process 5 is a reverse characterisation model that transforms XYZ values to the RGB signals of a particular monitor, to ensure that images displayed on different monitors have the same colour appearance. The device profiles in Fig. 1 are obtained by measuring a number of colours, defined by the RGB signals of an imaging device (say a display), in terms of XYZ values. Each profile stores the information necessary for characterising the relationship between RGB signals and their corresponding XYZ values for a given imaging device when seen under a pre-defined set of viewing conditions.

B. Calculating Tristimulus Values from Spectra via Multispectral Imaging Systems

Fig. 2 illustrates a multispectral imaging system. A set of spectral filters, each covering a limited range of the visible spectrum, is located in front of a greyscale CCD camera to capture a specimen; in this case, skin exposed to different amounts of UV radiation. The output from the system is a set of sub-images corresponding to each filter. The data can be used to construct a spectral reflectance function for each pixel. References [10, 11] introduce the technologies required to construct such systems. Process 2 in Fig.
1 transforms the capturedimage data from spectral reflectance (R%) to CIEXYZ values corresponding to the real light sourceunder when physical specimens are examined.The resultant information defined from spectraldata is much more flexible than the simplecharacterisation model described in Section A.This process allows for the reproduction ofaccurate images on displays to simulate thespecimens viewed under different illuminationssuch as daylight, office, shop <strong>and</strong> domesticlighting.92
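Process 2 amounts to the standard CIE summation: the measured reflectance is weighted by the spectral power distribution of the chosen light source and by the colour matching functions, then normalised so that a perfect reflector has Y = 100. A minimal sketch follows; the wavelength sampling, argument names, and flat-list representation are assumptions for illustration.

```python
def tristimulus_from_reflectance(reflectance, spd, cmfs):
    """XYZ under a given light source from sampled spectral reflectance.

    reflectance -- R(lambda) samples across roughly 400-700 nm
    spd         -- S(lambda), spectral power distribution of the light source
    cmfs        -- (xbar, ybar, zbar) colour matching function triples,
                   all sampled at the same wavelengths
    """
    # Normalising constant: a perfect reflector (R = 1 everywhere) gets Y = 100.
    k = 100.0 / sum(s * ybar for s, (_, ybar, _) in zip(spd, cmfs))
    x = k * sum(s * r * xb for s, r, (xb, _, _) in zip(spd, reflectance, cmfs))
    y = k * sum(s * r * yb for s, r, (_, yb, _) in zip(spd, reflectance, cmfs))
    z = k * sum(s * r * zb for s, r, (_, _, zb) in zip(spd, reflectance, cmfs))
    return x, y, z
```

Simulating the specimen under daylight versus office lighting then only requires swapping in a different set of `spd` samples, which is exactly the flexibility the text attributes to spectral data.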


[Fig. 1: A colour management framework for medical imaging. A source imaging device (e.g. camera, scanner or display) supplies RGB, or a multispectral imaging system supplies spectral reflectance R%. Process 1 (forward device characterisation model, using the source device profile) or Process 2 (calculation of XYZ under a real light source, using the spectral power distribution of the light) yields XYZ. Process 3 (forward colour appearance model, under the source viewing conditions: illuminant, luminance, surround, etc.) yields JMH; Process 4 (reverse colour appearance model, under the destination viewing conditions) returns XYZ; Process 5 (reverse display characterisation model, using the destination device profile) produces the destination R'G'B'.]


[Fig. 2: A multispectral imaging system capturing skin specimens; spectral filters are mounted in front of a monochrome CCD camera.]

C. Colour Appearance Model

A colour appearance model is defined as a device-independent colour space that has at least three colour attributes and is capable of taking different viewing conditions into account. Recently, the CIE Technical Committee 8-01, Colour Appearance Models for Colour Management Applications [12], recommended CIECAM02 for use in cross-media colour reproduction, for such tasks as reproducing images on monitors, printed papers or projected transparencies. The model is capable of predicting colour appearance over a wide range of viewing parameters, such as illuminant, luminance, surround and background, as defined by the source and destination viewing conditions in Fig. 1. It includes two models: the forward model (Process 3) predicts the colour appearance attributes lightness (J), colourfulness (M) and hue (H) from XYZ values, and the reverse model (Process 4) performs the inverse transformation.

D. Other Issues

Standard viewing conditions

Even though a colour appearance model is included in the proposed framework, it is important to standardise the viewing conditions employed for examining physical specimens and images on displays. Once this is done, the closest viewing parameters can be used to calculate accurate colour appearance attributes (JMH) via the colour appearance model in Fig. 1. Although some standards are available, they are aimed at specific applications, such as ISO 3664 [13] for graphic technology and photography.
This standard is used for judging photographic images displayed on transparency, print and display media. It is highly desirable to establish standard viewing conditions for medical applications. To achieve this, it is necessary to conduct psychophysical experiments in which physicians make diagnoses using medical images and physical specimens under different viewing conditions. The results should specify the setup and tolerances of the colour temperature and luminance of the light source, and the calibration and performance of colour monitors.

Colour difference formulae

Many medical applications involve the visual comparison of differences between two images. Colour difference formulae are designed to indicate the perceptual colour difference between two objects. Hence, visual comparison can be aided or even replaced by an instrumental method, e.g. to develop robust algorithms that determine colour difference thresholds for recognising a tumour. Current colour difference formulae, such as CIELAB or CIEDE2000 [5], are designed for colour patches that subtend a 2-degree viewing field or more. Colour patches in images tend to be much smaller. A CIE Technical Committee, TC8-02, aims to recommend an industrial colour-difference evaluation method that is appropriate for images.
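For orientation, the simplest of these formulae, the CIELAB colour difference ΔE*ab, is just the Euclidean distance between two (L*, a*, b*) coordinates; CIEDE2000 adds weighting and rotation terms on top of this idea. A minimal sketch:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIELAB colour difference Delta E*ab between two (L*, a*, b*) triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))
```

Thresholding such a value (for example, flagging tissue whose colour departs from its surround by more than some ΔE) is the kind of instrumental criterion the text describes, though a clinically meaningful threshold would have to come from psychophysical experiments.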


III. CONCLUSIONS

Colour images are becoming widely used in medical applications. There is a need to standardise a colour management framework to reduce the diagnostic error that arises when looking at images on different displays under different viewing conditions. A framework for achieving this is proposed here, based on colour science developed by the CIE over many years. The framework covers not only conventional three-primary imaging devices but also multispectral imaging systems.

REFERENCES

[1] AAPM TG18, Assessment of display performance for medical imaging systems, AAPM Science Council, 2002.
[2] SMPTE RP133-1991, Specifications for medical diagnostic imaging test pattern for television monitors and hardcopy recording cameras, 1991.
[3] NEMA-DICOM (PS3), Greyscale display standard function (NEMA PS3.14), 1984.
[4] M. Nishibori, Problems and solutions in medical color imaging, The 2nd Multi-spectral Imaging Conference, Chiba, Japan, 10-11 October, pp. 9-17, 2000.
[5] CIE, Colorimetry, CIE Pub. 15:2004, Central Bureau of the CIE, Vienna, 2004.
[6] J. C. Zwinkels, Colour-measuring instruments and their calibration, Special issue on To achieve WYSIWYG colour, edited by M. R. Luo, Displays, vol. 16, pp. 163-172, 1996.
[7] P. A. Rhodes and M. R. Luo, A system for WYSIWYG colour communication, Special issue on To achieve WYSIWYG colour, edited by M. R. Luo, Displays, vol. 16, pp. 213-221, 1996.
[8] R. S. Berns, Methods for characterising CRT displays, Special issue on To achieve WYSIWYG colour, edited by M. R. Luo, Displays, vol. 16, pp. 173-182, 1996.
[9] T. Johnson, Methods for characterising colour scanners and digital cameras, Special issue on To achieve WYSIWYG colour, edited by M. R. Luo, Displays, vol. 16, pp. 183-192, 1996.
[10] F. Konig and W. Praefcke, A multispectral scanner, in Colour Imaging: Vision and Technology, edited by L. W. MacDonald and M. R. Luo, Wiley, 1999, pp. 129-143.
[11] J. Y. Hardeberg, F. Schmitt, H. Brettel, J. P. Crettez and H. Maitre, Multispectral image acquisition and simulation of illuminant changes, in Colour Imaging: Vision and Technology, edited by L. W. MacDonald and M. R. Luo, Wiley, 1999, pp. 146-164.
[12] CIE, A colour appearance model for colour management systems: CIECAM02, CIE Publication No. 159, Central Bureau of the CIE, Vienna, 2004.
[13] ISO 3664, Viewing conditions - Graphic technology and photography, 2004.


Medical Image Analysis - Techniques and Applications


Atlas-based fuzzy connectedness for automatic segmentation of abdominal organs

Yongxin Zhou, Jing Bai
Department of Biomedical Engineering, Tsinghua University
zhouyongxin@263.net, deabj@tsinghua.edu.cn
West Main Building 1-201, Tsinghua University, Beijing, 100084, P.R. China

Abstract—Organ segmentation is an important first step for various medical image applications. In this paper, a framework incorporating a pre-segmented atlas into the fuzzy connectedness (FC) method is proposed for the automatic segmentation of abdominal organs. The atlas is first registered onto the subject to provide an initial approximate segmentation. Based on this initial segmentation, we propose a novel method to estimate all necessary FC parameters, including organ intensity features, seeds, and the optimal FC threshold, automatically and subject-adaptively. This can replace the usual specification by an operator. In addition, a shape modification method employing Euclidean distance and watershed segmentation is appended to the FC method to remedy its deficiency when neighbouring organ intensities overlap. Experiments on CT and MRI images demonstrate the versatility of the method in segmenting images of different modalities. Our method is fully automatic and operator-independent. Therefore, it is expected to find wide application in areas such as 3D visualization, radiation therapy planning, and medical database construction.

Keywords—Atlas-based segmentation, fuzzy connectedness, abdominal organs

I. INTRODUCTION

The segmentation of organs is an important first step for numerous applications, including but not limited to surgical planning, radiation therapy planning, and 3D visualization. Manual segmentation by radiologists is reliable, but undoubtedly tedious and time-consuming. There is a strong tendency to accomplish this task automatically by computer. However, several problems associated with common medical images (e.g. CT or MRI) have to be addressed first, including noise, intensity overlapping, inhomogeneities, and anatomical variability. In contrast, these problems do not pose much difficulty for manual segmentation: the expert's extensive anatomical knowledge plays a key role in this success. This inspires us to complement traditional, solely intensity-based algorithms with prior anatomical knowledge, and thus leads to the strategy of atlas-based segmentation. (This work was supported by the National Nature Science Foundation of China, grant #60331010.)

Atlas-based segmentation aims to introduce atlases into the segmentation process to supplement anatomical knowledge. Various methods following this strategy can be found. One intuitive way is to view segmentation directly as registration. The basic tenet is that a transformation, often nonlinear, can be found which transfers labels in a pre-segmented atlas onto the subject to be segmented [1-3]. However, due to strong anatomical variations and limited transformation freedoms, segmentations relying exclusively on registration are not always satisfactory [4, 5].

The fuzzy connectedness (FC) framework was proposed in [6-8], and showed its outstanding power in [9, 10]. A detailed explanation can be found in [11]. The FC method simultaneously takes into consideration the degree of space adjacency, the degree of intensity adjacency, and the degree of intensity gradient adjacency between two pixels.
This integrative consideration endows the FC method with a distinguished ability to accommodate the defects of medical images mentioned above.

We notice that the main obstacle to an automatic implementation of the FC method is the specification of certain parameters, including seeds, intensity features, intensity gradient features, and the FC threshold. Often they are specified by the operator, which blocks the automation of the FC method.

In this paper, we are interested in integrating a subject-registered atlas into the FC method for the automatic segmentation of abdominal organs. A pre-segmented (integer-labeled) atlas (VIP-Man, as in [12]) is first registered onto the subject through a two-step registration. Based on the registered atlas, we propose a novel method to estimate the necessary FC parameters automatically and subject-adaptively. This automatic specification releases the operator from the burden of viewing volume data, and makes the whole segmentation process operator-independent. Due to intensity overlapping, the organs segmented by the FC method may contain abnormal extrusions from neighbouring organs. A shape modification procedure employing Euclidean boundary distance and watershed segmentation is proposed to detect and remove possible abnormal extrusions. Experiments on both CT and MRI images have demonstrated the efficiency of the proposed algorithm in segmenting multimodality images. A flowchart of the algorithm is displayed in Fig. 1.

II. ATLAS-SUBJECT REGISTRATION

Two facts make our registration a little different from common registration tasks. One is that the atlas we use is not an intensity image but an integer-labeled image. The other is that our aim is not to register the whole volume perfectly, but to align only the organs of interest as well as possible. Remember that the registration is just an initialization for the subsequent FC segmentation.

Therefore, a two-step process, comprising global registration and organ registration, is designed for our case. The goal of global registration is to eliminate the overall misalignment between the atlas and the subject, e.g. differences in imaging position and individual stature. In organ registration, we try to align each organ respectively; that is, a separate registration is carried out on every organ of interest. The following subsections focus on the similarity measures used in the two steps. More details about the registration algorithm can be found in [13].

A. Global Registration

Global registration uses normal Mutual Information (MI) as the registration measure. The validity of MI between an intensity image and an integer-labeled image is worth discussing here. MI can be viewed as a measure of how well one image explains the other, and it achieves its maximum when both images are aligned. Images of different modalities can be registered by MI as long as homogeneous areas in one image correspond to homogeneous areas in the other [14]. This holds true between our atlas and subject. The validity of MI is also verified by our experiments.

B. Organ Registration

A novel registration measure for organ registration is proposed. Take the organ labeled k for example. We first define a rough intensity range for organ k. Pixels within this range are provisionally considered organ k's pixels. For a specific transformation T, organ k in the transformed atlas has a corresponding region in the subject. All pixels in that region can be divided into two groups: provisional organ pixels and other pixels. The numbers of pixels in these two groups are defined as N_In(T) for organ pixels and N_Out(T) for other pixels.

The global optimal transformation T_g serves as a starting point, so the changes in the numbers of pixels in the two groups with respect to a specific T can be calculated as

    ΔN_In(T) = N_In(T) − N_In(T_g),
    ΔN_Out(T) = N_Out(T) − N_Out(T_g).    (1)

Then the registration measure is defined as

    M(T) = ΔN_In(T) − ΔN_Out(T).    (2)

By maximizing M(T) over T, we essentially try to increase the number of provisional organ pixels in organ k's corresponding region, as well as to decrease the number of other pixels.

[Fig. 1: Flowchart of the proposed algorithm. The atlas is aligned to the subject by global registration (in volume) and organ registration (in slices); the registered atlas then initializes FC segmentation, followed by shape modification, to produce the results.]
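The organ registration measure of eqs. (1)-(2) can be sketched as follows. This is an illustrative sketch: the function and argument names are assumptions, the image is a plain 2-D list of intensities, and the organ mask stands for organ k's region under a candidate transformation T.

```python
def registration_measure(subject, organ_mask, intensity_range, n_in_g, n_out_g):
    """M(T) = dN_In(T) - dN_Out(T) for one candidate transformation T.

    subject         -- 2-D list of intensities
    organ_mask      -- 2-D list of booleans: organ k's region under T
    intensity_range -- (low, high) rough intensity range for organ k
    n_in_g, n_out_g -- N_In(T_g) and N_Out(T_g) at the global optimum T_g
    """
    low, high = intensity_range
    n_in = n_out = 0
    for subj_row, mask_row in zip(subject, organ_mask):
        for value, inside in zip(subj_row, mask_row):
            if inside:
                if low <= value <= high:
                    n_in += 1    # provisional organ pixel
                else:
                    n_out += 1   # other pixel
    return (n_in - n_in_g) - (n_out - n_out_g)
```

An optimizer would evaluate this measure over candidate transformations and keep the T that maximizes it, rewarding transformations that pull in-range pixels into the region and push out-of-range pixels out.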


III. FUZZY CONNECTEDNESS SEGMENTATION

We choose to segment organs slice by slice. First, an initial slice is extracted from the middle of the subject volume, along with the corresponding slice extracted from the atlas volume. The subject slice is segmented under the guidance of the atlas slice. The process then goes through two iterations from the initial slice along opposite directions: one goes upward to the topmost slice, and the other goes downward to the bottommost slice. Slice-wise segmentation accommodates volume images with low inter-slice resolutions, which are common in practice. In the following, we denote the i-th subject slice as S_i(x), and the i-th atlas slice as A_i(x).

A. Implementation of Fuzzy Affinity

Fuzzy affinity is a measure of the strength of local hanging-togetherness between two nearby pixels. In our method, the following function is utilized:

    µ_k(x, y) = µ_α(x, y) · µ_Φ(x, y) · µ_ψ(x, y),    (3)

where µ_α(x, y), µ_Φ(x, y) and µ_ψ(x, y) are the space adjacency function, intensity feature function, and intensity gradient feature function respectively. µ_α(x, y) is defined as

    µ_α(x, y) = 1 if x, y are 4-adjacent; 0 otherwise.    (4)

The specification of µ_Φ(x, y) and µ_ψ(x, y) is described in the following subsection.

B. Specification of Intensity Features

The slice A_i(x) delineates a corresponding region for each organ in S_i(x). The corresponding region of organ k can be denoted as

    C_ik = {x | x ∈ S_i(x), A_i(x) = k}.    (5)

The intensity histogram of pixels in C_ik is constructed and denoted as H_ik(χ), where χ represents an intensity level.

Note that after registration there are still some pixels in C_ik which do not belong to organ k. To eliminate the disturbance of those pixels, we decompose H_ik(χ) into a combination of one or more Gaussian components, using the least squares fitting method, with the adjusted R-square used to determine the optimal n:

    H_ik(χ) ≈ Σ_{j=1..n} a_j exp[−((χ − b_j)/c_j)²],    (6)

where a_j, b_j and c_j are the amplitude, centroid and width of the j-th Gaussian component respectively.

We relate the Gaussian component with the largest amplitude a_j to organ k, and discard all other components. Let b_ik and c_ik be the centroid and peak width of the largest component; then we rewrite H_ik(χ) as

    H_ik(S_i(x)) = exp[−((S_i(x) − b_ik)/(t·c_ik))²],    (7)

where experiment shows that a number between 1 and 2 is preferable for t. Then we define µ_Φ(x, y) as

    µ_Φ(x, y) = min[H_ik(S_i(x)), H_ik(S_i(y))].    (8)

Let M_ik and D_ik be the mean and standard deviation of the intensity differences |S_i(x) − S_i(y)| over all possible pairs (x, y) in C_ik such that x ≠ y and µ_α(x, y) > 0. Through experiments, we found that the following function is appropriate for µ_ψ(x, y):

    µ_ψ(d_ψ) = 1,                            for 0 ≤ d_ψ ≤ a_1ψ,
    µ_ψ(d_ψ) = (a_2ψ − d_ψ)/(a_2ψ − a_1ψ),   for a_1ψ ≤ d_ψ ≤ a_2ψ,    (9)

where

    d_ψ = |S_i(x) − S_i(y)|,    (10)
    a_1ψ = M_ik,  a_2ψ = M_ik + p·D_ik.    (11)

Experiment shows that a number between 2 and 4 is preferable for p.

C. Specification of Organ Seed

The theory of fuzzy connectedness guarantees that as long as seeds are within the object region, the segmentation result remains the same [6]. We search for a seed of organ k in C_ik according to the following two rules: 1) the seed itself should have a high possibility of being an organ pixel; 2) among all pixels satisfying rule 1), the pixel which has the largest overall possibility of being an organ pixel in its neighbouring region is selected as the seed. Note that H_ik(S_i(x)) can be viewed as the possibility of a pixel S_i(x) being an organ pixel.

D. Relative Fuzzy Connectedness

The fuzzy connectedness scene (FCS) of each organ in S_i(x) can be computed using the dynamic programming algorithm in [11], denoted as FCS_ik(x). Pixel
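The affinity components of eqs. (3) and (7)-(11) can be sketched as follows. This is a simplified scalar sketch: the function and parameter names are illustrative (t, a1 and a2 correspond to t, a_1ψ and a_2ψ in the text), and the zero value of µ_ψ beyond a_2ψ is an assumption where the printed equation is illegible.

```python
import math

def mu_phi(s_x, s_y, b_ik, c_ik, t=1.5):
    """Intensity affinity, eqs. (7)-(8): Gaussian likelihood of each
    intensity against organ k's fitted component, then the minimum."""
    h = lambda s: math.exp(-((s - b_ik) / (t * c_ik)) ** 2)
    return min(h(s_x), h(s_y))

def mu_psi(d, a1, a2):
    """Intensity-gradient affinity, eq. (9): 1 up to a1, a linear
    ramp down between a1 and a2, and (assumed) 0 beyond a2."""
    if d <= a1:
        return 1.0
    if d <= a2:
        return (a2 - d) / (a2 - a1)
    return 0.0

def affinity(s_x, s_y, adjacent, b_ik, c_ik, a1, a2, t=1.5):
    """Fuzzy affinity, eq. (3): product of the spatial, intensity
    and intensity-gradient components."""
    if not adjacent:            # mu_alpha of eq. (4)
        return 0.0
    return mu_phi(s_x, s_y, b_ik, c_ik, t) * mu_psi(abs(s_x - s_y), a1, a2)
```

In the full method, b_ik and c_ik come from the Gaussian fit of eq. (6), and a1 and a2 from the pairwise difference statistics of eq. (11), all derived automatically from the registered atlas region C_ik.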


versatility of segmenting images from different modalities.

The subject of the CT images is a Chinese woman aged 70. The pixel size is 0.67×0.67 mm² in transverse slices, and the slice thickness is 5 mm. The segmentation result obtained using the proposed method is shown in Fig. 4.

Performance of the method is measured quantitatively using the false negative rate (R_fn) and the false positive rate (R_fp), with manual segmentation as the ground truth. Table I presents the averages of the false rates for each tissue. We can see that the averages are relatively small, which indicates that the segmentation results are very close to the manual segmentation results.

As another test, we also applied our method to segment organs, including the liver, spleen, and kidneys, from the female MRI T1 data set of the VHP project. The female MRI T1 data set contains coronal slices. The segmentation result is shown in Fig. 5. The false rates, with manual segmentation results as the ground truth, are listed in Table II. We can see promising results with our method on the segmentation of organs in MRI images. The false positive rates of the spleen are slightly high in a few slices, which is partially due to the small areas of the spleen in those slices.

[Fig. 4 (a)-(c): Result slices of CT image segmentation. The white contours represent the segmentation results.]

TABLE I: Performance measures for the CT data set

                 Liver     Right Kidney   Left Kidney   Spleen
  Ave. of R_fp   0.0518    0.0697         0.0441        0.0427
  Ave. of R_fn   0.0221    0.0466         0.0305        0.0382

[Fig. 5 (a)-(c): Segmentation result slices of the female MRI T1 data set.]

TABLE II: Performance measures for the MRI T1 data set

                 Liver     Right Kidney   Left Kidney   Spleen
  Ave. of R_fp   0.0170    0.0839         0.0571        0.1176
  Ave. of R_fn   0.0374    0.0660         0.0583        0.0561

VI. CONCLUSION

In this paper, an automatic method for the identification and segmentation of multiple abdominal organs is proposed. By utilizing a pre-labeled atlas to supplement anatomical knowledge, we try to accomplish the process the way an anatomist does his work.

The main contribution of this paper is the use of a registered atlas to initialize the fuzzy connectedness method. The registered atlas, which is in correspondence with the subject, delineates initial regions in the subject image for each organ of interest. Thus, instead of the usual manual specification, all the initial parameters of the fuzzy connectedness method are specified automatically and subject-adaptively. The method proposed in this paper shows its versatility in the successful segmentation of CT and MRI images. We notice that previous research works have mainly focused on a single modality. We believe that the segmentation of organs from both CT and MRI images also opens the possibility of extending the method to other imaging modalities.

REFERENCES

[1] B. M. Dawant, S. L. Hartmann, J. P. Thirion, F. Maes, D. Vandermeulen, and P. Demaerel, "Automatic 3-D segmentation of internal structures of the head in MR images using a combination of similarity and free-form


Skeletonizing and Automatically Detecting the Bifurcation of Blood Vessels in X-ray Angiograms

Shui Haomiao 1, Li Qin 1, Yang Jian 2, Zhou Shoujun 2, Tang Songyuan 2, Liu Yue 2, Wang Yongtian 2
1. School of Life Science & Technology, Department of Biomedical Engineering, Beijing Institute of Technology, Beijing, 100081
2. School of Information Science & Technology, Department of Opto-electronic Engineering, Beijing Institute of Technology, Beijing, 100081
E-mail: wyt@bit.edu.cn

Abstract: This paper presents an efficient method for skeletonizing and automatically detecting the bifurcations of blood vessels in X-ray angiograms. The method can provide important data and information for 3-D reconstruction of the blood vessel tree from angiographic images. The proposed algorithm is composed of three steps: (1) vascular skeletons are generated by a thinning algorithm in binary images, with the aim of obtaining a one-pixel-wide vascular tree; (2) a template is fitted to the vessel to classify the branching points; (3) according to the theory of vascular tree tracking, the bifurcations of the vascular tree are filtered and labeled automatically.

Keywords: DSA, Skeletonization, Bifurcation, Anatomical labeling

I. INTRODUCTION

Diseases of the vascular system cause heart attacks, stroke, aneurysms, hypertension, and poor perfusion. To fix problems with the vascular system, we need a way to gain access to the vessels and to assess the success of the treatment. X-ray angiography is an important screening procedure in the diagnosis of vascular disease, because angiography provides the means to view the vascular system.
However, the major weakness of conventional angiography is the lack of reproducible quantitative analysis. Therefore, visual modeling of the vascular network in three-dimensional space might prove very useful in clinical diagnosis. Because 3-D reconstruction of the vessels has enabled more and more analysis techniques for characterizing the vascular system and its abnormalities, it can provide quantitative diagnostic information.

The analysis of the coronary arteries is of particular importance in the medical diagnosis of heart disease. Coronary angiography uses multiple X-ray images obtained at various angles around the arteries. A 3-D vessel model of the coronary vasculature can be reconstructed from a biplane image pair at one time frame [1]. Automatic 3-D reconstruction of vascular structures has previously been investigated from segmented images and digital subtraction angiography data, with the arterial structural points specified manually [2]. The arterial structure consists of vessel branch points (nodes) and lines between the branch points. An accurate registration procedure may be based on finding correspondences between points obtained from the segmented centerline of the arterial tree. These points, as well as the vascular tree, are features that are visible in most vessel images. It is therefore relevant to develop a suitable automatic labeling algorithm to obtain branch points in the coronary arterial trees, subsequently used for 3-D reconstruction of the coronary arteries. So the detection of lines and bifurcations is an important task for 3-D reconstruction of the arterial tree. Generally speaking, branch detection techniques can be classified into two major categories: geometry-based and template-based [3]. In this work, our aim is to derive a reliable method for the recognition of branching structures by a hybrid method. However, the method needs to thin the vessels to a single pixel width.
If there are ambiguities caused by the line thinning, the subsequent labeling of branch points may be difficult. So we developed a thinning algorithm for extracting the skeleton of the vessels; the skeletal representation of the arteries can then provide appropriate structures for the detection of furcation points. This is of benefit for registering multiple angiographic projections and can be further utilized for 3-D reconstruction of the arterial tree.

The paper includes two main parts: first, we describe the thinning method for yielding vessel skeletons; second, the bifurcation labeling algorithm is presented. Finally, results on patient coronary arteries are presented.

II. METHOD

The proposed skeleton extraction method builds on the results of vessel image segmentation. Through segmentation, the extracted vessels are represented as binary data. The processing is done by skeletonizing and labeling.

A. Extract the Skeletons of the Vessels

We extract the skeletons of the vessels by an iterative thinning algorithm. In the algorithm, our aim is to delete all pixels that lie on the outer boundaries of the resultant graph. However, if deleting one of these pixels would result in a disconnected graph, the pixel is not deleted. The necessary and sufficient condition for the thinning algorithm is to preserve the connectivity of the resulting skeleton. At each iteration, a 3×3 window is used for each pixel. That is, the values of the eight neighbors of a center pixel (p1) are used in the calculation of its value for the next iteration. The neighboring values of the pixels can be denoted by the eight directions (p2, p3, ..., p9), labeled as shown in Fig. 1. It is a graph of white and black pixels. We suppose the value of an object point is 1, and the value of a background point is 0. The number of object pixels among the eight neighbors is defined by the equation:

    N(p1) = Σ_{i=2..9} p_i.


p9 p2 p3
p8 p1 p4
p7 p6 p5

Fig. 1. A pixel's 8-neighbourhood

Count the number of switches between 0 and 1 around p1 along the clockwise direction, i.e. from p2, p3, ..., p8, p9, back to p2, and denote this number S(p1).

The iterative algorithm includes two steps. In the first iteration, boundary points with N(p1) >= 1 are searched. A boundary point becomes a candidate for deletion if p1 satisfies the following conditions:

C1: 2
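The two counts N(p1) and S(p1) can be sketched directly from the Fig. 1 labeling. This is a sketch under simple assumptions: the image is a 2-D list of 0/1 values and (r, c) indexes an interior pixel.

```python
def neighbours(img, r, c):
    """p2..p9 clockwise, starting from the pixel above p1 (Fig. 1)."""
    return [img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
            img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1]]

def n_p1(img, r, c):
    """N(p1): number of object pixels among the eight neighbours."""
    return sum(neighbours(img, r, c))

def s_p1(img, r, c):
    """S(p1): number of 0-to-1 switches in the clockwise sequence
    p2, p3, ..., p9 and back to p2."""
    p = neighbours(img, r, c)
    return sum(1 for a, b in zip(p, p[1:] + p[:1]) if a == 0 and b == 1)
```

An iteration of the thinning algorithm would evaluate these counts for every boundary pixel, mark the pixels satisfying the deletion conditions, and remove them only if connectivity (S(p1)) permits.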


Fig. 4. The sixteen templates for the determination of bifurcations

C. Labeling of Bifurcation Points

After the above section, the skeletons of the vessels have been extracted from the subtracted image, and we obtain the skeletons' furcation points, cross points, start points and end points. Every node denotes vessel segments and the connecting relations of these vessel segments. However, the vessel segments have different projections in different projecting directions. A furcation point describes the structure of a branch, whereas a cross point appears because of the overlap of projections. As shown in Fig. 5, there is no cross point in projection A, but in projection B there exists a cross point caused by the overlap. From anatomical knowledge, a real vessel does not cross itself, so we can distinguish the real nodes from the ambiguous points by the vascular tree tracking method.

The vascular tree tracking method was described in [5]. According to the width of the vessel, we appoint the start point, which is the initial node of the zero-level vessel. Vessels are divided into many segments by nodes. Each segment has a start point and an end point, and these two points express the direction of the segment; that is, the direction of a vector is determined by the angle made by a bifurcation point and a furcation. It is therefore possible to track a vector if the internal angle (Φ) of each branch is over 90°. For example, as shown in Fig. 5 B, the internal angle Φ_abh is over 90°, so the vessel is tracked from branch ab to bh. Branch bd gives the same result. As the angle Φ_bhe is approximately 180°, the vessel is tracked from branch bh to he (in fact, bhe is one whole branch). As the angle Φ_bhc is under 90°, the vessel is not tracked from branch bh to hc. However, the algorithm is not suitable for several situations: as we can see, the angles Φ_bhf and Φ_bhe are both over 90°, yet bh is not the father branch of hf. For such cases we add a rule to the tracking algorithm: if there is a crossover, the internal angles between the son branches and their father branches are compared to decide the tracking path. In Fig. 5 B, the angle Φ_bhe is larger than Φ_bhf and, at the same time, Φ_chf is larger than Φ_bhf, so the vessel is tracked from branch bh to he and from ch to hf. After the labeling process, we retain the real bifurcation points and remove the ambiguous intersections. The result of this method is illustrated in Fig. 6.

III. DISCUSSION

The approach can identify the majority of bifurcation points in any single frame of the image sequence, but it may be inexact in complicated situations, such as a point with more than two bifurcations. If the situation is too complex to recognise, the problem can be resolved by considering information derived from different frames in the sequence [6]. However, our long-term research objective is to create a clinically useful system that produces 3-D visualizations of the coronary vasculature. We need to construct an attribute table for nodes based on the topology structure figure, because vessels are divided into many segments by furcation points and cross points, and these segments constitute the corresponding topology structure figure. Every node has an attribute table containing various feature information, including the positions of start points, the width of the vessel and its direction. The labeling procedure will be more efficient with an attribute table based on anatomical knowledge [7], [8].

IV.
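The angle test used for tracking can be sketched numerically; the 2-D node coordinates, function names and the simple dot-product formulation below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the internal-angle test used to track the
# vascular tree: track from branch ab to bc only if the angle at b
# exceeds 90 degrees. Coordinates and names are hypothetical.
import math

def internal_angle(a, b, c):
    """Angle (degrees) at node b between segments ba and bc."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def tracked(a, b, c):
    """Continue tracking from branch ab into bc if the internal angle > 90."""
    return internal_angle(a, b, c) > 90.0

# A straight continuation (angle ~180 deg) is tracked; a right-angle turn is not.
print(tracked((0, 0), (1, 0), (2, 0)))  # True
print(tracked((0, 0), (1, 0), (1, 1)))  # False (angle is exactly 90)
```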
CONCLUSION AND FUTURE WORK

This paper proposes an automatic labeling method for the bifurcation points of a single coronary angiographic image. The approach involves extracting the centerline of the vessel and recognizing bifurcations. An improved thinning method corrects the distorted skeletons of the vessels and helps to label the bifurcation points. The procedure is efficient for the three-level vessels that are of most clinical concern. This paper does not, however, address improving the image segmentation process, which can significantly affect the labeling results; that will be included in our future work.

In our future work, a vessel model will be reconstructed from the data sets of the perfused mode of the coronary artery. The topology structure figures of the real projections will then be connected to each other through those of the model projections [9]. If it is difficult to remove the ambiguities caused by overlap using temporal disambiguation, we can eliminate this constraint by combining information from multiple images. Because rotary DSA can acquire a set of real projections during one operation, a vessel overlap in one image may not appear at another viewing angle. In such cases disambiguation may be possible by considering a frame (or a sequence of frames) acquired from another view. Consequently, a spatiotemporal disambiguation is being considered.

ACKNOWLEDGMENT

This work is supported by the National 973 Program of China (2003CB716105) and the Natural Science Foundation of Beijing Institute of Technology (BIT-UBF-200306C04).

Fig. 5. Graphical vascular structure in two projecting directions


Fig. 6. An example of the labeling result for a coronary artery angiogram. (a) A patient's coronary angiographic image. (b) Extracted skeleton superposed on the original image. (c) The angiogram with the labeled nodes.

REFERENCES

[1] Pellot, A. Herment, M. Sigelle, P. Horain, H. Maitre, and P. Peronneau, "A 3D Reconstruction of Vascular Structures from Two X-Ray Angiograms Using an Adapted Simulated Annealing Algorithm," IEEE Trans. Med. Imag., vol. 13, no. 1, pp. 48-60, March 1994.
[2] Alok Sarwal, Atam P. Dhawan, and Yateen S. Chitre, "3-D Reconstruction of Coronary Arteries using Estimation Techniques," Proc. SPIE Int. Soc. Opt. Eng. 2434, 361, 1995.
[3] Li Wang and Abhir Bhalerao, "Model Based Detection of Branching Structures," University of Warwick, Coventry, CV4 7AL, UK.
[4] T. Nakamura, H. Enowaki, H. Itoh, and L. He, "Skeleton revision algorithm using maximal circles," in Proc. Fourteenth International Conference on Pattern Recognition, vol. 2, pp. 1607-1609, Aug. 1998.
[5] Yuji Hatanaka, Takeshi Hara, Hiroshi Fujita, Masaru Aoyama, Hideya Uchida, and Tetsuya Yamamoto, "Automated analysis of the distributions and geometries of blood vessels on retinal fundus images," Proc. SPIE Int. Soc. Opt. Eng. 5370, 1621, 2004.
[6] Norberto Ezquerra, Steve Capell, Larry Klein, and Pieter Duijves, "Model-Guided Labeling of Coronary Structure," IEEE Trans. Med. Imag., vol. 17, no. 3, pp. 429-441, June 1998.
[7] Hong Wei, Mou Xuanqin, Wang Yong, and Cai Yuanlong, "A Novel Method of Vessel Axis Reconstruction from the Rotary Projections Based on Furcation Model," Proceedings of SPIE, vol. 4549, 2001.
[8] J.W. Peifer, N.F. Ezquerra, C.D. Cooke, R. Mullick, L. Klein, M.E. Hyche, and E.V. Garcia, "Visualization of Multimodality Cardiac Imagery," IEEE Trans. Biomed. Eng., vol. 37, no. 8, pp. 744-756, Aug. 1990.
[9] Mireille Garreau, Jean Louis Coatrieux, Rene Collorec, and Christine Chardenon, "A Knowledge-Based Approach for 3-D Reconstruction and Labeling of Vascular Networks from Biplane Angiographic Projections," IEEE Trans. Med. Imag., vol. 10, no. 2, pp. 122-130, June 1991.


Data classification using Bayesian conditional random fields

Jiahua Wu, Huiyu Zhou*, and Jia He

Abstract — This paper proposes a new framework for the classification of structured data. A number of successful algorithms have been reported to date; however, all these established techniques are problem-specific. Furthermore, the models underlying these methods cannot represent long-range dependencies of the observations. In this paper, we introduce a conditional random fields (CRF) based approach to improve classification performance. This Bayesian probabilistic model attempts to find an optimal solution to the classical classification problem. Experimental results show that our approach classifies synthetic data better than traditional algorithms.

Keywords — Classification, Bayesian, generative model, conditional random fields, label.

I. INTRODUCTION

THE ability to classify structured data is an open problem in our research community. Recent literature reveals an increasing interest in this topic [1], [2], [3]. A number of successful algorithms have also been reported to date, and their applications can be found in the areas of bioinformatics, speech recognition, computational linguistics, and biomedical information extraction. For example, hidden Markov models (HMMs) have been successfully used in computational biology to discover sequences homologous to a known evolutionary family [4]. Kumar et al. [2] proposed a probabilistic region classification scheme for natural images.
Settles [5] presented a framework for simultaneously recognizing occurrences of biomedical named entity classes using Conditional Random Fields (CRFs) with a variety of traditional and novel features.

Traditional classification models, e.g. HMMs and stochastic grammars, are forms of generative model, which attempt to define a joint probability distribution over observation and label sequences; in principle, such a model must enumerate all possible observation sequences [6]. However, generative models have inherent weaknesses. Firstly, they need to specify the data generation process (i.e. data sampling) [3]. Secondly, they cannot represent long-range dependencies of the observations [1].

Maximum entropy Markov models (MEMMs) and other non-generative models (e.g. discriminative Markov models) [7] have a limitation known as the label bias problem, in which the transitions leaving a given state compete only against each other, rather than against all the transitions in the model. In other words, transition scores are the conditional probabilities of possible next states given the current state and the observation sequence [1]. This leads to a bias towards states with fewer outgoing transitions.

Of the available approaches that address these shortcomings, conditional random field (CRF) based approaches provide a probabilistic framework for classifying structured data, based on a conditional approach.

Jiahua Wu is with the Wellcome Trust Sanger Institute, Cambridge, CB10 1HH, United Kingdom (E-mail: jerry.wu@sanger.ac.uk). Huiyu Zhou is with the Department of Computer Science, University of Essex, Colchester, CO4 3SQ, United Kingdom (*Corresponding author. E-mail: zhou@essex.ac.uk). Jia He is with the Department of Computer Science, Chengdu University of Information Technology, P.R. of China (E-mail: hejia@cuit.edu.cn).
In fact, a CRF is a form of undirected graphical model that defines a single log-linear distribution over label sequences given an observation sequence. The main merit of CRFs over HMMs is their conditional nature: the independence assumptions required by HMMs are relaxed.

Sutton et al. [8] introduced the Dynamic Conditional Random Fields (DCRF) approach, a generalization of linear-chain CRFs that repeats structure and parameters over a sequence of state vectors [9]. A DCRF with multiple state variables can be collapsed into a linear-chain CRF, but this CRF needs exponentially many states in the number of variables. Additionally, a DCRF can model higher-order Markov dependencies between labels and dramatically reduce the required training data. However, it was reported that the training stage was insufficiently efficient [8].

Qi et al. [3] proposed Bayesian Conditional Random Fields (BCRFs), which alleviate the problem of over-fitting by taking advantage of a Bayesian scheme. Unlike the work in [1], they estimated the posterior distribution of the model parameters during training, and then averaged over this posterior during inference. An extended expectation propagation (EP) technique [9] was used in combination with the partition function. The prediction accuracy of BCRFs outperformed other CRFs. However, these models need to be improved for sparse kernel modeling.

Lafferty et al. [10] presented an extension of CRFs that permits the use of implicit feature spaces through Mercer kernels, using the framework of regularization theory. A


procedure for selecting cliques in the dual representation was then revealed, which allowed a sparse representation. The combination of kernels and implicit features with CRFs enables semi-supervised learning algorithms for structured data.

In this paper, we address the deficiency of classical algorithms in classifying structured data. Our contribution is that a Bayesian model is employed to estimate the posterior over labels given the observed data set. Because the distribution of labels is used, the relationships between individual data points need not be considered. In addition, the side effect of the overlaid data points, which may corrupt the classification, is properly minimized.

The rest of this paper is arranged as follows: Section II describes the basics of classical conditional random fields. Bayesian conditional random fields are then introduced in Section III, where the proposed approach is also discussed. Experimental results using synthetic data are explored in Section IV. Finally, conclusions and future work are presented in Section V.

II. CONDITIONAL RANDOM FIELDS

Let us assume that x is a random variable over sequences to be labeled, and y is a random variable over corresponding label sequences. Suppose that all components y_i belong to a finite label alphabet Y. Then we have the following definition of CRFs [1]:

Definition: Let G = (U, V) be a graph such that y is indexed by the vertices of G. Then (x, y) is a CRF if, when conditioned on x, the random variables y obey the Markov property with respect to the graph: p(y_i | x, y_{U-i}) = p(y_i | x, y_{U_i}), where U-i is the set of all nodes in G except node i and U_i is the set of neighbors of i in G.

Unlike classical generative random fields, CRFs only model the conditional distribution p(y|x) and do not explicitly model the marginal p(x). If the graph G = (U, V) of y is a tree, its cliques are its edges and vertices. Using the fundamental theorem of random fields [11], the joint distribution over the label sequence y given x has the form

p(y|x) ∝ exp( Σ_{s∈(S,j)} α_j f_j(s, y|_s, x) + Σ_{v∈(V,j)} β_j g_j(v, y|_v) ),   (1)

where y|_t is the set of components of y associated with the vertices of sub-graph t.

It is intended to determine the weight parameters (α_1, α_2, ...; β_1, β_2, ...) from training data with the known parameters s, v, y. However, this determination is an open problem, as the computation is non-trivial. The vital point is the estimation of the partition function (for segmentation and labeling), which is dominated by the undetermined weights.

The right-hand side of Eq. (1) can be represented by the following equation using the profit function ψ(•) (the cumulative distribution function of a Gaussian):

h_{f,g}(S, V, j; w) = ψ( w_j(α_j, β_j) | v, s, x ),   (2)

where w is a weight corresponding to different labels in the classification process. Assume that the data likelihood is given by Eq. (1) and a Gaussian prior as follows:

p_g(w) ~ N(w | 0, var).   (3)

To seek an analytical solution that achieves an appropriate classification of the structured data (in other words, the posterior probability for the segmentation is pursued), we introduce Bayesian machinery in association with the classical CRFs to estimate var in Eq. (3).

III. BAYESIAN CONDITIONAL RANDOM FIELDS

A.
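The exponential form of the conditional distribution can be sketched for a linear chain, using brute-force normalization for the partition function; the toy labels, features and weights below are assumptions for illustration, not the paper's model.

```python
# Minimal sketch of a linear-chain conditional distribution
# p(y|x) = exp(score(y, x)) / Z(x), with indicator features over
# (label, observation) pairs and label pairs. Toy weights, not the paper's.
import math
from itertools import product

LABELS = ["A", "B"]

def score(y, x, w_edge, w_node):
    """Unnormalized log-score: node features plus edge features."""
    s = sum(w_node.get((y[i], x[i]), 0.0) for i in range(len(x)))
    s += sum(w_edge.get((y[i], y[i + 1]), 0.0) for i in range(len(x) - 1))
    return s

def p_y_given_x(y, x, w_edge, w_node):
    """Conditional probability with Z(x) computed by enumeration."""
    z = sum(math.exp(score(yy, x, w_edge, w_node))
            for yy in product(LABELS, repeat=len(x)))
    return math.exp(score(y, x, w_edge, w_node)) / z

x = [0, 1, 1]
w_node = {("A", 0): 1.0, ("B", 1): 1.0}
w_edge = {("B", "B"): 0.5}
probs = {yy: p_y_given_x(yy, x, w_edge, w_node)
         for yy in product(LABELS, repeat=len(x))}
best = max(probs, key=probs.get)
print(best)  # ('A', 'B', 'B')
print(round(sum(probs.values()), 6))  # 1.0
```

Note that the normalizer Z(x) depends on the whole sequence, which is what distinguishes CRFs from the per-state normalization of MEMMs.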
Problem definition

The weights of a CRF, w, are set to maximize the conditional log-likelihood of the labeled sequences in a pre-selected training set {(a,b)_1, (a,b)_2, ..., (a,b)_n}:

L(w) = Σ_{i=1}^{n} log P_w(b_i | a_i) − Σ_{j=1}^{k} w_j² / (2σ²),   (4)

where the second term is a Gaussian prior that smooths over the concerned area in order to prevent sparsity in that region [12]. If the training data make the state sequence unambiguous, then the likelihood function of exponential models (in this case, CRFs) is convex, so a minimum is guaranteed. This goal is reached by iteratively reducing the errors in a cost function. In this text, we first look at the established expectation propagation (EP) technique [13].

B. Expectation propagation algorithm

Expectation propagation is based on the fact that the posterior is a product of simple terms [13]. Each of these terms can be approximated as

p(w | x, y) ≈ q(w),   (5)

where
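The penalized objective in Eq. (4) can be sketched numerically; a toy logistic likelihood stands in for the CRF term here, and the data and model form are assumptions for illustration only.

```python
# Sketch of the penalized conditional log-likelihood of Eq. (4):
# L(w) = sum_i log P_w(b_i | a_i) - sum_j w_j^2 / (2 sigma^2).
# A toy logistic model P(b=1|a) = sigmoid(w0 + w1*a) stands in for the CRF.
import math

def penalized_log_likelihood(w, data, sigma):
    def p1(a):
        return 1.0 / (1.0 + math.exp(-(w[0] + w[1] * a)))
    ll = sum(math.log(p1(a) if b == 1 else 1.0 - p1(a)) for a, b in data)
    prior = sum(wj * wj for wj in w) / (2.0 * sigma ** 2)
    return ll - prior

data = [(0.0, 0), (1.0, 1), (2.0, 1)]
# At w = 0 every prediction is 0.5 and the prior penalty vanishes,
# so L = 3 * log(0.5).
print(penalized_log_likelihood([0.0, 0.0], data, sigma=1.0))
```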


q(w) = (1/Z(w)) p_w(w) Π_{k∈ε} l_k(w),   (6)

where the parameters for the numerator terms are updated iteratively [10]. For each l_k, the algorithm removes its contribution to q(w); a proper l_k is then searched for so that the KL-divergence is minimized when the true l_k is found [10].

For the denominator, if one wants to apply the previous method directly to Z(w), the moments of [q(w)/Z(w)] have to be computed, which is very difficult. To simplify the computation, Minka [14] proposed the "Power EP" method, which only needs to compute the moments of [q(w)Z(w)]; these have a similar form to the numerator terms, which dramatically reduces the computational cost. The problem is then solved by a nested invocation of EP. Unfortunately, this improved EP strategy applies an expectation-maximization based technique that easily leads to slow convergence.

C. Enhanced Bayesian CRF model

It is well recognized that for twice differentiable functions L(θ) the Newton updates

θ ← θ − [∂²L(θ)]⁻¹ ∂L(θ)   (7)

are quadratically convergent in a neighborhood of the minimizer of L. By the chain rule for differentiation, for the parameterization θ = Aα one obtains the following update rule for α:

α ← α − [Aᵀ ∂²L(θ) A]⁻¹ [∂L(θ) A]ᵀ.   (8)

Additionally, for convex functions the Newton method approaches the minimum. We therefore compute the gradient and Hessian of the negative log-posterior P = −log p(θ | x, y) for optimization:

∂P = −Σ_{i=1}^{m} Φ(x_i, y_i) + Σ_{i=1}^{m} E_y[Φ(x_i, y) | θ, x_i] + σ⁻² θ,   (9)

and

∂²P = Σ_{i=1}^{m} cov_y[Φ(x_i, y) | θ, x_i] + σ⁻² 1.   (10)

We can evaluate Eqs. (9) and (10) via scalar products, since the statistics are explicitly available [15]. On the other hand, we can decompose the solution into a linear combination of vectors (0, ..., 0, Φ(x, y), 0, ..., 0). We then have

Φ(x, y)ᵀ ∂P = −Σ_{i=1}^{m} k((x_i, y_i), (x, y)) + Σ_{i=1}^{m} E_y[k((x_i, y), (x, y))] + σ⁻² Σ_j α_j k((x_j, y^j), (x, y)),   (11)

where the y^j represent all possible instantiations of the clique C of y and the α_j are the expansion coefficients of θ pertaining to the subspace/clique C. To pick a subset of labels on the clique C we can use sparse greedy methods [15].

The projection of the Hessian is given by

Φ(x, y)ᵀ ∂²P Φ(x', y') = σ⁻² k((x, y), (x', y')) + Σ_{i=1}^{m} cov_y[k((x_i, y), (x, y)), k((x_i, y), (x', y')) | x_i].   (12)

Eq. (12) defines a positive semi-definite matrix but is expensive to compute. To improve this, one can use a conditional maximum-a-posteriori estimation or a block-preconditioned gradient descent method to tackle the inefficiency problem (see [15] for details).

To improve the computing efficiency, we here add an extra term (an "adjusting term") to the right-hand side of Eq. (12), namely τΦ(x, y). This set-up allows the Hessian to be computed much faster, as the acceleration factor τ is able to alter the convergence speed: if the right-hand side of Eq. (12) has a large value, the adjusting term drags it "down", and vice versa.

IV. PRELIMINARY EXPERIMENTS

We now undertake experiments using a synthetic data set, which consists of two partially overlaid labels (the data was produced at the University of British Columbia). The purpose is to discriminate these two data groups in the image. To demonstrate the performance of the proposed Bayesian CRF approach, other standard techniques are used for comparison: HMM, Logist, and Gauss models.

First of all, we train the systems sequentially: local classifiers are trained individually, and their outputs are illustrated in Fig.
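The Newton iteration of Eq. (7), with a scalar damping factor playing a role loosely analogous to the acceleration factor τ, can be sketched in one dimension; the objective function and the value of tau below are illustrative assumptions.

```python
# Sketch of the Newton update in Eq. (7) on a 1-D convex function, with an
# assumed scalar factor tau loosely analogous to the paper's adjusting term.

def newton_minimize(grad, hess, theta, tau=1.0, iters=20):
    """theta <- theta - tau * grad(theta) / hess(theta), iterated."""
    for _ in range(iters):
        theta = theta - tau * grad(theta) / hess(theta)
    return theta

# Minimize L(theta) = (theta - 3)^2 + 1: gradient 2(theta - 3), Hessian 2.
theta_star = newton_minimize(lambda t: 2.0 * (t - 3.0), lambda t: 2.0, theta=0.0)
print(round(theta_star, 6))  # 3.0
```

For a quadratic objective with tau = 1 the update reaches the minimizer in a single step, which is the quadratic-convergence behaviour the text refers to.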
1. Clearly, the HMMs and our new approach have the best performance in terms of convergence efficiency; in fact, our approach discriminates more robustly. After the systems have been trained, we apply the new approach to discriminate the pre-designed synthetic data; Fig. 2 shows the final segmentation obtained by specifying different


segmentation rates, e.g. 10%, 50%, and 90%.

V. CONCLUSIONS AND FUTURE WORK

We have presented a novel probabilistic model for labeling and discriminating different labels. The model is a combination of classical CRFs and probabilistic models: it looks at local data statistics, and using an improved EP algorithm the classifier renders the segmentation iteratively. The preliminary results show that this technique has performance similar to the established HMMs but much better than the other standard methods.

Our future work will focus on comprehensively testing the proposed framework. Most importantly, we will compare the convergent and asymptotic performance of the HMMs and the proposed approach (here with two patterns). Furthermore, a solid theoretical analysis of the asymptotic properties remains to be carried out. Last but not least, multiple patterns (more than 3) will be used to test the performance of the proposed approach. To achieve this, an extension of the existing Bayesian model must be able to accommodate multiple variates in the created model, representing the distributions of the different groups.

Fig. 1. Training results of four approaches on the synthetic data: HMM, the proposed CRF, logist, and Gauss models.

ACKNOWLEDGMENTS

The authors thank Dr Kevin Murphy at the University of British Columbia, Canada, for valuable discussion and considerate help.
We also thank the Wellcome Trust Sanger Institute, UK, for making this dissemination publicly available.

Fig. 2. Classification results on binary synthetic data using the proposed CRF technique.

REFERENCES

[1] John Lafferty, Andrew McCallum, and Fernando Pereira, "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data," in 2001 Proc. 18th Int. Conf. on Mach. Learn., CA, USA, pp. 282-289.
[2] S. Kumar, A.C. Loui, and M. Hebert, "An observation-constrained generative approach for probabilistic classification of image regions," Image and Vision Computing, vol. 21, pp. 87-97, 2003.
[3] Y. Qi, M. Szummer, and T.P. Minka, "Bayesian conditional random fields," in 2005 Proc. of AISTATS.
[4] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison, Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids, Cambridge University Press, UK, 1998.
[5] B. Settles, "Biomedical named entity recognition using conditional random fields and rich feature sets," in 2004 Proc. of NLPBA, Geneva.
[6] H.M. Wallach, "Conditional random fields: an introduction," Tech. Rep. MS-CIS-04-21, University of Pennsylvania, 2004.
[7] L. Bottou, "Une approche théorique de l'apprentissage connexionniste: Applications à la reconnaissance de la parole," Ph.D. thesis, Université de Paris XI, 1991.
[8] C. Sutton, K. Rohanimanesh, and A. McCallum, "Dynamic conditional random fields: factorized probabilistic models for labeling and segmenting sequence data," in Proc. of ICML, Banff, Canada, 2004.
[9] M. Mohri and M.A. Paskin, "Weighted finite-state transducers in speech recognition," Computer Speech and Language, vol. 16, pp. 69-88, 2002.
[10] T. Minka, "Expectation propagation for approximate Bayesian inference," in 2001 Proc. of Uncertainty in AI'01, pp. 362-369.
[11] J. Lafferty, X. Zhu, and Y. Liu, "Kernel conditional random fields: representation and clique selection," in 2004 Proc. of ICML.
[12] S.F. Chen and R. Rosenfeld, "A Gaussian prior for smoothing maximum entropy models," Tech. Rep. CMU-CS-99-108, CMU, USA, 1999.
[13] Y. Qi, M. Szummer, and T.P. Minka, "Diagram structure recognition by Bayesian conditional random fields," in 2005 Proc. of CVPR.
[14] T.P. Minka, "Power EP," http://www.research.microsoft.com/~minka/
[15] Y. Altun, A.J. Smola, and T. Hofmann, "Exponential families for conditional random fields," in Proc. of UAI 2004.


Detection of Heart Movement Manner Based on Hexagonal Image Structure

Xiangjian He, Wenjing Jia, Qiang Wu and Tom Hintz, Members, IEEE

Abstract - The most notable characteristic of the heart is its movement. Detection of dynamic information describing the heart's movement manner, such as the amplitude, speed and acceleration of heart movement, helps doctors judge treatment for the human heart. In previous years, the Omni-directional M-mode Echocardiography System (OMES) was proposed for mining heart movement information from a sequence of echocardiography image frames. OMES detects the heart movement manner through the construction and analysis of Position-Time Grey Waveform (PTGW) images at some feature points on the boundary of the heart ventricles. Image edge detection plays an important role in determining the feature boundary points and their moving directions, which are the basis for the extraction of PTGW images. Spiral Architecture (SA) has been proved efficient for image edge detection. SA is a hexagonal image structure in which an image is represented as a collection of hexagonal pixels. Two operations, Spiral addition and Spiral multiplication, are defined on SA, corresponding to an image translation and a rotation respectively. In this paper, we perform ventricle boundary detection based on SA using the chain codes defined. The gradient direction of each boundary point is determined at the same time. PTGW images at each boundary point are obtained through a series of Spiral additions according to the directions of the boundary points. Unlike the OMES system, our new approach is no longer affected by the translational movement of the human heart.
As a result, three curves representing the amplitude, speed and acceleration of the heart movement can easily be drawn from the PTGW images obtained. Our approach is more efficient and accurate than OMES, and our result contains more robust and complete information about the heart's moving manner.

Index Terms - Edge detection, hexagonal image structure, image analysis, Omni-directional M-mode Echocardiography System, Spiral Architecture

I. INTRODUCTION

SEQUENTIAL image frames captured in order by a camera contain not only information about the objects in each individual frame but also information describing the motion of the objects. The optical flow detection method [1], used to analyze the velocity field of each pixel in an image, has limitations when dealing with deformable objects. In order to overcome this problem and mine motion information of objects from sequential images, Lin et al [2] proposed the Omni-directional M-mode Echocardiography System (OMES). OMES is a system that builds a Position-Time function (called the Position-Time Grey Waveform, PTGW) for each pixel on the boundary of the heart ventricles. The PTGW extracts motion information of moving objects or moving characteristic points, such as edge points on images, in certain directions. The motion information contains moving velocity, acceleration and moving amplitude. The next position of a moving pixel (or point) on a directional line is determinate.

The movement of the heart ventricles is very complex; it is not simply a back and forth movement. Hence, it is difficult to determine the moving direction of each pixel on the boundary. There have been two schemes for setting the moving direction, as shown in [2]:
1. fixed direction setting, and
2. variable direction setting.
For the fixed direction setting, the direction lines are invariant across the frames of the sequential images. For the variable direction setting, the direction lines change according to the pixel location in every frame.

All authors are with the Computer Vision Research Group, University of Technology, Sydney, PO Box 123, Broadway 2007, NSW, Australia. Email: {sean,wejia,wuq,hintz}@it.uts.edu.au.

Figure 1. Omnidirectional PTGW of heart ventricles [2].

Lin et al in [2] demonstrated a method of fixed direction setting. Their method simply selects 12 direction lines ejected every 30° from a common central point, like a clock face (see Figure 1). The PTGW formed on each of the 12 direction lines depends completely on the location of the selected central point. This method does not work well when the movement is acentric or when the centre of the interested object is moving. An example of acentric movement is the
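The amplitude, speed and acceleration curves derived from a PTGW can be sketched with finite differences over the sampled boundary positions; the sample values and frame interval below are assumptions for illustration.

```python
# Hypothetical sketch: derive speed and acceleration curves from a sampled
# position-time waveform via finite differences, assuming a uniform frame rate.

def derivative(samples, dt):
    """Forward difference of a uniformly sampled signal."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

positions = [0.0, 1.0, 3.0, 6.0, 10.0]  # boundary displacement per frame (mm)
dt = 0.04                                # assumed 25 frames per second

speed = derivative(positions, dt)
accel = derivative(speed, dt)
print([round(v, 6) for v in speed])  # [25.0, 50.0, 75.0, 100.0]
print([round(v, 6) for v in accel])  # [625.0, 625.0, 625.0]
```

The amplitude curve is simply the position samples themselves, so all three curves come from the same waveform.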


movement of the heart ventricles, especially when the heart image is represented in short axis [2].

In this paper, we present another method, based on the Spiral Architecture (SA) [3], a relatively new image structure. The moving direction is determined dynamically: for each image frame, the direction at an edge pixel is uniquely determined to be the gradient direction of that edge pixel.

The rest of this paper is organized as follows. Section II briefly reviews the Spiral Architecture. A contour extraction technique on SA is introduced in Section III. This is followed by the construction of the PTGW for detecting the movement of heart ventricles in Section IV. We conclude in Section V.

II. SPIRAL ARCHITECTURE

Spiral Architecture [3] is inspired by anatomical considerations of primate vision. From research on the geometry of the cones in the primate retina, we can conclude that the cone distribution has an inherent organization and is characterized by potentially powerful computational abilities. On SA, an image is represented as a collection of hexagonal pixels. Each pixel is assigned an integer of base 7, starting from 0 (see Figure 2). The assigned integers are called the Spiral Addresses of the pixels. On Spiral Architecture, two algebraic operations have been defined: Spiral Addition and Spiral Multiplication [4]. These two operations can be used to define two transformations on SA, namely translation of an image and image rotation with scaling. Any movement from one pixel to another can easily be done through a Spiral Addition.
For example, in order to move the pixel with Spiral Address 4 to the pixel location with address 36, we can simply perform a Spiral Addition, 4 + 3, as described in [4].

Figure 2. 49 hexagonal pixels labeled by spiral addresses.

On SA, we can perform the extraction of object boundary points together with their gradient directions, as described in the next section.

III. CONTOUR EXTRACTION

The contour of an object contains relevant feature information about that object. It is well known that the contours of a 2-D object should be closed. Traditional contour detection algorithms suffer from the limitation [5] that a closed contour is not guaranteed. Many attempts have been made at the detection of closed contours; some examples are an algorithm that adapts a geometrical model [6], an algorithm in which contour closure requirements are set up [7], and the Enhanced Snakes Algorithm [5].

In this section, an approach to object contour extraction based on SA is presented [8]. In order to extract contours from pre-detected edge pixels, a pixel is defined as an object pixel if the pixel is on the object of interest (e.g., heart ventricles). An object pixel is called a contour pixel if and only if there exist at least two adjacent pixels and at least one non-object pixel next to it. As an illustration, in Figure 3, pixel A is a contour pixel. The definition of contour pixels guarantees that any contour pixel is on a closed contour, and that contour pixels form the closed contours of an object.

Figure 3. Contour pixel, object pixel and non-object pixel.

A contour tangent line is a line segment connecting adjacent contour pixels. The direction of the tangent line connecting contour pixels A and B is said to be from A to B if there is an object pixel next to A and B on the right-hand side of this oriented line.
For example, the direction of the segment AB in Figure 3 is from A to B, since the object pixel C is on the right-hand side of the tangent line. When the tangent line connecting contour pixels A and B has the direction from A to B, this direction is called a tangent direction of pixel A. A contour pixel has at least one tangent direction and at most two tangent directions, which are opposite. For example, in Figure 3, pixel A has tangent directions from A to B and from A to D.

In order to extract object contours, contour pixels are classified into six classes, each of which is given a class code, called a chain code number, from 0 to 5 as shown in Figure 4. Figure 4 also shows the tangent direction of each contour pixel, as indicated by arrows.

The chain code defined in Figure 4 implies that outermost contours have a clockwise direction. Based on the chain code, the method for contour extraction can be described as follows.

1. Determine the object mask. The object mask consists of selected object pixels from which the object contours can be extracted.

2. Edge detection. Choose an edge detector to obtain an edge map of the sample image.


3. Get initial contour pixels. Initial contour pixels are obtained according to the definition of contour pixels, as illustrated in Figure 3.

4. Include more contour pixels. Any contour pixel has a direction, as shown in Figure 3, and should be connected to an adjacent pixel pointed to by this direction. We include these adjacent pixels and record them as contour pixels.

5. Get remaining contour pixels. We repeat the previous step until no more contour pixels can be found.

6. Find the outermost contours. The outermost contours are obtained from the fact that the direction of these contours is clockwise.

Figure 4. Chain code of different contour points.

Figure 5. An ultrasound image of a heart showing a cluster of bright pixels.

IV. CONSTRUCTION OF PTGW

We now come to the stages of constructing the Position-Time Grey Waveform for the ultrasound images of heart ventricles. We follow the steps shown in the previous section for contour extraction of the ventricles. The object mask is obtained as follows.

The movement of the ventricle wall is caused by the blood force and the bouncing force from the cardiac muscle. When the incident ultrasound wave hits the interface of the blood and the ventricle wall of the heart, the echo wave forms a grey-level image, in which the light intensity at every pixel corresponds to the acoustical impedance. Because the echo is very strong around the area close to the interface of the blood and the ventricle wall, the ultrasound image shows a cluster of bright pixels in this area (see, for example, Figure 5). These bright pixels are collected to form the object mask of the heart ventricles.

After the object mask is obtained, the contour of each ventricle is extracted.
The moving direction of each contour pixel is also determined; it is perpendicular to the tangent direction at that pixel. The following steps show how the PTGW is constructed.

1. Choose a set of contour pixels on the first image of a sequence of image frames. For example, we can choose one contour pixel from every certain number (e.g., 10) of consecutive contour pixels. We call these pixels reference pixels.

2. Record the positions of these reference pixels. Corresponding to each reference pixel, create a Position-Time Graph (PTG). Each PTG has time as its x-axis and position as its y-axis. The position is the distance between the reference pixel and the contour pixel (target pixel) in the moving direction of the reference pixel. The value representing the position (or the distance) can be either positive or negative, depending on whether the target pixel is in the positive or negative gradient direction relative to the reference pixel. The position value of the reference pixel is 0. Plot this position on the PTG.

3. Move to the next image in the sequence. Starting from the reference pixel location, search along the moving direction of the reference pixel for the first contour pixel (a target pixel). Moving from one pixel to another along a given direction can be easily done through a Spiral Addition. Compute the position value of this target pixel and plot it on the PTG.

4. Move to the next image in the sequence. Starting from the location of the target pixel just found, search along the moving direction of this target pixel for the first contour pixel (a new target pixel). Compute the position value of this new target pixel and plot it on the PTG.

5. Repeat the previous step until there is no image left in the sequence.
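The tracking loop in steps 3-5 can be sketched as follows. This is a minimal illustration on a hexagonal grid in axial coordinates (an assumption; the paper performs the moves with Spiral Additions), where each frame supplies its set of contour pixels and the search walks along a fixed direction from the previous target:

```python
# Six moving directions on a hexagonal grid, axial coordinates.
HEX_DIRS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]


def find_target(contour, start, direction, max_steps=50):
    """Walk from `start` along +/- `direction` and return the first contour
    pixel found together with its signed step distance (0 if start is on it)."""
    dq, dr = HEX_DIRS[direction]
    for step in range(max_steps + 1):
        for sign in (1, -1):
            p = (start[0] + sign * step * dq, start[1] + sign * step * dr)
            if p in contour:
                return p, sign * step
    return None, None


def build_ptg(frames, reference, direction):
    """Positions of the moving contour relative to `reference`, one per frame
    (steps 3-5 above: each frame starts from the previous target pixel)."""
    positions, current, offset = [], reference, 0
    for contour in frames:
        nxt, d = find_target(contour, current, direction)
        if nxt is None:          # contour left the search range
            break
        current = nxt
        offset += d              # accumulate signed displacement
        positions.append(offset)
    return positions
```

Plotting `positions` against the frame index gives the PTG; recording the grey level of each target pixel alongside would turn it into the PTGW described in the next section.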


While constructing the PTG for each reference pixel, if we also record the grey levels of the reference pixel and all the target pixels, and display the PTG as a grey picture using these grey levels, then the PTG becomes a PTGW.

Furthermore, if we also record the moving directions of the reference pixel and the target pixels and represent them using the six integers¹ from 0 to 5, then we can construct another graph that shows the relationship between time and moving direction. We may call this graph a Direction-Time Graph (DTG).

Figures 6 and 7 give examples of a PTG and a DTG respectively.

As a result, three curves representing the amplitude, speed and acceleration of heart movement respectively can easily be drawn from the PTGW/PTG images obtained. Meanwhile, the DTG images clearly show the changes of the boundary moving directions of the heart ventricles.

Figure 6. An example of a Position-Time Graph.

Figure 7. An example of a Direction-Time Graph.

V. CONCLUSION

In this paper, we have presented a ventricle boundary detection scheme based on SA using the chain codes defined above. The gradient (moving) direction of each boundary pixel is determined at the same time. PTGW, PTG and DTG images at each boundary point can be obtained simultaneously through a series of Spiral Additions according to the directions of the boundary pixels. Unlike the OMES system, our new approach no longer requires the determination of a central pixel related to the boundary (or contour). Our approach is a new method for setting dynamic and variable direction lines, and it is no longer affected by the translational movement of the human heart. As a result, three curves representing the amplitude, speed and acceleration of heart movement can easily be drawn from the PTGW/PTG images obtained. Our approach also shows the changes of the moving directions over time from the DTG. Our approach is more efficient and accurate than the OMES approach of [2], and our result contains more robust and complete information about the manner of heart movement.

The movement of the heart at the interface of the blood and the ventricle wall directly reflects the movement of the ventricle wall. This movement is caused by the blood force and the bouncing force of the wall due to the blood compressing it. Hence, the moving information described using PTG/PTGW and DTG provides important hemodynamic information for clinical diagnosis.

ACKNOWLEDGMENT

This paper is partly supported by the Readership Allowance of the Faculty of Information Technology, University of Technology, Sydney.

REFERENCES

[1] B. K. Horn and B. G. Schunck, Determining Optical Flow, Artificial Intelligence, Vol. 17, pp. 185-204, 1981.
[2] Lin Qiang, Jia Wenjing, and Yang Xiuzhi, A Method for Mining Data of Sequential Images: A Rebuilding of Gray (Position)~Time Function on Arbitrary Direction Lines, Proc. of International Conference on Imaging Science, Systems, and Technology (CISST'02), Las Vegas, pp. 3-6, June 2002.
[3] Xiangjian He, Tom Hintz, and Ury Szewcow, Replicated Shared Object Model for Parallel Edge Detection Algorithm Based on Spiral Architecture, Future Generation Computer Systems Journal, Elsevier, Vol. 14, pp. 341-350, 1998.
[4] Qiang Wu, Xiangjian He, and Tom Hintz, Distributed Image Processing on Spiral Architecture, Proceedings of the 5th International Conference on Algorithms and Architectures for Parallel Processing (IEEE), Beijing, pp. 84-91, October 2002.
[5] P. C. Yuen, Y. Y. Wong, and C. S. Tong, Enhanced snakes algorithm for contour detection, Proc. IEEE Southwest Symposium on Image Analysis and Interpretation, San Antonio, Texas, pp. 54-59, 1996.
[6] Giancarlo Iannizzotto and Lorenzo Vita, A fast, accurate method to segment and retrieve object contours in real images, Proc. IEEE International Conference on Image Processing, Lausanne, Switzerland, pp. 841-843, 1996.
[7] A. Nabout and H. A. Nour Eldin, The topological contour closure requirement for object extraction from 2D-digital images, Proc. IEEE International Conference on Systems, Man, and Cybernetics, Le Touquet, France, pp. 120-125, 1993.
[8] Xiangjian He and Tom Hintz, A Parallel Algorithm for Object Contour Extraction within a Spiral Architecture, Proc. International Conference on Parallel and Distributed Processing Techniques and Applications, Las Vegas, pp. 1462-1468, 1999.

¹ Note that there are only six different moving directions on Spiral Architecture, as displayed in Figure 3.


The Extraction and Visualization of the ROI of PET Brain Functional Images

Xiaopeng Han and Shuqian Luo*
Capital University of Medical Sciences, Beijing, China

Abstract - PET (Positron Emission Computed Tomography) brain functional images have been widely applied in clinical diagnosis and medical brain image analysis because of their ability to show the brain's metabolic level. In order to extract and display the ROI (Regions of Interest) of PET brain functional images, we use SPM (Statistical Parametric Mapping) software to preprocess the PET brain images and build a statistical model to extract their ROI. We use surface rendering (the Marching Cubes method) to visualize the ROI and calculate their volume. We have applied the methods to the PET brain images of 6 normal subjects undergoing acupuncture of the Zu-san-li point and obtained satisfactory results, which coincide with the traditional curative theory about the Zu-san-li point. The methods not only widen the horizon of research on traditional Chinese medicine but also provide a display approach for multi-modality images.

Key words - Segmentation, Surface-rendering, ROI, Visualization, SPM

I. Introduction

In modern medical image diagnosis, more and more researchers are paying attention to methods for extracting and displaying the ROI of PET brain images, because these images can show the metabolic level of the brain, which is correlated with many diseases. SPM is popular software for PET brain image analysis; it was designed by Karl J. Friston et al. and is regarded as a standard method in PET functional analysis research [1][2].
* The asterisk indicates the corresponding author; e-mail: sqluo@ieee.org

In this paper, we use SPM software to preprocess the PET brain images and build a statistical model to extract their ROI, and we use a 3D surface-rendering technique (the Marching Cubes method) to visualize the ROI and calculate their volume. We have applied these methods to research on the traditional Chinese meridian and the acupuncture point Zu San Li, trying to find the relations between the Zu San Li acupuncture point and the functional regions of the human brain, to explain its treatment theory, to find more unknown therapeutic relationships, and to provide a new approach for research on traditional Chinese medicine.

II. METHOD

A. SPM method

1) Preprocessing: Because of differences in the acquisition conditions of PET images, the spatial coordinates of the same brain position are not the same across images. Firstly, we need to co-register and re-slice the original images. Secondly, we extract statistical regions of interest and transform the images from the old coordinates to normal coordinates according to the experimental models; this is called 'normalization'. Then we eliminate noise to obtain a better signal-to-noise ratio; we also use a Gaussian kernel to smooth the images.

2) Establishing statistical models: In our test, we apply a paired t-test design, so we select the corresponding statistical models.

3) Displaying and exporting ROI: According to the statistical parametric model, we acquired a


parametric matrix and the ROI by SPM. The ROI is displayed more brightly in the anatomical brain images. They can be shown in 2D slice sections (sagittal, coronal and axial) or in eight 3D directions [3] (upper, basal, left, right, frontal, back, inner-cortex and outer-cortex of the brain). In order to show the ROI better, we export the results for visualization.

B. 3D visualization

There are two types of 3D visualization method. One is volume rendering; the other is surface rendering. We use the latter because of its speed, and realize it with the Marching Cubes algorithm [4]. For better observation of the spatial position of the ROI, we extract the white matter, grey matter and ventricles from the normalized images. In order to show covered regions, we implemented 3D rotation, translation, incision and semi-transparent views.

C. Calculation of ROI

We count the number of voxels in the ROI to obtain its volume, because each voxel has a 3D volume. When interpolation is used, we can obtain a more exact volume of the ROI.

III. RESULTS

A. Experiment design

We scanned the PET brain images of 7 young, healthy subjects twice, one week apart. Before each scan, the subjects relaxed quietly for 30 minutes, and then they were injected with 4 mCi of 18FDG. Forty minutes later, we scanned them. In the first scan we used sham acupuncture. In the second scan we applied acupuncture to the right-side Zu San Li point before the injection, and the acupuncture lasted for 30 minutes [5].

Fig. 1. The 3D 8-direction views of the ROI by the SPM method. (The green region is the enhanced excited region.
The red region is the suppressive region.)

B. Test results

1) Filtration of the data: Because one collected data set was invalid, we selected the other six data sets.

2) The results of the SPM method: Firstly, we co-register, re-slice, normalize and smooth the original images. Secondly, we apply a paired t-test statistical method and then acquire the ROI, which are displayed in Fig. 1. The results show enhanced excited regions in the gyrus precentralis, the gyrus postcentralis, Brodmann areas 41, 42 and 22 in the gyrus temporalis superior, Brodmann areas 17, 18 and 19 in the lobus occipitalis, the inferior colliculus and the cerebellum. There are also some suppressed excited regions, such as Brodmann area 19 of the left lobus occipitalis, Brodmann area 11 in the lobus frontalis, and Brodmann areas 38 and 20 in the lobus temporalis.

3) Visualization: We use the surface rendering method to visualize the 2D data sets of the ROI. The results are shown below (Fig. 2 to Fig. 5).

Fig. 2. The 3D surface model of grey matter.
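Two of the quantitative steps above can be sketched in a few lines: the paired t statistic underlying the statistical design (Section II.A.2), and the voxel-counting volume estimate (Section II.C). This is an illustrative sketch, not the SPM implementation; the voxel size is an assumed parameter:

```python
import math


def paired_t(xs, ys):
    """Paired t statistic for two matched samples (e.g. baseline vs.
    acupuncture scans of the same subjects)."""
    assert len(xs) == len(ys) and len(xs) > 1
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)


def roi_volume_ml(mask, voxel_mm3):
    """Volume of a binary 3D ROI mask in millilitres: number of ROI voxels
    times the volume of one voxel (1 ml = 1000 mm^3)."""
    count = sum(sum(sum(row) for row in plane) for plane in mask)
    return count * voxel_mm3 / 1000.0
```

In SPM the t statistic is computed per voxel and thresholded to yield the ROI; the voxel count of the thresholded map then gives the volumes reported in the results section.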


Fig. 3. The 3D surface model of the ROI after cutting off some parts of the grey matter.

Fig. 4. The 3D model of the ROI.

Fig. 5. The 3D model of the ROI after cutting off some grey matter and white matter.

4) Calculation of ROI: The volume of the enhanced excited ROI is 10.76 ml and its surface area is 6725.21 mm². The volume of the suppressed excited ROI is 11.46 ml and its surface area is 7149.36 mm².

IV. CONCLUSION AND DISCUSSION

SPM provides strong functional analysis for PET brain image research, but it is deficient in the 3D display of the ROI. A 3D visualization method combined with the SPM software enables us to view the spatial positions of the ROI and their relationships more clearly.

In our test, the 3D models looked relatively coarse for two reasons. One was that we had not acquired MRI images of the subjects, and the PET images have very low spatial resolution. The other was that the normalized images also had lower resolution (79*95*68) after spatial preprocessing (the original PET images' resolution is 128*128*63). How to improve the display resolution and determine a more exact orientation of the ROI is our next task.

The experiment results validate the medical theory that the gyrus precentralis and gyrus postcentralis are correlated with impulses coming from the body [6]. One problem remains puzzling to us: according to the classical theory, the primary sensory nerve centre is correlated with impulses coming from the opposite side of the body, but the results of the right-side Zu San Li acupuncture show that the excited intensity of the left-side area is weaker than that of the right-side area in the enhanced excited regions of the gyrus precentralis and gyrus postcentralis.
We do not know the reason yet.

The enhanced excited regions in the oblongata and lobus temporalis show that there are some relations between Zu-san-li acupuncture and the neural regulation of the peristalsis of the stomach and intestines. This is consistent with the traditional Chinese medical theory that acupuncture of the Zu San Li point can cure sicknesses of the stomach. Furthermore, we found that the enhanced excited ROI and the suppressive excited ROI appeared at the same time. This shows the variety of the therapy associated with Zu San Li acupuncture. In future work, the relationship between the Zu San Li point and diseases will be studied with more new Chinese medical findings. These researches could be useful in exploring the therapy theory of Traditional Chinese Medicine.

ACKNOWLEDGMENT

We really appreciate Professor Ling Yin of the PLA General Hospital for providing us with the PET data. We also thank Dr. Huizhi Cao for his cooperative work and useful discussion. This project is supported by the National Nature Science Foundation of China (30270401) and the Beijing Nature Science Foundation (3022004).


REFERENCES

[1] Satoshi Minoshima, Robert A., et al. Anatomic standardization: linear scaling and nonlinear warping of functional brain images [J]. J Nucl Med, 1998, 35(9), pp. 1528-1537.
[2] In Young Hyun, Jae Sung Lee, Joung Ho Rha, et al. Different uptake of 99mTc-ECD and 99mTc-HMPAO in the same brains: analysis by statistical parametric mapping [J]. Eur J Nucl Med, 2001, 28(2), pp. 191-197.
[3] Zhang Hai-min, Chen Shengzu. A new method of brain functional images: Statistical Parametric Mapping (SPM) [J]. Beijing: Chinese Journal of Medical Imaging Technology, 2002, 18, pp. 711-713.
[4] Tang Ze-sheng et al. Three-dimensional data visualization [M]. Beijing: Tsinghua University Press, 1999, pp. 89-90.
[5] Yin Ling et al. The principium research of brain functional images about acupuncture [C]. Beijing: Neuroinformatics Letters, 2002, pp. 55-57.
[6] Tang Zu-wu. Central nervous system anatomy [M]. Shanghai: Shanghai Science and Technology Press, 1986, pp. 243-249.


Application of Fuzzy Rough Set in Medical Image Registration

WEI Jianming¹, ZHANG Jianguo², WANG Baohua¹
¹ Shanghai University, P. R. China.
² Shanghai Institute of Technical Physics, Chinese Academy of Sciences, P. R. China.

Abstract

In this paper, we present a theory of fuzzy rough sets suitable for application to medical image registration. The proposed method avoids direct correlation of the two images; it registers the two images based on their contours, extracted by fuzzy rough theory. We first extract the contours in the images by fuzzy rough theory and encode them into chain codes, and then compute the cross-correlation to find the correct registration point. The validity of this method is demonstrated by the fused results of the experiment and their evaluation.

Keywords: Medical image registration, fuzzy set, rough set, fuzzy rough set.

1. Introduction

Registration is very important and difficult for multi-source medical image fusion because of differences in grey level and resolution. Although many methods for medical image registration have been developed so far, many problems still remain [1][2].

In general, registration algorithms based on image contours adopt the centre of mass or corners as control points for registration; but because of imaging differences, the contours formed by the same object may not be uniform, so the registration accuracy of the conventional methods is necessarily affected [3]-[6].

This paper presents a new and effective contour-based registration approach for multi-source image fusion, making use of fuzzy rough sets.

2. The proposed method

The following details show the application of contour extraction based on fuzzy rough sets.

Definition 1: A fuzzy set A on the domain U is represented by a membership function on it:

μ_A(u): U → [0,1],  (1)

where μ_A(u) indicates the membership degree of u in U. A fuzzy set is written as μ_A(u) or A(u) for short.

Definition 2: Assume that (U, R) is a Pawlak approximation space, i.e. R is an equivalence relation on U. If A is a fuzzy set, the lower approximation A_{-R} and the upper approximation A^{-R} of A with respect to the knowledge base (U, R) are defined as a pair of fuzzy sets whose membership functions are, respectively,

A_{-R}(x) = inf{ A(y) | y ∈ [x]_R }, x ∈ U,  (2)
A^{-R}(x) = sup{ A(y) | y ∈ [x]_R }, x ∈ U,  (3)

where [x]_R is the equivalence class of


x under R. If A_{-R} = A^{-R}, then A is definable; otherwise A is called a fuzzy rough set. With respect to (U, R), A_{-R} is the positive region, ~A^{-R} is the negative region, and A^{-R} ∩ (~A_{-R}) is the boundary of A.

In order to obtain the precise boundary of a grey image, the colour image should first be converted into a grey image and then processed for noise cancellation and smoothing. The image can then be considered as an image Pawlak approximation space consisting of the image I and the equivalence relation R. For any pixel x of image I, A(x) represents the membership degree of x to the boundary of the image, and X represents the boundary of image I. The detailed definition is as follows: if the membership degrees of two pixels both lie within the selected boundary parameters, the two pixels belong to the same equivalence class, i.e.

A_{-R}(x) = inf{ A(y) | A(y) ≥ c, y ∈ [x]_R }, x ∈ U,  (4)
A^{-R}(x) = sup{ A(y) | A(y) ≥ c, y ∈ [x]_R }, x ∈ U,  (5)
X = A^{-R} ∩ (~A_{-R})  (6)

is the boundary of image I, where c is the boundary gradient.

According to the method depicted above, we can obtain the means for calculating the image boundary. Assume that the grey scale of any pixel x = (i, j) of image I is h(i, j), the 3×3 structuring element of the sweep-window is shown in Figure 1, and the gradient blocks consisting of sweep-windows are shown in Figure 2.

Figure 1. 3×3 structuring element of the sweep-window.

Figure 2. Gradient blocks consisting of sweep-windows.

Then:

H(i) = Σ_{j=0..8} h(i, j), i = 0..8,  (7)

Tolerance1 = (1/4) Σ_{i=1..4} ( h(2i) − H(0) ),  (8)
Tolerance2 = (1/4) Σ_{i=1..4} ( h(2i−1) − H(0) ),  (9)
Tolerance3 = (1/8) Σ_{i=1..8} ( h(i) − H(0) ).  (10)

Therefore, the boundary X of image I is

X = { x = (i, j) | h(i, j) ≤ Tolerance1, h(i, j) ≤ Tolerance2, h(i, j) ≤ Tolerance3 }.  (11)

Assume the grey scale of a contour pixel is 1; then the contour of the image is just the connecting path made up of these contour pixels. The path can be considered to be composed of line segments connecting adjacent pixels, and each line segment has a direction. When the whole contour is covered in the clockwise direction, the directions along the contour chain are coded in a certain mode. Here we adopt eight-connected chain codes, i.e. each code 0~7 corresponds to a direction in steps of


45 degrees; for example, a 90-degree direction corresponds to code 2. A transforming and sliding operation is then applied.

Suppose that the contours A and B are represented by two chain code sequences {a_i} and {b_i} respectively. The correlation D_kl between the length-n sub-sequence starting from k on contour A and the length-n sub-sequence starting from l on contour B can be defined as follows:

D_kl = (1/n) Σ_{j=0..n−1} cos[ (a'_{k+j} − b'_{l+j}) π/4 ],  (12)

where

a'_{k+j} = a_{(k+j) mod N_A} − (1/n) Σ_{i=0..n−1} a_{(k+i) mod N_A}, 0 ≤ j < n,
b'_{l+j} = b_{(l+j) mod N_B} − (1/n) Σ_{i=0..n−1} b_{(l+i) mod N_B}, 0 ≤ j < n.

In order to seek the optimum matching position between A and B, the segment of length n starting from the point k of contour A slides along contour B, and the maximum correlation is determined by the following expression:

C_AB = max_{l ∈ M} { D_kl },  (13)

where M indicates the whole range of sliding.

3. Data used and experimental design

For the experiment, we chose an MRI image and a PET image provided by the Digital Medical Research Center of Fudan University (shown in Figure 3). The registration of the MRI image and the PET image was carried out by the proposed approach, and the final fusion was realized by a wavelet decomposition algorithm.

(a) Original MRI image (b) Original PET image
Figure 3. Original MRI image and PET image.

The registered result is shown in Figure 4 and the final fused result is shown in Figure 5.

Figure 4. Final result of registration.

Figure 5. Final result of fusion.

For evaluation of the registered and fused results, a comparison of the fusion errors of different methods is shown in Table 1, from which we can see that the proposed method outperforms the others:

w = Σ_{i,j} ( R(i, j) − D(i, j) )² / (N×N),  (14)

where w is the fusion error, R is the optimal fused result, and D is the actual fused result.

Table 1. Fusion error comparison of the fused images by contour registration obtained from different methods.

Fuzzy rough set (proposed): 1.8719 | Wavelet analysis: 1.9713 | CT clustering: 2.8553 | Iteration: 5.3076

4. Results and interpretation
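Equations (12)-(13) can be sketched directly. The snippet below is an illustrative implementation (the function and variable names are ours) that mean-centres the two chain-code windows and slides one along the other to find the best match:

```python
import math


def correlation(a, b, k, l, n):
    """D_kl of eq. (12): mean-centred cosine correlation between the
    length-n chain-code windows of contours a (from k) and b (from l)."""
    aw = [a[(k + j) % len(a)] for j in range(n)]
    bw = [b[(l + j) % len(b)] for j in range(n)]
    am = sum(aw) / n                       # window means (the a', b' terms)
    bm = sum(bw) / n
    return sum(math.cos(((x - am) - (y - bm)) * math.pi / 4)
               for x, y in zip(aw, bw)) / n


def best_match(a, b, k, n):
    """C_AB of eq. (13): slide the window along contour b and return the
    (best correlation, best offset l) pair."""
    scores = [(correlation(a, b, k, l, n), l) for l in range(len(b))]
    return max(scores)
```

A perfect alignment yields D_kl = 1, since every centred code difference is zero and cos(0) = 1; any mismatch lowers the score, so the maximum over l locates the registration offset.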


5. Conclusions

We have described a theory of fuzzy rough sets that is suitable for application to image registration. First we extracted the overall contours in the images according to the proposed method, then computed the correlation based on their chain codes to find the correct registration point; finally, the experiment and evaluation were carried out. The results show that the proposed method is feasible and valid.

Acknowledgement

This work is supported by a key project of the Shanghai Science and Technology Committee, P. R. China (No. 03DZ19709).

References

[1] Vincent Barra, Jean-Yves Boire, A General Framework for the Fusion of Anatomical and Functional Medical Images, NeuroImage, 13 (2001): 410-424.
[2] Zhao-li Zhang, Sheng-he Sun, Fu-chun Zheng, Image Fusion Based on Median Filters and SOFM Neural Networks: a Three-step Scheme, Signal Processing, 81 (2001): 1325-1330.
[3] D. Dubois, Twofold Fuzzy Sets and Rough Sets - Some Issues in Knowledge Representation, Fuzzy Sets and Systems, 23 (1987): 3-18.
[4] Didier Demigny, An Optimal Linear Filtering for Edge Detection, IEEE Transactions on Image Processing, 11 (2002): 728-737.
[5] Bloch, I., Maitre, H., Data Fusion in 2D and 3D Image Processing: an Overview, Computer Graphics and Image Processing, 14 (1997): 127-134.
[6] S. Banerjee and D. D. Majumdar, Shape Matching in Multimodal Medical Images Using Point Landmarks with Hopfield Net, Neurocomputing, 30 (2000): 103-106.


Registration of Brain EIT Images with an Anatomical Brain Model

Yan Zhang¹, Peter J. Passmore¹, Richard H. Bayford²
¹ School of Computer Science, Middlesex University, London, UK, N17 8HR
² School of Health and Social Sciences, Middlesex University, London, UK, EN3 4SA
(y.zhang@mdx.ac.uk)

Abstract - Medical images are increasingly being used in healthcare and medical research. Different imaging modalities bring complementary information that can be advantageously used to establish a diagnosis or assist the clinician in therapeutic intervention. Electrical Impedance Tomography (EIT) is a relatively new functional medical imaging method, by which impedance measurements from the surface of an object are reconstructed into impedance images. It is fast, portable, inexpensive, and non-invasive, but has a relatively low spatial resolution, and little anatomical information is included in EIT images. Registering brain EIT images with a standard anatomical brain model links physiological information detected by EIT with anatomical information obtained by other imaging modalities. In this paper, we propose a scheme, based on the information included in the EIT measurement and image reconstruction process, to achieve this registration. Our prototype system has been developed in Visual C++ using ITK (Insight Toolkit), an open-source software toolkit for performing registration and segmentation, and some preliminary results are shown.

1. BACKGROUND

Medical images are increasingly being used in healthcare and medical research.
Different imaging modalities bring complementary information that can be used advantageously to establish a diagnosis or assist the clinician in a therapeutic intervention. Generally, medical image modalities can be divided into two groups: anatomical and functional. Anatomical modalities provide information about primary morphology; they include X-ray, CT (Computed Tomography), MRI (Magnetic Resonance Imaging) and Ultrasound. Functional modalities reveal information about the metabolism of the underlying anatomy; they include SPECT (Single Photon Emission Computed Tomography), PET (Positron Emission Tomography), fMRI (functional MRI), EEG (Electroencephalography), MEG (Magnetoencephalography), EIT (Electrical Impedance Tomography), etc.

EIT is a relatively new medical imaging method. The physiological basis of EIT is that different tissues have different impedances. EIT imaging exploits this property by injecting a small current through sensors encompassing the area to be imaged.
EIT imaging of the brain provides neuroimages by detecting functional impedance changes in the brain caused by three main mechanisms: a) cells outrun their energy supply and so swell, which causes the tissue impedance to rise by tens of percent over minutes [1]; b) blood volume and flow increase during normal functional activity, which increases the local brain impedance by a few percent over minutes [2]; c) during neuronal depolarisation, ion channels open in the dendritic membrane, causing its resistance to decrease by a few percent over tens of milliseconds [3].

Currently, EIT is not in routine biomedical use for any purpose; studies using EIT to measure functional brain activity have been done on stroke [1], cortical spreading depression [4], visual evoked responses [2] and epilepsy [5]. EIT imaging is cheap, safe and portable. Compared with other functional imaging approaches, EIT has higher temporal resolution than SPECT, PET and fMRI, and similar spatio-temporal resolution to EEG and MEG. Considering the non-uniqueness in the reconstruction of EEG and MEG, EIT has the advantage of being uniquely reconstructable [6].

The first generation of EIT imaging measures impedance changes over a few seconds or minutes using a current applied at a single frequency of about 50 kHz [7]. Because


different tissues have different spectral properties, EIT may also be performed at multiple frequencies at the same time. This has the advantage that tissue can be characterized much better. Advanced EIT hardware is able to measure 30 frequencies simultaneously [8]. Using multi-frequency EIT hardware to trace impedance changes over time results in 5D EIT image data: three dimensions for space, one for time and one for frequency.

2. INTRODUCTION

Our research focuses on the visualization of 5D EIT image data to provide an efficient presentation and interpretation of the information included in an EIT dataset. Because of the poor spatial resolution, little anatomical information is included in an EIT dataset. Clinicians usually have abundant knowledge about human morphology, so the understanding of EIT images could be enhanced by visualizing them in an anatomical context. As an important part of our research work, this paper aims to provide a method to register brain EIT images to a standard anatomical brain.

Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors [9]. The need to register images has arisen in many practical problems in diverse fields, and medical image registration is one of the major research areas within it. According to the modalities involved, medical image registration can be categorized into four types [10]: mono-modal registration, where the images to be registered belong to the same modality; multi-modal registration, where the images to be registered stem from two different modalities; and modality-to-model and patient-to-modality registration.
In the last two kinds of registration, only one image is involved and the other "modality" is a model or the patient himself. Registering brain EIT images to an anatomical brain model is a modality-to-model application. On the other hand, because the anatomical model used is not produced by EIT imaging, it is also a multi-modal application.

Features used in medical image registration can be extrinsic or intrinsic. Extrinsic features rely on artificial objects attached to the patient. Intrinsic features can be a limited set of identified salient points (landmarks), segmented structures (segmentation based), or the image intensities used directly (voxel property based). Because little anatomical information is included in EIT images, it is almost impossible to extract any accurate intrinsic feature from them for the registration. Considering that, in the EIT measurement, some landmark positions are accurately recorded for image reconstruction, and that the electrode positions are based on a modified 10-20 system of EEG electrode placement [11], we will use these two sets of fiducial positions as extrinsic features for EIT images.

Transformation methods used in image registration generally fall into three classes: rigid-body transformation, linear/affine transformation, and non-affine transformation. Most medical image registration algorithms assume that the transformation is "rigid body" [12]. For inter-subject registration, however, affine transformation is widely used to provide an alignment of different subjects. Non-affine transformation includes more degrees of freedom than affine and rigid-body transformations. It is supposed to provide a more accurate registration for both intra-subject and inter-subject registration, but it usually needs more computing resources and the registration quality is not always better than that of the other two.

3.
METHOD

3.1 Feature location in EIT images

Currently, EIT imaging of the brain is conducted with 31 electrodes attached to the scalp (see Figure 1). Current is applied through two opposite electrodes, and voltage is measured from another pair of electrodes. Impedance measurements are made at different electrode combinations for each image. 27 electrodes are located according to the international 10-20 system of EEG electrode placement. The 10-20 system is based on the position of four anatomical landmarks: nasion, inion, left preauricular point and right preauricular point. Four additional electrodes are also used: two are placed on the mastoid bones behind each ear, and the other two are placed over the base of the occiput.
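The 10-20 system places electrodes at fixed percentages of the arc lengths between the anatomical landmarks. As an illustrative sketch only, under the simplifying assumption of a semicircular nasion-inion arc in the sagittal plane (real systems measure the actual scalp, and the function name and electrode list are ours, not the authors'), the midline positions can be computed as:

```python
import numpy as np

def midline_1020_positions(radius=1.0):
    """Midline electrode positions under an idealized 10-20 rule.

    The 10-20 system places the midline electrodes Fpz, Fz, Cz, Pz
    and Oz at 10%, 30%, 50%, 70% and 90% of the nasion-inion arc.
    Here that arc is modelled as a semicircle of the given radius:
    angle 0 corresponds to the nasion and pi to the inion.
    """
    names = ["Fpz", "Fz", "Cz", "Pz", "Oz"]
    fractions = [0.10, 0.30, 0.50, 0.70, 0.90]
    # Position on the arc = fraction of the semicircle's angle.
    return {n: (radius * np.cos(np.pi * f), radius * np.sin(np.pi * f))
            for n, f in zip(names, fractions)}
```

On this model Cz sits at the vertex of the head, with Fpz and Oz near the nasion and inion respectively, which mirrors how the remaining lateral electrodes are placed along left-right and circumferential arcs.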


Figure 1: EIT electrode positions. Viewed from above the head. Those taken from the International 10-20 system are labelled according to that system. Four modified positions are labelled 1-4 separately. (From: [2])

3.2 Montreal anatomical model of the normal brain

The Montreal BrainWeb reference dataset was developed by the McConnell Brain Imaging Centre (BIC) at the Montreal Neurological Institute [13]. BrainWeb makes available to the neuroimaging community, online on the WWW, a set of realistic simulated brain MR image volumes. It has been widely used as standard reference data ("ground truth") for anatomical brain mapping and quantitative brain image analysis methods.

Currently, BrainWeb contains simulated brain MRI data based on two anatomical models: a normal one and one with multiple sclerosis (MS). For both of these, full 3-D data volumes have been simulated using three sequences (T1-, T2- and proton-density- (PD-) weighted) and a variety of slice thicknesses, noise levels and levels of intensity non-uniformity. We downloaded a data volume from the normal brain dataset to be our anatomical brain model for the EIT registration (with parameters set as Modality=T1, INU=20%, Noise=3%, Phantom_name=normal, Slice_thickness=1mm, Protocol=ICBM).

3.3 Feature location in the brain model

Corresponding to the features chosen in EIT images, two sets of features need to be located in the anatomical brain model: positions for the four landmarks and positions matching the electrodes. The positions matching the electrodes used in the EIT measurement will also be calculated according to the rules of the 10-20 system.

3.4 Fiducial-based registration

At present, the EIT image is reconstructed using a 3D reconstruction algorithm based on a standard head mesh. The positions of the four anatomical landmarks are accurately located in the head mesh, so these positions can also be accurately identified in the EIT image data. However, positions for the 31 electrodes are not recorded during measurement, so these positions in EIT images will be calculated according to the rules defined in the 10-20 system.

Figure 2: Scheme for fiducial-based registration

Based on the registration features selected above, a fiducial-based registration scheme is proposed for the registration of brain EIT images to an anatomical brain model (see Figure 2). Two levels of registration are included in this scheme. Initially, the four landmarks are located in both the EIT image and the anatomical model; registration is then achieved by transforming the EIT image with a seven-parameter affine transformation: translation, rotation and isotropic scaling. After that, positions for the 31 electrodes are calculated, and further registration is achieved with a nine-parameter affine transformation: translation, rotation and anisotropic scaling.
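The first, seven-parameter level of this scheme (three rotation parameters, three translations, one isotropic scale) can be estimated in closed form from the four landmark pairs. A minimal sketch, assuming Umeyama's least-squares method rather than the authors' ITK-based implementation:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (rotation R, isotropic scale
    s, translation t) mapping 3-D fiducials `src` onto `dst`.

    Both inputs are (N, 3) arrays of corresponding landmark positions.
    Rotation (3 parameters), translation (3) and one scale make this
    the seven-parameter transform of the first registration level.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    # SVD of the cross-covariance gives the optimal rotation.
    U, S, Vt = np.linalg.svd(dc.T @ sc / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:      # guard against reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / sc.var(0).sum()
    t = mu_d - s * R @ mu_s
    return R, s, t
```

With the four 10-20 landmarks (nasion, inion, left and right preauricular points) as `src`/`dst`, a point x in the EIT image maps to s·R·x + t in the model space; the second, nine-parameter level would replace the single scale by per-axis scales fitted to the 31 electrode pairs.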


4. RESULTS

Our prototype system has been developed in Visual C++ using ITK (the Insight Toolkit). ITK [www.itk.org] is an open-source system originally developed for image processing in the Visible Human Project. ITK provides a collection of C++ classes for image (especially medical image) segmentation, registration and processing.

Figure 3 illustrates our preliminary experimental results. The images displayed here are 2D slices (central sagittal plane) of the datasets. The EIT dataset used in this experiment is one of the clinical datasets measured in [2].

Figure 3: Experimental results. In this figure, (a) and (b) show the central sagittal planes in the EIT dataset and the anatomical model separately; (c) and (d) describe the positions of the nasion and inion in each modality; (e) presents the initial registration result; (f) and (g) display three electrode positions on the central sagittal plane in each modality; and (h) shows the final registration result in this experiment.

5. CONCLUSION

Based on the limited information available in EIT images, a fiducial-based registration scheme has been presented in this paper. We have developed an initial 3D system for the registration of EIT images with the Montreal brain atlas. This then allows the viewer to visually assess EIT intensity with reference to anatomical structures.
Current work is focusing on extending this research on two fronts: the development of a true 3D registration system, and the development of visualization tools to explore 3D EIT data quantitatively and qualitatively with reference to a structurally segmented reference brain.

A research project at Middlesex University is investigating the construction of personal head meshes to improve the reconstruction quality [14, 15]. Although our method is proposed in the


context of EIT reconstruction with a standard head mesh, no special information about the standard mesh is used in this method, so it could also be used to register EIT images based on personal head meshes.

REFERENCES

[1] Holder D.S. (1992). "Detection of cerebral ischaemia in the anaesthetized rat by impedance measurement with scalp electrodes: implications for non-invasive imaging of stroke by electrical impedance tomography." Clin. Phys. Physiol. Meas. 13: 63-76.
[2] Tidswell, T., A. Gibson, et al. (2001). "Three-Dimensional Electrical Impedance Tomography of Human Brain Activity." NeuroImage 13: 283-294.
[3] Liston A.D. (2004). "Models and Image Reconstruction in Electrical Impedance Tomography of Human Brain Function". PhD Thesis, Middlesex University.
[4] Boone K., Lewis A.M. and Holder D.S. (1994). "Imaging of cortical spreading depression by EIT: implications for localization of epileptic foci". Physiological Measurement 15: 189-198.
[5] Rao, A., A. Gibson, et al. (1997). "EIT imaging of electrically induced epileptic activity in anaesthetised rabbits." Med. Biol. Eng. Comput. 35(1): 3274-3284.
[6] Lionheart W.R.B. (1997). "Conformal uniqueness results in anisotropic electrical impedance imaging". Inverse Problems 13: 125-134.
[7] Brown B.H. and Seagar A.D. (1987). "The Sheffield data collection system". Clin. Phys. Physiol. Meas. 8 Suppl. A: 91-97.
[8] Wilson A.J., Milnes P., Waterworth A.R., Smallwood R.H. and Brown B.H. (2001). "Mk3.5: a modular, multi-frequency successor to the Mk3a EIS/EIT system". Physiol. Meas. 22: 49-54.
[9] Zitova, B. and J. Flusser (2003). "Image registration methods: a survey." Image and Vision Computing 21: 977-1000.
[10] Maintz, J. B. A.
and M. A. Viergever (1998). "A Survey of Medical Image Registration." Medical Image Analysis 2: 1-37.
[11] Binnie, C., Rowan, A., and Gutter, T. (1982). "The 10-20 system". In A Manual of Electroencephalographic Technology, pp. 325-331.
[12] Hill, D. L. G., P. G. Batchelor, et al. (2001). "Medical Image Registration." Phys. Med. Biol. 46: R1-R45.
[13] Collins, D. L., A. P. Zijdenbos, et al. (1998). "Design and construction of a realistic digital brain phantom." IEEE Transactions on Medical Imaging 17(3): 463-468.
[14] Bayford R.H., Gibson A., Tizzard A., Tidswell A.T., and Holder D.S. (2001). "Solving the forward problem for the human head using IDEAS: a finite element modelling tool". Physiological Measurement 22(1): 55-63.
[15] Tizzard A., Horesh I., Yerworth R.J., Holder D.S., and Bayford R.H. (2005). "Generating accurate finite element meshes for the forward model of the human head in EIT". Physiological Measurement 26(2): S251-263.


Application of Geodesic Active Contours to the Segmentation of Lesions on PET (Positron Emission Tomography) Images

Xiaohong Gao, Dawit H. Birhane, John Clark*
School of Computing Science, Middlesex University, London N14 4YZ, UK
*Wolfson Brain Imaging Centre, University of Cambridge, CB2 2QQ, UK

Abstract - Positron Emission Tomography (PET) is increasingly used in diagnosis and surgery thanks to its capability of showing nearly all types of lesions, including tumours and head injury. However, due to the low resolution of the images, segmentation of lesions presents great challenges. In this study, a unified framework employing a fast geodesic active contour model is implemented, integrating a geodesic active contour, edge alignment with the gradient field and a minimal variance method. However, processing an image takes several minutes, as the contour tries to find the edge iteratively. To reduce the computational cost required by a direct implementation of the level set formulation, a narrow band level set and multi-scale approach is used for the initialisation, while a multiplicative operator splitting scheme is implemented for the level set formulation, which shows very promising results.

I. INTRODUCTION

Active contours, also called 'snakes', have been studied extensively since they were published in 1988 [1] and have been applied to nearly all sorts of images for object segmentation and boundary delineation. However, like many other methods, a contour model that works very well for one kind of image may not perform well (if it works at all) on other images.
This is because different images have their own characteristics, and many methods are developed based on those characteristics, for example, colour information in colour images. Many models of active contours have therefore been developed to fit each application, including geodesic active contours, which are based on geometric information.

II. BACKGROUND

Within image segmentation, deformable models, or snakes, have been extensively studied and widely used with promising results. The idea behind the active contour model is that the user specifies an initial guess for the contour, which is then moved under the influence of internal forces defined within the curve and external forces computed from the image data. The internal forces are designed to keep the model smooth during deformation. The external forces are defined to move the model toward an object boundary or other desired features within an image. By constraining extracted boundaries to be smooth and incorporating other prior information about the object shape, deformable models offer robustness to both image noise and boundary gaps, and allow boundary elements to be integrated into a coherent and consistent mathematical description. The term deformable models first appeared in the work of Kass and his collaborators in the late eighties [1]. Since its publication, deformable models have grown to be one of the most active and successful research areas in image segmentation.

There are two types of deformable models: parametric deformable models and geometric deformable models. Parametric deformable models represent curves and surfaces explicitly in their parametric forms during deformation. The approach is based on deforming an initial contour C_0 towards the boundary of the object to be detected.
The deformation is obtained by trying to minimize a function designed so that its (local) minimum is obtained at the boundary of the object. The energy function is composed of two components: one controls the smoothness of the curve and the other attracts the curve towards the boundary.

The parametric deformable model is very compact and robust to both image noise and boundary gaps, as it constrains the extracted boundaries to be smooth. However, it can severely restrict the degree of topological adaptability of the model, especially if the deformation involves splitting or merging of parts. This depends on the parameterization of the


curve and is not directly related to the object's geometry.

The geometric deformable models, also called geometric active contours or level sets, on the other hand handle topological changes naturally. In these active contour models, the curve propagates (deforms) by means of a velocity that contains two terms: one related to the regularity of the curve, and the other shrinking or expanding it towards the boundary. These models, based on the theory of curve evolution [2] and the level set method [3], represent curves and surfaces implicitly as a level set of a higher-dimensional scalar function. Their parameterizations are computed only after complete deformation, thereby allowing topological adaptability to be easily accommodated.

Parametric deformable curves can be viewed as Lagrangian geometric formulations wherein the boundary of the model is represented in a parametric form. These parameterized boundary representations encounter difficulties when the dynamic model, embedded in a noisy data set, is expanding/shrinking along its normal field and sharp corners and cusps develop. Due to the hyperbolic nature of the equations of motion, for an evolution with constant speed in the normal direction of the curve, singularities occur frequently and rapidly. Numerical implementations such as standard central difference approximations are not able to estimate and approximate these situations, as they build ripples on the curve and become unstable.

III.
THEORY

It all started with snakes [4], or active contour models, which are computer-generated curves that move within images to find object boundaries. These curves use an energy-minimising method to deform to fit image features, appearing to slither across images like snakes [4] (a phenomenon known as hysteresis). Snakes lock on to nearby minima in the potential energy generated by processing an image. From any starting point, subject to certain constraints, a snake will deform into alignment with the nearest salient feature in a suitably processed image; such features correspond to local minima in the energy generated by processing the image. Snakes thus provide a low-level mechanism that seeks appropriate local minima rather than searching for a global solution. In addition, high-level mechanisms can interact with snakes, for example, to guide them towards features of interest. Unlike most other techniques for finding image features, snakes are always minimising their energy. Changes in high-level interpretation can therefore affect a snake during the minimisation process, and even in the absence of such changes the model will still exhibit hysteresis in response to a moving stimulus.

If a contour is expressed as C(p), where C: [0,1] -> R^2, p -> C(p), and an image is represented as I: [0,a] x [0,b] -> R^+, then the snake energy can be expressed as

E_snake = ∫_0^1 E_internal(C) dp + ∫_0^1 E_external(C) dp + ∫_0^1 E_image(C) dp    (1)

which shows that a contour is influenced by internal and external constraints, and by image forces.
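In practice the integrals in Eq. (1) are evaluated discretely on a polygon of contour points. The following is an illustrative sketch under the usual assumptions for this formulation (tension/stiffness internal term, gradient-magnitude image term, as in Eqs. (2)-(3) of this section); the function and parameter names are ours:

```python
import numpy as np

def snake_energy(pts, grad_mag, alpha=0.1, beta=0.05, lam=1.0):
    """Discrete snake energy for a closed contour.

    `pts` is an (N, 2) array of contour points (row, col); `grad_mag`
    is |grad I| sampled on the image grid.  First differences stand in
    for dC/dp (tension), second differences for d2C/dp2 (stiffness);
    the image term rewards high gradient magnitude under the contour.
    """
    d1 = np.roll(pts, -1, axis=0) - pts
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    e_int = alpha * (d1 ** 2).sum() + beta * (d2 ** 2).sum()
    # Sample the gradient image at the (rounded) contour points.
    r, c = pts.round().astype(int).T
    e_img = -lam * (grad_mag[r, c] ** 2).sum()
    return e_int + e_img
```

Gradient descent on this quantity moves the points toward edges (lower image energy) while the difference terms penalise stretching and bending, which is the balance the prose above describes.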
If the internal and image forces are expressed as

E_internal(C) = α |∂C/∂p|^2 + β |∂^2C/∂p^2|^2    (2)

E_image(C) = -λ |∇I(C(p))|^2    (3)

where α, β, λ are positive weighting constants, then, from the point of view of geometric characteristics, the internal constraints give the model tension and stiffness: the first-order term makes the snake behave like a membrane (i.e. resist stretching), while the second-order term makes the snake act like a thin plate (i.e. resist bending).

Image energy is used to drive the model towards salient features such as light and dark regions, lines, edges, and terminations.

This energy is minimised by iterative gradient descent according to forces derived using variational calculus and Euler-Lagrange theory. In addition,


internal (smoothing) forces produce tension and stiffness that constrain the behaviour of the models; external forces may be specified by a supervising process or a human user. As is characteristic of gradient descent, the energy minimisation process is unfortunately prone to oscillation unless precautions – typically the use of small time steps – are taken.

The basic energy model is not capable of handling changes in the topology of the evolving contour when direct implementations are performed. Therefore, geometric models of active contours were proposed to overcome these disadvantages [5]; these are based on the theory of curve evolution and geometric flows. The topology of the final curve will be that of C_0 (the initial curve), unless special procedures, often heuristic, are implemented for detecting possible splitting and merging [6-8].

A. Level Set Methods

The Osher-Sethian [9] level set method tracks the motion of an interface by embedding the interface as the zero level set of the signed distance function. The motion of the interface is matched with the zero level set of the level set function. The level set approach represents a curve as a level set, or equal-height contour, of a given function. The intersection between this function and a plane parallel to the coordinate plane yields the curve. This function is an implicit representation of its level set.
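As a concrete illustration of evolving a curve through its implicit representation, the toy sketch below advances φ on a grid under curve-shortening flow (normal speed equal to curvature, the geometric heat equation discussed next). It is an assumption-laden example, not the paper's implementation; the optional `band_width` argument restricts updates to a thin band around the zero level set, previewing the narrow band method described below.

```python
import numpy as np

def level_set_step(phi, dt=0.1, band_width=None):
    """One explicit Euler step of phi_t = kappa * |grad phi|.

    Curvature kappa of the level sets is computed from first and
    second central differences of phi.  With phi negative inside the
    curve, a convex front shrinks under this flow (curve shortening).
    If band_width is given, only points with |phi| < band_width are
    updated, so the work scales with the band rather than the grid.
    """
    gy, gx = np.gradient(phi)
    gyy, gyx = np.gradient(gy)
    _, gxx = np.gradient(gx)
    eps = 1e-8
    kappa = (gxx * gy**2 - 2 * gx * gy * gyx + gyy * gx**2) / \
            (gx**2 + gy**2 + eps) ** 1.5
    update = dt * kappa * np.sqrt(gx**2 + gy**2)
    if band_width is not None:
        update = np.where(np.abs(phi) < band_width, update, 0.0)
    return phi + update
```

Iterating this on the signed distance function of a circle shrinks the enclosed region, which is the hallmark of the Euclidean curve-shortening flow; topology changes (splitting, merging) are handled with no extra machinery because only φ is updated.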
The idea is to represent a curve as the zero level set of a higher-dimensional function; the motion of the curve is then embedded within the motion of the higher-dimensional surface.

Let C be a smooth closed initial curve in the Euclidean plane R^2 and C(s,t): [0, L(t)] x [0,T] -> R^2 be the one-parameter family of curves generated by moving this initial curve along its normal vector field N with speed V_N, where s is the arc length of the curve C at time t and L(t) is the length of the curve at time t. The curve evolution equation describing the differential change of the curve in time is given by

C_t = V_N N,   C(s,0) = C_0(s)    (4)

C_t = (α + εκ) N    (5)

where α, ε ∈ R. The tangential component affects only the parameterization, and not the geometric shape, of the propagating curve. If we take α = 0, ε = 1, we get the family of plane curves flowing according to the geometric heat equation

C_t = κN    (6)

The above differential equation describes a curve moving along its normal direction with a velocity equal to its curvature. This equation has a number of properties which make it very useful in image processing. In particular, it is the Euclidean curve-shortening flow, in the sense that the Euclidean perimeter shrinks as quickly as possible as the curve evolves [9-11].

Let φ(x, y, t): R^2 x [0, T] -> R be an implicit representation of the curve C(s,t), so that the zero level set φ(x, y, t) = 0 is the set of points constituting the curve C(s,t). In other words, the trace of the curve C at time t is given by the zero level set of the function φ at time t:

C(t) = φ(.,.,t)^{-1}(0)    (7)

That is, C(t) is the zero level set of the time-varying surface function φ(x, y, t). The propagation rule for φ that yields the correct curve propagation is given by

∂φ/∂t + V_N |∇φ| = 0,   given φ^{-1}(0) = C(0)    (8)

B.
Narrow B<strong>and</strong> Level Set MethodC(0)The embedding of the interface as the zero levelset of a higher dimensional function requirescalculations to be performed over the entirecomputational domain, not just the zero level setcorresponding to the interface itself.Each grid point contains the value of the level setfunction at that point, <strong>and</strong> thus there is an entirefamily of contours, only one of which is the zerolevel set. The level set method st<strong>and</strong>s at each gridpoint <strong>and</strong> updates its value to correspond to the47


motion of the surface. This produces a new contour value at that grid point. Instead, a computationally efficient and accurate method can be achieved through the use of an adaptive methodology which limits computational labour to a neighbourhood of the zero level set.

The narrow band method limits computational labour to grid points located in a narrow band around the front. Grid points around the front are kept in a one-dimensional array and updated using the level set equation until the interface nears the edge of this narrow band, at which point a new narrow band is re-initialized.

The level set approach requires a speed function V_N defined on the entire domain, not simply on the zero level set corresponding to the front. Using the narrow band approach, the speed extension, which extrapolates the velocity from the zero level set, can be done only for points lying near the front, as opposed to all points in the computational domain.

C. Geodesic Active Contour

A geodesic curve in a Riemannian space [12, 13] with a metric derived from the image content can be seen as a particular case of the classical energy snakes model. (A geodesic curve is a (local) minimal-distance path between given points.) This means that, in a certain framework, boundary detection can be considered equivalent to finding a curve with minimal weighted length.
This interpretation gives a new approach for boundary detection via active contours, based on geodesic or local minimal-distance computations.

The geodesic active contour is represented as the zero level set of a 3D function, where the geodesic flow includes a new component in the curve velocity, based on image information, that allows boundaries with high variation in their gradient, including small gaps, to be accurately tracked. The geodesic active contour model is derived by minimising the energy functional defined in the parametric active contour. The model does not require special stopping conditions and allows simultaneous detection of interior and exterior boundaries. The solution to the geodesic flow exists in the viscosity framework, and is unique and stable. The gradient descent flow of the geodesic active contour is derived from the classic active contour model.

D. Work Undertaken in This Study

To delineate a lesion from 2D PET images, where the lesion corresponds to a region whose pixels are of different grey-level intensity, the geodesic contour evolves to the desired boundary according to intrinsic geometric measures of the image. When a user provides an initial guess of the contour (seed), the contour propagates inward or outward in the normal direction, driven toward the desired boundaries by image-driven forces. The initial contour can be placed anywhere in the image domain; however, it must be placed inside the desired shape or enclose all the constituent shapes. The final contour is extracted when the evolution stops. The front contour evolves according to

C_t = g(I)(1 - εκ)N - (∇g · N)N    (9)

where the level set equation takes the form of Eq. (10) to detect boundaries with high differences in their gradient values, as well as small gaps:

φ_t + g_I(1 - εκ)|∇φ| - β∇P · ∇φ = 0    (10)

where

g_I(x,y) = 1 / (1 + |∇(G_σ ∗ I(x,y))|)    (11)

which shows that the image I(x,y) is convolved with a Gaussian smoothing filter G_σ whose characteristic width is σ.
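The stopping term of Eq. (11) and the edge-attraction potential of Eq. (12) reduce to Gaussian-derivative filtering of the image. A sketch using SciPy (illustrative only, not the authors' code; the function name is ours):

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def edge_stopping_terms(image, sigma=2.0):
    """Compute g(x, y) of Eq. (11) and P(x, y) of Eq. (12).

    g -> 0 on strong edges of the Gaussian-smoothed image, halting the
    front there, and -> 1 in flat regions; P = -|grad(G_sigma * I)| is
    the potential whose gradient pulls the surface towards edges.
    """
    grad_mag = gaussian_gradient_magnitude(image.astype(float), sigma)
    g = 1.0 / (1.0 + grad_mag)
    P = -grad_mag
    return g, P
```

For the noisy PET images considered here, σ trades off noise suppression against edge localisation: a larger σ smooths away noise-induced minima of g at the cost of blurring the lesion boundary.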
The term

    P(x, y) = -|\nabla(G_\sigma * I(x, y))|    (12)

attracts the surface to the edges in the image, whilst the coefficient β controls the strength of this attraction.

IV. RESULTS

Figure 1 gives the results of the delineation of lesions for brain PET images. Three methods are compared: (i) the Geometric Level Sets Method [8], (ii) the Fast Geodesic Active Contour


Gradient [12], and (iii) the Fast Geodesic Active Contour with Narrow Band implemented in this paper.

Mathematically, methods (ii) and (iii) are similar; the major difference is that method (iii) is somewhat faster than method (ii). From the delineation point of view, the method (iii) implemented in this study produces contours closer to the lesion. However, methods (ii) and (iii) exhibit splitting and merging behaviour, so the final results may contain segments that are not expected, as shown in Figure 2.

Figure 1. Experimental results on the delineation of lesions from PET brain images. The first row shows the original images, the second row the Geometric Level Sets Method [8], the third row the Fast Geodesic Active Contour Gradient [12], and the fourth row the Fast Geodesic Active Contour with Narrow Band implemented in this study.

V. CONCLUSION AND DISCUSSION

In this study, we have implemented a geodesic contour method for the segmentation of PET brain images. PET images have high noise-to-signal ratios; therefore most methods do not work well on them. With the implementation of the geodesic active contour method with a narrow band, the delineation of lesions can be performed on many PET images.

Figure 2. Example of segmentation of a PET brain image. The description of each row is the same as in Figure 1.


Because the method is implemented iteratively, obtaining the best results takes several minutes, which is not very encouraging. An optimised method will be developed to further improve this approach.

The method also needs user intervention, which may not always be desirable. Hence, an automatic or semi-automatic approach will be developed in the future.

REFERENCES:

[1]. Kass, M., Witkin, A., and Terzopoulos, D., Snakes: Active contour models. International Journal of Computer Vision, 1:321-331, 1988.
[2]. Malladi, R., Sethian, J.A., and Vemuri, B.C., Shape modelling with front propagation: A level set approach. IEEE Trans. on PAMI, 17:158-175, 1995.
[3]. Sapiro, G., Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001.
[4]. Ivins, J., Porrill, J., Everything you always wanted to know about snakes, AIVRU Technical Memo #86, March 2000, http://www.computing.edu.au/~jim/thesis.html.
[5]. Caselles, V., Kimmel, R., Sapiro, G., Geodesic Active Contours, International Journal of Computer Vision, 22(1), 61-79, 1997.
[6]. Leitner and Cinquin, Dynamic segmentation: Detecting complex topology 3D objects. Proc. of Eng. in Medicine and Biology Society, Orlando, Florida, 1991.
[7]. McInerney, T. and Terzopoulos, D., Topologically adaptable snakes. Proc. ICCV, Cambridge, 1995.
[8]. Caselles, V., Catte, F., Coll, T., and Dibos, F., A geometric model for active contours. Numerische Mathematik, 66:1-31, 1993.
[9]. Sapiro, G., Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001.
[10]. Sethian, J.A., Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science, Cambridge University Press, 1999.
[11]. Osher, S., Sethian, J.A., Fronts Propagating with Curvature Dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations, Journal of Computational Physics, 79, 12-49, 1988.
[12]. Paragios, N., Mellina-Gottardo, O., Ramesh, V., Gradient Vector Flow Fast Geodesic Active Contours, ICCV'01, 2001.
[13]. Goldenberg, R., Kimmel, R., Rivlin, E., and Rudzsky, M., Fast Geodesic Active Contours, IEEE Transactions on Image Processing, 10(10), pp. 1467-1475, 2001.


Medical Image Databases - PACS


PACS, Teleradiology and Telemedicine in Norway

PACS & Teleradiology and Telemedicine & eHealth in Norway: Administration and delivery of services

MIT 2005

Roald Bergstrøm, Senior Adviser
KITH - Norwegian Centre for Informatics in Health and Social Care, Trondheim, Norway
Roald Bergstrøm, KITH, Sukkerhuset, N-7489 Trondheim, Norway - roald.bergstrom@kith.no

Abstract

IT in health and social services has the potential to improve welfare, while simultaneously improving the efficiency of the systems. By the end of 2005 nearly 100% of the hospitals in Norway will have digital x-ray with RIS and PACS installed; Norway is the first country in the world to become fully digitized in this field, and all the hospitals can communicate over a National Health Network. In this paper we give an introduction to national IT strategies for the health and social sectors, and point out major challenges for the future of PACS & Teleradiology and eHealth & Telemedicine in Norway. IT in home care and community care will provide users with better services closer to home in the coming years. National strategies and action plans are important, but the funding necessary for the recommended actions must also be provided, and organisational issues are important.

Fig 1. Map of Europe showing the long travelling distances in Norway

I. INTRODUCTION

A. Health services in Norway

Norway provides extensive health services and a well-developed social security net.
About 35% of the annual Norwegian state budget, or 7-8% of the gross national product, is spent on health and social care, making Norway one of the European countries - and the Nordic country - with the highest level of public spending on health per capita. The health and social care sector in Norway, as in other modern societies, faces significant challenges. Its share of the nation's GNP is already substantial, and the increasing mean life expectancy and falling birth rates will dramatically increase the future burden. A specific Norwegian challenge is the low population density: inhabitants may have long travelling distances to medical services, hospitals are scattered and some are small, and not all hospitals can host every medical discipline.

B. KITH

KITH is a national competence centre with close connections to end-users, vendors, research institutions and the government. KITH is owned by the Ministry of Health and Care Services (70%), the Ministry of Labour and Social Affairs (10.5%) and the Association for Municipalities (19.5%). KITH has five focus areas:

• Codes and terminology
• Electronic information exchange
• Information security
• Electronic Health Record systems (EHR)
• Digital imaging systems/radiology.

II. METHODS AND IMPORTANT FACTORS FOR TELEMEDICINE

A. National IT strategies and action plans for health and social care

Investment in IT and making broadband available throughout the country is part of the Government's E-Norway plan, which has established ambitious goals for IT development within both the private and public sectors. IT is also an important tool in the process of implementing the latest national health reform.
Some of the main issues in the reform are:


• Regular GP: every citizen has one doctor
• Free choice of hospital
• Central government ownership and responsibility for the hospitals and specialist health services.

Information Technology (IT) has been regarded as a useful tool to improve health services for many years, particularly in primary care. Back in 1997, the Ministry of Health and Social Affairs released the first national action plan for IT development in the health and social sectors. The main objectives were to:

• stimulate electronic interaction and exchange
• strengthen and increase collaboration and efficiency in and between health and social services
• improve contact with patients, clients and those in need of care
• improve the quality of services.

B. Funding

In Norway, central government financing of new telemedicine pilot projects has been important for reaching these goals.

C. National Competence Centres

A significant contribution to the Norwegian development in health informatics and telemedicine is made by the national centres in the area:

KITH - The Norwegian Centre for Health Informatics is a limited company owned by the Ministries for health and social care and the Association for Municipalities.
KITH contributes to the development and implementation of standardized terminology and coding systems, secure information exchange, and standards for EHR and PACS systems.

NST - The Norwegian Centre for Telemedicine is part of the University Hospital in Tromsø and aims to provide research, development and consulting in telemedicine, and to promote the introduction of telemedicine services in practice. Since 2002, the NST has been designated by the WHO as a collaborating centre for telemedicine.

KoKom is a national centre working with emergency medicine. The objective of the centre is to act as an adviser to government, both centrally and locally. One of its main projects is the acceptance of TETRA as the national standard for radio communication in emergency services.

NSEP - The Research Centre for EHR systems was recently established at the Norwegian University of Science and Technology (NTNU) in Trondheim, with funding from the Research Council of Norway and the university itself. The objectives of the centre are to perform multidisciplinary research and university-level education related to EHR systems.

D. National Broadband Health Net

The Norwegian Health Net provides a good foundation for electronic interaction and information exchange in the health sector. The Norwegian Health Net shall ensure data quality, security of information, and protection of privacy in the exchange of sensitive information. National funding is provided for the development of different services, standards and security guidelines, as well as for investment in broadband.

E.
Electronic interaction within the health and social services

The EHR system, whether implemented by hospitals, GPs, or other care providers, is the key to an efficient flow of information. All care providers are required by legislation to document what they do, and an extensive implementation of EHR systems amongst all providers is a prerequisite for efficient electronic cooperation.

Norway has strong legislation regarding the handling of person-related information. Information security will be addressed by establishing basic requirements for information security, to which communicating partners have to declare their adherence. Specific attention is also given to the widespread implementation of digital signatures/PKI (public key infrastructure), where the National Social Security Agency has brought forward a solution available for the whole health-care sector.

F. Nationwide Social Security Number

Every inhabitant of Norway gets a unique social security number.

III. PACS AND TELERADIOLOGY

By the end of 2005 nearly 100% of the hospitals in Norway will have digital x-ray with RIS and PACS installed, making Norway one of the first countries in the world to be fully digitized within x-rays.

The hospitals in Norway are state owned, but they work as Private Health Enterprises. They are organised in regions with separate boards and a managing director for each region. The regions communicate via the National Health Network (broadband communication).


A. Teleradiology traditions

Another area in which electronic communication between actors in the health sector is crucial is teleradiology. Norway has a long telemedicine tradition and the pioneers started with teleradiology services many years ago. Teleradiology is in use for consulting in emergencies, for second opinions and for consulting between hospitals and the primary health care.

B. Integration

Integration is a key requirement for all PACS systems: PACS has to be integrated with RIS, and RIS has to be integrated with HIS/PAS. Initially PACS was a departmental unit, but nowadays it is part of an enterprise system. The role of RIS and PACS within the hospital has evolved and is now moving towards full integration, with PACS as the imaging layer of the EPJ (Electronic Patient Journal). In the future the PACS will be invisible to the clinicians; they will only see and work with the EPJ, with the PACS as an integrated part.

Fig 2. Implementations of PACS in European countries (Norway, Benelux, Italy, England, Germany, France, Spain; 2001 vs 2005)

C. IHE

The hospitals in Norway have chosen different solutions for RIS and PACS. Although all image communication uses the DICOM standard, we do not experience that information exchange works seamlessly between the hospitals. The focus in the future will be on integration and on the exchange of information across the hospitals. Norway has joined the IHE organisation and participates in a Scandinavian mirror-group. By the end of 2005 the new XDS profile will be used for sharing information between different PACS and RIS systems at different hospitals within a region.

Fig 3. The new XDS profile from IHE is used in Western Norway to achieve communication between different PACS and RIS systems

D. Storage

Medical images have to be on-line, 24 hours a day. Discussions about archiving strategy and performance are going on, with details about image retrieval times, disaster recovery, integration of images and text, and selection of storage media. Offsite archiving is being introduced, and ASP models (Application Service Provider) are going to be used by some hospitals and private imaging centres. The amount of data produced by the imaging modalities increases constantly, and the only way to manage huge digital archives seems to be a SAN solution (Storage Area Network). All images are stored on disk, with redundant solutions containing at least two separate archives and a backup.

E. Broadband

Exchange of digital imaging information requires broadband communication. PACS has become an important application on the National Health Net. Central storage (for the Regional Health Enterprises) and SAN solutions are rapidly growing. A central archive is without doubt a cost-saving strategy. A central archive combined with Web technology makes it possible for the health enterprise to distribute images, interpretations and related data throughout the enterprise, increasing clinicians' access to PACS with greater value to the organization.

F. Information security with a shared physical storage

Some regions in Norway have implemented PACS as a regional solution for all Health Enterprises within the region. In such a regional system one solution is that the Health Enterprises share a physical storage unit for the PACS (and RIS) information. Due to Norwegian health legislation, Health Enterprises are not allowed to share patient information indiscriminately. This means that a shared physical storage unit must be divided into logical storage areas for each Health Enterprise, so that access can be linked to the different Health Enterprises. The health legislation also specifies that access to information owned by a different Health Enterprise must be evaluated and approved for each individual access.
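As an illustration of this partitioning requirement, the following sketch shows one shared physical store divided into logical areas per Health Enterprise, with cross-enterprise access refused unless individually approved, and every access logged. This is hypothetical Python: the class and method names are our own invention, not any real PACS interface.

```python
class RegionalArchive:
    """One shared physical store, partitioned into logical areas per Health Enterprise."""

    def __init__(self, enterprises):
        # One logical storage area per Health Enterprise.
        self.areas = {e: {} for e in enterprises}
        # Every access is recorded, so individual accesses can be evaluated.
        self.audit = []

    def store(self, enterprise, study_id, data):
        """Write a study into the owning enterprise's logical area."""
        self.areas[enterprise][study_id] = data

    def fetch(self, requester, owner, study_id, approved=False):
        """Read a study; cross-enterprise reads need per-access approval."""
        if requester != owner and not approved:
            raise PermissionError("cross-enterprise access requires approval")
        self.audit.append((requester, owner, study_id))
        return self.areas[owner][study_id]
```

The point of the sketch is only that the partition and the per-access approval live in front of the shared disk, so one physical unit can still satisfy the legal separation between enterprises.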


G. Investments

Digital X-ray represents an important share of investments within the sector. Every year billions of X-ray examinations are carried out, and every examination produces several pictures. The large number of examinations and X-rays makes a digital system more practical than paper copies.

A special emphasis is put on exchanging digital images between hospitals through the Norwegian Health Net, thus allowing cooperation and second opinions, as well as the rational operation and increased availability of radiology services. Standardization is required to communicate between the different systems and, therefore, the Directorate of Health and Social Affairs has suggested that a national project on these issues, involving all regional health companies, should be supported. The project also includes the organizational development required to release the benefits, security aspects and cost-benefit analysis. The project is based on the successful IHE-NORWAY activities (Integrating the Healthcare Enterprise) that started in 2003 with KITH as project manager.

IV. TELEMEDICINE AND eHEALTH

Telemedicine comprises medical diagnostics and treatment performed using digital information technology to transfer patient information, including medical images and PACS. To a larger extent than before, telemedicine will enable people to be treated, or nursed, in their local environment or in their homes. Telemedicine solutions have been brought into use throughout the country to ensure a greater availability of services. To achieve this, two types of measures are given priority:

• The stimulation of broadband development between hospitals and between hospitals and the primary health services.
• The clarification of responsibility, rules, guidelines and costs in connection with telemedical consultations.

A. Telemedicine services

Telemedicine - excellent health services available to all. One of the main reasons for the Norwegian emphasis on telemedicine is to achieve the vision of equal health services for all patients in a country with a low population density and long travelling distances to the nearest hospital or medical expert. Operational solutions are in place in a variety of medical disciplines and care situations. Some examples are:

• Sounds, images and videos recorded by the primary care doctor and transmitted to a specialist; examples are stethoscope sounds, dermatology, ear-nose-throat conditions, and examination of the optic fundus for diabetes patients.
• Telepathology - pathological support for hospitals lacking this capacity.
• Teleradiology - as imaging goes digital, support can be given at a distance.
• Videoconferencing for psychiatry and cancer care.

B. Home-care

Care in the homes of the elderly and of other groups in need of care is presently undergoing a development in which IT plays an important role. Important developments within this field include systems that can communicate with the hospitals and other organizations within the health sector, and mobile computers that enable communication while "in the field", without the need to go to a common office.

C. IT for groups at risk of social exclusion

The Information Society promises new opportunities for social inclusion, and has the potential to overcome traditional barriers of mobility, distance and knowledge resources. It can generate new services for disadvantaged people and for people seeking employment, or at risk in the labour market. On the other hand, IT also introduces new risks of exclusion that need to be prevented. Internet access and digital literacy are a must for maintaining employability and adaptability, and for taking economic and social advantage of on-line contents and services.

D. EHR – the core of patient information

According to Norwegian legislation, each health-care service provider has to keep its own records, which can be in digital form, and information between service providers is only to be transferred on a need-to-know basis. A national EPR standard was released in 2001. This standard mainly covers issues related to architecture, archiving and security. A requirement specification for health stations and health-care in primary schools, and another requirement specification for community care, are based on this standard.

With few exceptions, all GPs and private specialists have EHR systems; this has been the situation for some years. 80% of hospital patients are covered by EHR.


E. IT for communication in health-care

The Nordic countries are at the forefront of eHealth applications. The number of general practitioners using EHR is among the highest in Europe.

V. THE FUTURE FOR EHEALTH AND TELEMEDICINE

A. Challenges

The application of information technology in health-care can be seen as follows:

• An adequate technical infrastructure allows easy and secure many-to-many communication.
• An agreed information structure secures a common understanding and correct interpretation between the various applications.
• Standardisation and common concepts for information security tie it all together.

Governments are under pressure to deliver more value for taxpayers' money. Administrations have to deliver more and better services with equal or fewer resources. The challenge is to achieve productivity growth in the public sector, in order to create more opportunity for service improvement at equal cost. Moreover, with the ageing of the population, public administrations will have to make do with fewer employees and fewer working taxpayers, while still having to provide largely the same number of services at better quality. Civil servants demand more interesting jobs, with more opportunity for self-development and personal interaction.

IT is not a universal solution for all challenges, but it may reduce the stress on the public sector and create new opportunities.

B.
Information technology might provide an answer

"eHealth is the single most important revolution in healthcare since the advent of modern medicine, vaccines, or even public health measures like sanitation and clean water."

This statement is promising, but also radical, since information technology, in contrast to medicine and sanitation, is not an integral part of medical practice. Evidence for the above statement is still modest, but support is provided by drawing parallels to other sectors of society. The penetration of information technology into industry and private services (the car industry and banking are prominent examples) has had dramatic effects on quality and productivity; the time might now be right for public services to adopt it as well.

C. IT for health, care and social services

IT in health and social services has the potential to improve welfare, while simultaneously improving the efficiency of systems. There are several driving forces for IT in these sectors. One of the strongest is the demand for increased efficiency. This requirement can be expected to become even stronger as the population gets older, in combination with limited financing. Another driving force is the demand for individual treatment and care, combined with requirements for participation and information.
Care in the home is increasing and, for this group, IT may provide new opportunities.

Trends in favour of IT in health and social services:
• Increased proportion of elderly people
• Working time gets more expensive and computers less expensive
• Increased IT maturity
• Increased demand for individualized care
• Increased demand for more information and participation
• Increased requirements for integrity
• Increased demand for documentation and evaluation
• Increased demand for seamless service processes
• Increased care in the home

Forces that work against IT:
• Slow adjustment of laws and regulations
• Lack of management for change
• Lack of coordination and overview
• Old organizations and work processes
• Lack of common standards
• Attitudes
• Insufficient education and competence

Fig 4. The Health Enterprises and the amount of digital x-rays


Lung CT segmentation for image retrieval using the Insight Toolkit (ITK)

Joris Heuberger, Antoine Geissbühler, Henning Müller
University Hospitals of Geneva, Service of Medical Informatics
24 Rue Micheli-du-Crest, CH-1211 Geneva 14, Switzerland
Email: henning.mueller@sim.hcuge.ch

Abstract - Visual information retrieval is an emerging domain in the medical field, as it has been in computer vision for more than ten years. It has the potential to help better manage the rising amount of visual medical data currently produced. One of the proven application fields for content-based medical image retrieval as a diagnostic aid is the retrieval of lung CTs. The diagnostics of these images depend strongly on the texture of the lung tissue, and automatic analysis can be a valuable help. This article describes an algorithm to separate the lung tissue from the rest of the image, to reduce the amount of data that needs to be analysed for content-based retrieval and to focus the analysis on the really important part of the visual data. Most current solutions either use manual outlining for analysis or index the entire image without making a difference between lung tissue, other tissue, and background. As visual retrieval is usually applied to large amounts of data, our goal is to have a fully automatic algorithm for segmenting the lung tissue, and to separate the two lung sides as well. The database used for evaluation is taken from a radiology teaching file called casimage, and the retrieval component is an open source image retrieval engine called medGIFT.
Our current evaluation shows that the applied segmentation algorithm works on a large number of different cases and executes automatic segmentations for various data formats (DICOM, JPEG, ...). Segmentation quality does not need to be perfect around the outline; for image retrieval it is more important not to miss any important parts of the lung tissue, and a small number of pixels from surrounding tissue is not a problem. Difficult cases and workarounds are presented in the article.

I. INTRODUCTION

Content-based visual information retrieval has been an extremely active research area in the computer vision and image processing domains [1]. A large number of systems have been developed, mostly research prototypes but also commercial systems such as IBM's QBIC [2]. The main reason for the development of these systems is the ever-growing amount of visual data being produced in many fields, for example with the availability of digital consumer cameras at low prices, but also with the possibility to distribute data via the Internet. The medical field is no exception, and a rising amount of visual data is being produced [3]. The radiology department of the University Hospitals of Geneva alone currently produces more than 25,000 images per day. The importance of retrieval of medical images was identified early [4, 5] and a large number of projects have started [6] to index various kinds of medical images. Not all projects analyse the visual image content; many simply use the accompanying textual information [7]. This is often called content-based retrieval but should rather be called context-based retrieval, as the text describes the context in which the image was taken. Very few projects are currently used in clinical routine.
Most projects are developed as research prototypes without a direct link to a need in a clinical application.

Lung images have been analysed in the form of thorax radiographs for computer-aided diagnostics [8]. Most retrieval projects concentrate on CTs of the lung. A fairly simple approach is given in [9], analysing the lung tissue in fixed-size blocks. A more sophisticated approach is used for the ASSERT system [10, 11], where a database with selected slices and regions marked by hand was used. This approach needs much expensive manpower but led to good results. To use the system, an MD had to submit a selected slice of a series and mark the important region in the image. A real user test was even performed with ASSERT as a diagnostic aid [12]. An improvement in diagnostic quality was reached using the system, especially for less experienced MDs; the performance of experienced radiologists was unchanged.

Our goal was to make the process of retrieval less labour-intensive for the generation of the databases and for the query process. The lung tissue was to be separated from the rest of the image, and only the lung tissue was supposed to be stored and analysed for retrieval. For the segmentation, an open source (OS) image analysis tool is used, the Insight Toolkit (itk 1), which is frequently applied for the segmentation of medical images. The retrieval engine is called medGIFT 2, based on the GNU Image Finding Tool 3. The use of OS software facilitates the distribution of research results and the sharing of resources among research groups.

II. SEGMENTING LUNG CTS

Segmentation is a main domain of medical image processing. It is often important to separate regions or objects of interest from other parts of the image. Mostly, segmentation is semi-automatic and a seed point is needed.
Then, the structureis being segmented as exactly as possible, for example tomeasure its size, volume or form, in the case of a tumour.For us, the goal is not to have a perfect segmentation but analgorithm that does not need manual intervention. Goal is toquickly generate large example databases. It was not necessary1 http://www.itk.<strong>org</strong>/2 http://www.sim.hcuge.ch/medgift/3 http://www.gnu.<strong>org</strong>/software/gift/57


Proceedings of International Conference on <strong>Medical</strong> <strong>Imaging</strong> <strong>and</strong> <strong>Telemedicine</strong> (MIT 2005)August 16-19, Wuyi Mountain, Chinato analyse all slices for 3D segmentation as our case databasecontains only selected slices that represent a certain pathology.On the other h<strong>and</strong>, there was no possibility to use informationof connected slices to enhance segmentation. Lung imagesegmentation has also been applied to Thorax radiographies[13] but this seems to be a harder problem since lung bordersare fuzzier <strong>and</strong> the projection contains ribs <strong>and</strong> several levelsof other tissue. Segmentation algorithms for lung CTs inthe literature are mostly pixel–based methods [14–20]. Somework has been done on knowledge–based segmentation [21,22] taking into account a–priory knowledge on the structureof the lungs.In pixel–based methods, the first idea is to eliminate fattissue <strong>and</strong> bones. As the lung parenchyma has a very low–density, it is composed of low–intensity pixels in the CT scan.This property is exploited to separate the two lungs from thesurrounding tissue. Generally, the image is thresholded, eitherat a fixed value [15, 16, 19, 23] or based on a computed threshold[14, 18, 20, 24]. A study from Kemerink [25] investigatesthe influence of the threshold <strong>and</strong> shows that a threshold of–400 Hounsfield units (HU) delivers good results.As the air around the body has a very similar intensity to thelungs it will not be discarded by the thresholding, so it has tobe removed. Either it is removed before the thresholding [14,17, 20, 24] or afterwards [15, 16, 18, 23, 26]. Further steps areperformed to improve the result. Parasitical objects that remainare removed <strong>and</strong> holes inside the lungs are filled. 
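As a toy illustration of the fixed-threshold variant reported in the cited literature, the following sketch (our own example, not code from any of the referenced systems) binarises a small grid of Hounsfield values at -400 HU:

```python
# Hedged sketch: fixed-threshold lung masking at -400 HU, as investigated by
# Kemerink [25]. Grid values and function name are illustrative assumptions.
LUNG_THRESHOLD_HU = -400

def threshold_hu(image):
    """Binarise a 2D grid of HU values: 1 = candidate lung/air, 0 = body."""
    return [[1 if hu < LUNG_THRESHOLD_HU else 0 for hu in row]
            for row in image]

# Air (-1000), lung parenchyma (-850), fat (-90), bone (+400).
slice_hu = [[-1000, -1000, -1000, -1000],
            [-1000,  -850,   -90,   400],
            [-1000,  -850,  -850,   -90]]
print(threshold_hu(slice_hu))
# -> [[1, 1, 1, 1], [1, 1, 0, 0], [1, 1, 1, 0]]
```

The low-density mask still contains the surrounding air, which is why the removal and hole-filling steps described above are needed afterwards.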
Several techniques are used for segmentation, such as mathematical morphology [23, 26] and connected component analysis [15, 16, 18, 20, 23]. Another major improvement is the correction of the borders of the parenchyma. This is necessary when a nodule touches the border, which can lead to a bend in the contour. Such a correction is done by analysing the local curvature of every point of the contour [16, 19], applying a "rolling-ball" operator [14, 23] or mathematical morphology [18]. Gurcan et al. [17] developed a technique that compares, for each set of points in the border, the distance between them along the contour and along the line that connects them. Some studies identify the left and right side of the lungs. If the two lungs were merged in a previous step or because the tissues touch, they can be separated [15, 18, 20].

A. itk

itk is an open-source (OS) software system for medical image segmentation and registration. It has been developed since 1999 on the initiative of the US National Library of Medicine (NLM) of the National Institutes of Health (NIH). As an OS project, itk is used, debugged, maintained and upgraded by developers around the world. It can be downloaded from the itk web page. itk is composed of a large collection of functions and algorithms designed for medical image segmentation and registration. As the library is implemented in C++, it can be used on most platforms such as Linux, Mac OS and Windows. The decision to use itk was taken due to the quantity of segmentation tools it offers and the amount of research done based on it [27]. This allows us to concentrate on integrating tools rather than reprogramming and reinventing them.

B. Lung segmentation algorithm for image retrieval

Our lung segmentation algorithm follows these five steps:
1) The image is thresholded to separate low-density tissue (e.g. lungs) from fat.
2) The surrounding air, identified as low-density tissue, is removed.
3) Cleaning is performed to remove noise and airways.
4) A rolling-ball operator is used to rebuild lung borders.
5) Finally, the left and right lungs are identified and separated if needed.
Figure 1 illustrates the original image and the steps of the segmentation process.

Fig. 1. Segmentation steps: (a) original, (b) thresholding, (c) background removal, (d) airway and noise removal, (e) rolling-ball operator, (f) left and right lung separated.

1) Optimal thresholding: The first step is thresholding the image. A thoracic CT contains two main groups of pixels: 1) high-intensity pixels located in the body (body pixels), and 2) low-intensity pixels in the lung and the surrounding air (non-body pixels). Due to the large difference in intensity between these two groups, thresholding leads to a good separation. Since our algorithm needs to handle JPEG as well as DICOM files, the fixed threshold of -400 HU proposed by Kemerink [25] is not applicable. The method applied is the optimal thresholding defined by Hu et al. in [18]. This iterative procedure computes the value of a threshold so that the two groups of pixels are well separated. It works as follows: let T_i be the threshold value at step i, and let µ_b and µ_n be the average intensity values of body pixels (i.e. with intensity higher than T_i) and non-body pixels (intensity lower than T_i), respectively. The threshold for step i+1 is:

T_{i+1} = (µ_b + µ_n) / 2
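This update rule, iterated until the threshold no longer changes, can be sketched as follows (an assumed stand-alone implementation for 8-bit grey values, not the authors' itk code):

```python
# Hedged sketch of the iterative optimal-threshold update of Hu et al. [18]:
# the new threshold is the mean of the average body and non-body intensities.
def optimal_threshold(pixels, t0=128, max_iter=100):
    """Return the converged threshold for a flat list of grey values (0-255)."""
    t = t0
    for _ in range(max_iter):
        body = [p for p in pixels if p > t]       # high-intensity body pixels
        non_body = [p for p in pixels if p <= t]  # lung and surrounding air
        if not body or not non_body:
            return t
        mu_b = sum(body) / len(body)
        mu_n = sum(non_body) / len(non_body)
        new_t = (mu_b + mu_n) / 2
        if new_t == t:                            # T_c == T_{c-1}: converged
            return t
        t = new_t
    return t

# Two well-separated intensity groups converge in a few iterations.
grey = [10, 12, 14, 200, 210, 220]
print(optimal_threshold(grey))
# -> 111.0
```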


This procedure is repeated until convergence, i.e. until a step c where T_c = T_{c-1}. The initial threshold T_0 is set to 128, which is the median grey level. When convergence is reached, the image is thresholded at the value T_c. Every pixel with an intensity higher than T_c is set to 0 (body pixels) and the other pixels are set to 1 (non-body pixels).

2) Background removal: The air around the body (background) is removed using an idea from [18, 26]. Background pixels are identified as follows: they are non-body pixels connected to the borders of the image. Thus, every connected region of non-body pixels that touches the border is considered background and discarded. Problems with this background removal technique appear when one of the lungs touches the border of the image. This happens if the CT scan was cropped too close to the lungs, which is common in our teaching file. In this case, the lung that touches the border is removed as if it were part of the background, as can be seen in Figure 2.

Fig. 2. The right lung touching the border is removed with the background: (a) original image, (b) thresholding step, (c) background removal.

3) Cleaning: Once the background is removed, several non-body regions remain. Airways such as the trachea or the bronchi are sometimes found among these regions. Since the airways are empty cavities, the intensity of pixels in these areas is low. To remove these regions, areas with an average intensity lower than T_c/2 (T_c being the threshold computed previously) are searched for. Then, non-body regions smaller than 20 pixels are removed, which eliminates noise that could interfere with the rolling-ball in the following step. The airway removal can pose problems when parts of the lung have a very low density. If those parts were separated from the rest by the thresholding, they can be interpreted as airways (shown in Figure 3).

Fig. 3. Part of the left lung was removed with the airways: (a) original image, (b) after cleaning.

4) Rolling-ball operator: Rarely, holes can appear near the border of the parenchyma. The parenchyma can then be divided into several parts by the thresholding. To fill these holes and glue different parts of the same lung half together, a rolling-ball operator is applied to the non-body pixels [14, 23]. The rolling-ball operator is in fact a morphological closing of the region followed by hole filling. The structuring element is a disc with a radius of 2 pixels. This radius was chosen for its ability to glue enough parenchyma tissue of the same lung together without influencing other regions (e.g. the other lung). Despite the small disc size, very close lungs can be merged by the rolling-ball. These kinds of cases need to be managed in the separation step that follows.

5) Left/right lung identification and separation: Finally, the two lungs are identified and separated. If the number of connected components is higher than one, each region is attributed to the left or the right lung depending on whether its centre of mass is in the left or right half of the image. This way, cases of lungs that were cut into several parts can be handled. If there is only one connected component (the two lungs are connected), it is split into two regions. As we apply this segmentation for image retrieval, there is no need for a perfect separation. The region is simply cut vertically in the middle.

As we use a teaching file, some images were cropped to show only one lung containing the main pathology. In this case, only one connected component is identified and it has to stay intact. A condition was added for separation.
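The centre-of-mass attribution just described can be sketched as follows (toy data structures of our own choosing, not the authors' itk implementation):

```python
# Hedged sketch: attribute each connected region to the left or right lung
# depending on whether its centre of mass lies in the left or right image half.
def attribute_regions(regions, image_width):
    """regions: list of regions, each a list of (row, col) pixel coordinates."""
    left, right = [], []
    for region in regions:
        centre_col = sum(c for _, c in region) / len(region)
        (left if centre_col < image_width / 2 else right).append(region)
    return left, right

# Toy 10-pixel-wide slice with one region per side.
left_region = [(0, 1), (0, 2), (1, 1)]
right_region = [(0, 7), (1, 7), (1, 8)]
left, right = attribute_regions([left_region, right_region], image_width=10)
print(len(left), len(right))
# -> 1 1
```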
This rule is based on the shape of the bounding box: if the ratio of height to width is bigger than 0.8 (nearly square to vertical rectangle), the region is considered one lung and is not cut in two. Otherwise, the region is considered two merged lungs and they are separated.

C. Implementation with itk

Among itk's extremely useful features are its image iterators. The iterators allow traversing every pixel of an image, or a portion of an image, quickly to apply any treatment such as a pixel count, the average grey level of a region, etc. This was used frequently. Specific tools were employed for the five steps of the algorithm. The optimal thresholding is realised using a BinaryThresholdImageFilter at each step. Connected regions can be found by applying a NeighborhoodConnectedImageFilter on a seed point: for the background removal, every white pixel on the border of the image was used as a seed point. The rolling-ball operator was simulated with a closing operation followed by a hole-filling step. The closing operator is employed by applying two filters, BinaryDilateImageFilter and BinaryErodeImageFilter, using a BinaryBallStructuringElement with a radius of 2 pixels.

III. THE RETRIEVAL COMPONENT

A. casImage

The radiology teaching file that serves as a base for our system is called casImage (http://www.casimage.com/), an in-house development of the University Hospitals of Geneva [28]. Currently, more than 60,000 images are stored in the system. A database with 9,000
images is freely accessible on the web, compatible with RSNA MIRC (Medical Imaging Resource Centre, http://mirc.rsna.org/). Images can be added to the system directly from radiology workstations, making it well accepted among clinicians. Around 500 images are added per week. On the insertion of an image into the database the level/window settings are fixed. Images are stored in JPEG and thumbnails are created to be shown on screen. This means that we do not have the full resolution of grey scales available. Our algorithm was created to work with either the DICOM images or the JPEG images from casImage.

B. medGIFT

The GNU Image Finding Tool (GIFT) is the outcome of the Viper project of the University of Geneva [29]. medGIFT is an adaptation of GIFT [3], adding a new interface and experimenting with feature sets. The number of grey levels and the importance of texture are different in medical images than in photographs. GIFT uses techniques well known from text retrieval for the indexing of images, such as frequency-based feature weights, inverted files for efficient data access and simple relevance feedback. The four feature groups currently used are:
• a global colour and grey level histogram;
• local colour blocks at different scales and various fixed regions;
• a global Gabor filter response histogram using several scales and directions;
• local Gabor blocks in fixed areas of the image in several scales and directions.

IV. RESULTS

A. Segmentation results

To evaluate the accuracy of the algorithm, a collection of 153 lung CTs was extracted from the casImage database and segmented. Then, each input image and the resulting segmentation was evaluated for quality. To make this task easier, a simple interface was built (Figure 4). This interface presents each image and its segmentation. It allows the user to classify the segmentation quality:
(a) The segmentation is good, all lung tissue is taken.
(b) A small, insignificant part of the lungs is missing.
(c) Larger parts of the lungs are missed or fractured.
(d) The segmentation delivers bad results.
(e) The segmentation is bad because the CT image is not at all standard.

Fig. 4. Interface for the evaluation of the segmentation quality.

Examples of the first four classes of regular images are shown in Figure 5.

Fig. 5. First four classes: (a) good segmentation, (b) small parts missing, (c) large parts missing or fractured, (d) segmentation failed (right lung missing in this case).

Twelve images are in class 5. Five of them are shown in Figure 6. The logo of the hospital figured on one CT scan, 6(a). Unfortunately, it is a black box and touches both the border and the right lung, which is removed with the background. 6(b) shows a cropped scan. The right lung touches the border and is removed with the background. Such a case does not appear in clinical routine. Figure 6(c) was annotated with coloured flashes, creating an artifact on the segmented lung. Figure 6(d) shows an image taken sideways. Our method is not able to determine the side of each lung; both lungs were classified left because both mass centres are on the left side. In Figure 6(e), background parts were too big and were considered lung parts. These twelve non-standard images were removed from the collection for further evaluation.

Of the 141 remaining images, 59 were well segmented (Figure 7), small parts were missing in 57 images, big holes were visible in 32 images and 3 images were badly segmented.

For our goal of image indexing and retrieval, the segmentation does not need to be perfect. If small parts are missing, feature
extraction will not be extremely different. It is also not important whether the two lungs were perfectly separated. For these reasons, the first two categories of well and quite well segmented images were considered to be of sufficient quality; the rest was considered unsatisfying segmentation. With these criteria, 116 images were sufficiently well segmented and 35 images delivered unsatisfying results. This leads to a rate of 82.3% for sufficient segmentation.

Fig. 6. Some unusual cases.

Fig. 7. Satisfying segmentation.

B. Retrieval results

So far, only a prototype of the retrieval engine is used, which analyses the entire image with grey level and texture measures as explained in Section III-B. The mode colour was taken to fill the area around the parenchyma so that the Gabor filter responses are not altered by a large grey level change at the borders. Diagnoses of the images are shown as text under the images to allow for a quick visual evaluation. No quantitative evaluation of retrieval performance has been performed yet. Figure 8 shows our visual retrieval interface with a query result using a single input image. User feedback can be given with several relevant and irrelevant images to refine the search.

Fig. 8. Example of a query result using medGIFT.

V. CONCLUSIONS AND FUTURE WORK

This article shows a simple segmentation algorithm for lung CT images. The obtained segments can be used for content-based image retrieval as a diagnostic aid. An evaluation of the segmentation quality shows good results. First qualitative results for lung image retrieval show that the visual retrieval is much better than when taking into account the entire image. All tools used are based on OS programs and the source code can be obtained from the authors. The segmentation algorithm proves to be simple but effective for our purpose. Several
abnormal cases were included in the algorithm and allow for a reliable segmentation of the lung tissue and a separation from the surrounding background. This allows focusing the retrieval on the really important parts of the image.

Currently, our retrieval algorithm is not perfectly adapted to the obtained images. As the segmented lung parts are entirely indexed, the edge between the segmented regions and the background results in strong responses of the Gabor filters. This means that the form of the lung parts becomes important as well, whereas the actual tissue texture should be the most important. Besides concentrating the feature extraction on the parenchyma, the main future work is on validating the quality of the algorithm with clinical data, and especially with images using DICOM and the full range of grey levels. Currently, an annotated image database is being created for this evaluation.

REFERENCES

[1] A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, "Content-based image retrieval at the end of the early years," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1349-1380, 2000.
[2] M. Flickner, H. Sawhney, W. Niblack, J. Ashley, Q. Huang, B. Dom, M. Gorkani, J. Hafner, D. Lee, D. Petkovic, D. Steele, and P. Yanker, "Query by Image and Video Content: The QBIC system," IEEE Computer, vol. 28, no. 9, pp. 23-32, September 1995.
[3] H. Müller, A. Rosset, J.-P. Vallée, and A. Geissbuhler, "Integrating content-based visual access methods into a medical case database," in Proceedings of the Medical Informatics Europe Conference (MIE 2003), St. Malo, France, May 2003.
[4] H. D. Tagare, C. Jaffe, and J. Duncan, "Medical image databases: A content-based retrieval approach," Journal of the American Medical Informatics Association, vol. 4, no. 3, pp. 184-198, 1997.
[5] H. J. Lowe, I. Antipov, W. Hersh, and C. Arnott Smith, "Towards knowledge-based retrieval of medical images. The role of semantic indexing, image content representation and knowledge-based retrieval," in Proceedings of the Annual Symposium of the American Society for Medical Informatics (AMIA), Nashville, TN, USA, October 1998, pp. 882-886.
[6] H. Müller, N. Michoux, D. Bandon, and A. Geissbuhler, "A review of content-based image retrieval systems in medicine - clinical benefits and future directions," International Journal of Medical Informatics, vol. 73, pp. 1-23, 2004.
[7] C. Le Bozec, E. Zapletal, M.-C. Jaulent, D. Heudes, and P. Degoulet, "Towards content-based image retrieval in HIS-integrated PACS," in Proceedings of the Annual Symposium of the American Society for Medical Informatics (AMIA), Los Angeles, CA, USA, November 2000, pp. 477-481.
[8] H. Abe, H. MacMahon, R. Engelmann, Q. Li, J. Shiraishi, S. Katsuragawa, M. Aoyama, T. Ishida, K. Ashizawa, C. E. Metz, and K. Doi, "Computer-aided diagnosis in chest radiography: Results of large-scale observer tests at the 1996-2001 RSNA scientific assemblies," RadioGraphics, vol. 23, no. 1, pp. 255-265, 2003.
[9] C.-T. Liu, P.-L. Tai, A. Y.-J. Chen, C.-H. Peng, T. Lee, and J.-S. Wang, "A content-based CT lung retrieval system for assisting differential diagnosis images collection," in Proceedings of the second International Conference on Multimedia and Exposition (ICME'2001), Tokyo, Japan: IEEE Computer Society, August 2001, pp. 241-244.
[10] C.-R. Shyu, C. E. Brodley, A. C. Kak, A. Kosaka, A. M. Aisen, and L. S. Broderick, "ASSERT: A physician-in-the-loop content-based retrieval system for HRCT image databases," Computer Vision and Image Understanding (special issue on content-based access for image and video libraries), vol. 75, no. 1/2, pp. 111-132, July/August 1999.
[11] C. Brodley, A. Kak, C. Shyu, J. Dy, L. Broderick, and A. M. Aisen, "Content-based retrieval from medical image databases: A synergy of human interaction, machine learning and computer vision," in Proceedings of the 10th National Conference on Artificial Intelligence, Orlando, FL, USA, 1999, pp. 760-767.
[12] A. M. Aisen, L. S. Broderick, H. Winer-Muram, C. E. Brodley, A. C. Kak, C. Pavlopoulou, J. Dy, C.-R. Shyu, and A. Marchiori, "Automated storage and retrieval of thin-section CT images to assist diagnosis: System description and preliminary assessment," Radiology, vol. 228, pp. 265-270, 2003.
[13] L. Li, Y. Zheng, M. Kallergi, and R. A. Clark, "Improved method for automatic identification of lung regions on chest radiographs," Academic Radiology, vol. 8, pp. 629-638, 2001.
[14] S. G. Armato III, M. L. Giger, C. J. Moran, J. T. Blackburn, K. Doi, and H. MacMahon, "Computerized detection of pulmonary nodules on CT scans," Imaging & Therapeutic Technology, vol. 19, no. 5, pp. 1303-1311, 1999.
[15] J. Everhart, M. Cannon, J. Newell, and D. Lynch, "Image segmentation applied to CT examination of lymphangioleiomyomatosis (LAM)," SPIE, vol. 2167, pp. 87-95, 1994.
[16] J. Mo Goo, J. Won Lee, H. Ju Lee, S. Kim, J. Hyo Kim, and J.-G. Im, "Automated lung nodule detection at low-dose CT: Preliminary experience," Korean Journal of Radiology, vol. 4, no. 4, pp. 211-216, 2003.
[17] M. N. Gurcan, B. Sahiner, N. Petrick, H.-P. Chan, E. A. Kazerooni, P. N. Cascade, and L. Hadjiiski, "Lung nodule detection on thoracic computed tomography images: Preliminary evaluation of a computer-aided diagnosis system," Medical Physics, vol. 29, no. 1, pp. 2552-2558, 2002.
[18] S. Hu, E. A. Hoffman, and J. M. Reinhardt, "Automatic lung segmentation for accurate quantitation of volumetric X-ray CT images," IEEE Transactions on Medical Imaging, vol. 20, no. 6, pp. 490-498, 2001.
[19] J. P. Ko and M. Betke, "Chest CT: Automated nodule detection and assessment of change over time - preliminary experience," Radiology, vol. 218, no. 1, pp. 267-273, 2003.
[20] J. K. Leader, B. Zheng, R. M. Rogers, F. C. Sciurba, A. Perez, B. E. Chapman, S. Patel, C. R. Fuhrman, and D. Gur, "Automated lung segmentation in X-ray computed tomography: Development and evaluation of a heuristic threshold-based scheme," Academic Radiology, vol. 10, pp. 1224-1236, 2003.
[21] M. S. Brown, J. G. Goldin, M. F. McNitt-Gray, L. E. Greaser, A. Sapra, K.-T. Li, J. W. Sayre, K. Martin, and D. R. Aberle, "Knowledge-based segmentation of thoracic computed tomography images for assessment of split lung function," Medical Physics, vol. 27, no. 3, pp. 592-598, 2000.
[22] M. S. Brown, M. F. McNitt-Gray, N. J. Mankovich, J. G. Goldin, J. Hiller, L. S. Wilson, and D. R. Aberle, "Method for segmenting chest CT image data using an anatomical model: Preliminary results," IEEE Transactions on Medical Imaging, vol. 16, no. 6, pp. 828-839, 1997.
[23] A. C. Silva, P. C. P. Carvalho, and R. A. Nunes, "Segmentation and reconstruction of the pulmonary parenchyma," Vision and Graphics Laboratory, Institute of Pure and Applied Mathematics, Rio de Janeiro, Tech. Rep., 2002.
[24] A. El-Baz, A. A. Farag, R. Falk, and R. La Rocca, "Detection, visualization, and identification of lung abnormalities in chest spiral CT scans: Phase I," in Proc. of the International Conf. on Biomedical Engineering, Cairo, Egypt, 2002.
[25] G. J. Kemerink, R. J. S. Lamers, B. J. Pellis, K. H. H., and J. M. A. van Engelshoven, "On segmentation of lung parenchyma in quantitative computed tomography of the lung," Medical Physics, vol. 25, no. 12, pp. 2432-2439, 1998.
[26] B. Zhao, G. Gamsu, M. S. Ginsberg, L. Jiang, and L. H. Schwartz, "Automatic detection of small lung nodules on CT utilizing a local density maximum algorithm," Journal of Applied Clinical Medical Physics, vol. 4, no. 3, pp. 248-260, 2003.
[27] J. Cates, A. Lefohn, and R. Whitaker, "GIST: An interactive GPU-based level-set segmentation tool for 3D medical images," Medical Image Analysis, vol. 8, 2004, to appear.
[28] A. Rosset, H. Müller, M. Martins, N. Dfouni, J.-P. Vallée, and O. Ratib, "Casimage project - a digital teaching files authoring environment," Journal of Thoracic Imaging, vol. 19, no. 2, pp. 1-6, 2004.
[29] D. M. Squire, W. Müller, H. Müller, and T. Pun, "Content-based query of image databases: inspirations from text retrieval," Pattern Recognition Letters (Selected Papers from the 11th Scandinavian Conference on Image Analysis SCIA '99), vol. 21, no. 13-14, pp. 1193-1198, 2000, B. K. Ersboll, P. Johansen, Eds.
Proceedings of International Conference on <strong>Medical</strong> <strong>Imaging</strong> <strong>and</strong> <strong>Telemedicine</strong> (MIT 2005)August 16-19, Wuyi Mountain, ChinaAugmented <strong>Medical</strong> Image Managementfor Integrated Healthcare SolutionsThomas M. Lehmann 1 , Henning Müller 2 , Qi Tian 3 , Nikolas P. Galatsanos 4 , Daniel Mlynek 51 Department of <strong>Medical</strong> Informatics, RWTH Aachen University of Technology, Aachen, Germany2 <strong>Medical</strong> Informatics Service, University <strong>and</strong> University Hospitals of Geneva, Geneva, Switzerl<strong>and</strong>3 Institute for Infocomm Research, Agency for Science, Technology <strong>and</strong> Research (A*Star), Singapore4 Biomedical Research Institute, Foundation for Research & Technology – HELLAS, Ioannina, Greece5 Signal Processing Laboratory 3, École Politechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerl<strong>and</strong>Abstract— In today’s medicine, several diagnostic tasks areparticularly difficult <strong>and</strong> are plagued by high inter- <strong>and</strong> intraobservervariability. For instance, reading mammographs inmodern Western medicine or tongue images (photographs orvideos of the stuck-out tongue) in traditional Chinese medicinerequire particular skills <strong>and</strong> long experience. For these tasks,image-based access to an archive with cases of known pathologywould be a beneficial aid for both diagnosis <strong>and</strong> medicaleducation. Thus, retrieving information stored in electronichealthcare records (EHR) based also on image patterns ratherthan only on alphanumerical indexing is required. Even modernEHR archives do not yet offer such augmented functionality.In this paper, we present a novel concept for an augmentedmedical information management (AMIM). 
Key functionalitiesof the concept are content-based medical image retrieval basedon similarity of both the global image <strong>and</strong> local regions,capturing human expert knowledge with visual similaritymetrics, retrieval from EHR archives <strong>and</strong> the Internet based onboth visual <strong>and</strong> textual patterns, <strong>and</strong> a cheap <strong>and</strong> effectiveparadigm for home-based telemedicine. The latter functionalityis based on a smart self-calibrating h<strong>and</strong>-held imaging device forst<strong>and</strong>ardized self-acquisition of tongue images. This idea isdirectly extendable to other diagnosis scenarios for wide-spreaddiseases, for example, screening for skin tumors. It will helpsignificantly reduce the costs of healthcare systems <strong>and</strong> inparticular improve the quality of life of (senior) citizens.Index Terms—Information System, Information Retrieval,Image Processing, Image Retrieval, <strong>Telemedicine</strong>, ElectronicHealthcare RecordMI. INTRODUCTIONedical imaging plays vital roles for many health-relatedapplications such as medical diagnostics, drugevaluation, medical research, training <strong>and</strong> teaching.Due to the rapid progress in the technologies for obtaining <strong>and</strong>storing digital images for diagnostic purposes in medicine(from photography over digital radiography to functional MRI<strong>and</strong> PET) <strong>and</strong> the rapid expansion of computer networks <strong>and</strong>the Internet, medical image databases for training <strong>and</strong>supporting diagnostics have become technologically feasible.However, the rapid expansion in these technologies has notbeen accompanied yet by a similar development in thetechnologies for image management. 
If the Digital <strong>Imaging</strong><strong>and</strong> Communication in Medicine (DICOM) protocol is used,any retrieval is based solely on textual information hosted inthe (sometimes erroneous [1]) DICOM-headers. Thus, theinformation that is contained in such databases remainsunexploited to a large extent [2].Content-based image retrieval (CBIR) by itself has beenone on the most active research areas in the field of computervision, image processing <strong>and</strong> data mining over the last 10years [2,3]. Many algorithms, architectures <strong>and</strong> systems havebeen studied <strong>and</strong> developed to help search <strong>and</strong> browse throughlarge multimedia databases based on content. Because of theimportance of medical imaging, recently there is increasinginterest by informatics researchers <strong>and</strong> physicians to developCBIR algorithms, as well as architectures for medical imageapplications. In addition to efficient <strong>and</strong> convenientrepositories of medical images, these CBIR systems can bealso used as aids for medical diagnostics <strong>and</strong> training ofphysicians [3,4,5].However, there are scientific <strong>and</strong> technologicalshortcomings of existing approaches that have to be resolvedbefore a CBIR methodology can be implemented successfullyinto medical applications. In particular:• A general framework for CBIR is required which can alsosupport retrieval based on similarity of local features.This is a novel functionality because the CBIR systems todate use only global features. 
Thus, they cannot support the retrieval of similar images based on, for example, a specific organ or an anatomical detail which is contained in a segment of an image.

• Similarity measures between images are required that capture the subjective perception of a trained specialist. This will close the so-called semantic gap [2,3] between the meaning of medical images and their computational representation based on pixels. Until now, similarity in


Proceedings of International Conference on Medical Imaging and Telemedicine (MIT 2005), August 16-19, Wuyi Mountain, China

CBIR systems has been based on objective distance metrics between basic numerical features of the images.

• Tools for mining medical information from the image archive as well as the Internet are required that combine both textual and visual features. This is also a technologically novel idea since at present most information mining tools use only textual information.

In this paper, we propose a solution to the above-mentioned problems and show how these methods can be combined into a prototype for an efficient homecare telemedicine-based system.

II. METHODS

A. Local Features

One of the most difficult problems in content-based image retrieval (CBIR), for medical applications and in general, is to find relevant images based on the objects contained in an image. In other words, instead of finding images that are globally similar, one wants to find images that contain similar segments. The difficulty of this task stems from the fact that segmenting an image into regions that have some physical meaning is one of the most difficult problems in image processing, since it contains a strong element of subjectivity. Indeed, to this date most medical CBIR systems are based only on global features or a fixed partitioning of the image into regions [3,6].

Nevertheless, the ability to retrieve based on objects contained in an image is a very important and extremely useful capability for medical CBIR systems [7,8]. In almost all cases, diagnostics is based on local regions of the image, and physicians are particularly interested in certain conditions of a body part, such as an abnormal size or texture. For this purpose, we have developed a new flexible hierarchical image representation that is based on a multi-scale image segmentation algorithm [9].
This segmentation approach provides a hierarchical data structure in which differently sized segments of the image, containing different parts of the human anatomy, can be found at different scales (Fig. 1). Searching this data structure using appropriate distance metrics [10] provides the capability to find images that contain similar parts of the human anatomy. Thus, it solves in an elegant and reliable manner the very important problem for medical CBIR of retrieving images that are not only globally similar but also contain similar regions.

Fig. 1. Hierarchical image analysis of a hand-wrist radiograph: (a) hand radiograph; (b) regions from various layers of the representation are color-coded separately (and may overlap); (c) for each bone (localized region), an optimal representation is determined; (d) corresponding blob representation; and (e) hierarchical tree structure for the various layers of representation.

B. A Similarity Metric Capturing Human Perception

There are diagnostic tasks using medical images, such as mammography in modern western medicine (MWM) and diagnosis from tongue images (Fig. 2) in traditional Chinese medicine (TCM), that require great expertise and skill [11,12,13,14]. For such tasks, the availability of a database with already diagnosed cases, along with a retrieval system that captures the perception of a human expert, would be enormously beneficial as an aid both for diagnostics and training. One could retrieve "similar" cases and thus gain valuable insight and experience about the pathology of the unknown case at hand [15] or the outcome of treatment. Machine learning methods are used to capture the notion of similarity as perceived by experts.
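The hierarchical region search of Section II-A can be illustrated with a minimal sketch (not the authors' implementation): a region tree is searched recursively for the node whose feature vector is closest to the query. The region names, feature vectors, and tree layout below are invented for the example.

```python
import numpy as np

def best_matching_region(query_feat, node, dist=None):
    """Recursively search a hierarchical region tree for the region whose
    feature vector is closest to the query features (Euclidean distance)."""
    if dist is None:
        dist = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
    best = (dist(query_feat, node["features"]), node)
    for child in node.get("children", []):
        cand = best_matching_region(query_feat, child, dist)
        if cand[0] < best[0]:
            best = cand
    return best

# Toy hierarchy: the whole image at the root, anatomical sub-regions below.
tree = {
    "name": "hand radiograph", "features": [0.5, 0.5],
    "children": [
        {"name": "carpal bones", "features": [0.2, 0.8], "children": []},
        {"name": "metacarpals", "features": [0.9, 0.1], "children": []},
    ],
}

d, region = best_matching_region([0.25, 0.75], tree)
print(region["name"])  # nearest region in the hierarchy, not only the root
```

Because every layer of the hierarchy is visited, a query can match a localized region (e.g. a single bone) even when the global image features differ.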
For this purpose, regression algorithms are developed that learn from examples of the similarity between pairs of images as perceived by specialists. Fig. 3 depicts the architecture of such a retrieval system. The similarity distance is represented as f(u,v), where u and v denote the features of the query and the database image, respectively. Regression methodologies based on (i) neural networks, (ii) relevance vector machines, and (iii) support vector machines can be used to determine the f(.,.) that best captures the expert's perception.

Relevance feedback is a post-query process to refine the search by using positive and/or negative indications from the user, learning the relevance of retrieved images. For medical applications, two approaches for relevance feedback must be considered. In the first approach, the impact of the feedback image on the similarity between the query image and a database entry is explicitly weighted. An alternative method is based on incremental learning strategies in order to adjust the learning machine using the feedback information.

C. Web Mining for Medical Information

In addition to the image repository of a hospital or clinic, the World Wide Web contains a wealth of information that can be valuable to many medical applications in diagnostics, research, and education. However, currently all indexing and searching of web resources is based solely on the textual data that is often available with images. With respect to medical images, only a small fraction of the databases is available and directly searchable via browser-based interfaces.
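The pair-wise regression behind f(u,v) in Section II-B can be sketched as follows. This is only an illustration under stated assumptions: synthetic scores stand in for the expert judgments, and a plain least-squares model substitutes for the neural-network/RVM/SVM regressors named above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: feature vectors of 50 query/database image
# pairs, plus a synthetic stand-in for expert-assigned similarity scores.
U = rng.random((50, 4))                 # features u of the query images
V = rng.random((50, 4))                 # features v of the database images
y = np.exp(-np.abs(U - V).sum(axis=1))  # stand-in for expert judgments

# Represent each pair by its element-wise absolute feature difference and
# fit a least-squares linear model f(u, v) = w . |u - v| + b to the scores.
X = np.hstack([np.abs(U - V), np.ones((50, 1))])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def f(u, v):
    """Learned similarity between a query image u and a database image v."""
    d = np.abs(np.asarray(u) - np.asarray(v))
    return float(np.append(d, 1.0) @ w)

print(f(U[0], V[0]))
```

At query time, f(u,v) is evaluated against every database entry and the images with the highest scores are returned, exactly as in the architecture of Fig. 3.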
For example, Google delivers only 130 image results for the term "lung CT", whereas collections such as CasImage (http://www.casimage.com/) and the Health Education Assets Library (HEAL, http://www.healcentral.org/) already contain at least 1,000 lung CTs. No retrieval based on visual features is currently possible, and so the wealth of medical image information is so far inaccessible to medical practitioners, or limited to one or two databases that can be searched by hand


and with keywords. The Health On the Net Foundation (HON, http://www.hon.ch/) tries to create a reliable source for medical information search on the web, but its multimedia repository currently contains only 6,800 documents, which can be searched only by text. The Radiological Society of North America (RSNA) created the MIRC standard (Medical Imaging Resource Center, http://mirc.rsna.org) to unify the interfaces to medical radiological teaching sources on the Internet. Still, no visual search is possible in this standard.

Fig. 2. Tongue images acquired with a color plate for calibration (left [17]) and without (right [14]).

According to our approach for augmented medical image management (AMIM), available medical web sources for images are indexed in textual and visual form in collaboration with HON, to make the wealth of knowledge available to practitioners for teaching, research and also diagnostic aid. Analyzing visual and textual features will allow us to mine the large amount of accessible information to find co-occurrences of visual features and textual keywords. This can be used for semi-automatic or even automatic annotation of non-annotated images. We think that many semantic connections will be found and new ways of browsing will be established through the exploitation of visual features for retrieval.
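The co-occurrence indexing of textual keywords and visual features can be illustrated with a toy inverted file over mixed "terms". Everything here is invented for the sketch: the image IDs, keywords, and visual-word labels (vw_*) are hypothetical, and a simple shared-term count stands in for proper frequency-based weighting.

```python
from collections import defaultdict

# Hypothetical mixed documents: each image carries text keywords and
# quantized visual features ("visual words"), indexed in one inverted file.
corpus = {
    "img1": {"terms": ["lung", "ct", "vw_12", "vw_40"]},
    "img2": {"terms": ["tongue", "tcm", "vw_7", "vw_40"]},
    "img3": {"terms": ["lung", "xray", "vw_12", "vw_9"]},
}

inverted = defaultdict(set)
for doc_id, doc in corpus.items():
    for term in doc["terms"]:
        inverted[term].add(doc_id)

def search(query_terms):
    """Rank images by the number of shared textual or visual terms."""
    scores = defaultdict(int)
    for term in query_terms:
        for doc_id in inverted.get(term, ()):
            scores[doc_id] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search(["lung", "ct", "vw_12"]))  # img1 ranks first (3 shared terms)
```

Because textual and visual terms live in the same index, a query can mix free text with the visual words of an example image, which is the essence of the combined retrieval described here.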
For teaching purposes, even images with a different diagnosis but a similar visual appearance are of great importance.

For the combined retrieval methodology, visual features that are similar in spirit to those used in text retrieval (frequency-based feature weights, inverted files) are examined. A suitable visual feature space is created for this, relying on global but also local features. Taking into account the way people interact with the system, through user log files, allows us to optimize performance and discover the users' information needs in terms of the combinations of the various groups of features [16].

D. An Efficient Home-based Telemedicine Paradigm

Integrating the three AMIM components for content-based access to medical image archives will improve health care and medicine. Further improvement is expected if the image acquisition is performed by the patient at home (Fig. 4). While home-based mammography will not be available in the near future, the technology already exists to develop home-based photography and video imaging. By means of a smart self-calibrating hand-held imaging device for standardized self-acquisition of tongue images, one tongue image per day or week can be captured manually and transferred automatically to the TCM center, where it is analyzed and used to remind the patient of a possible check-up. This idea is directly extendable to other diagnostic scenarios for widespread diseases, for example, risk patients for skin tumors.

Fig. 3. AMIM image retrieval framework with relevance feedback. The similarity metric captures the perception of a human expert.

Accordingly, the objective is to develop a new generic, integrated, modular and hierarchical vision system that can be configured for executing high-performance image and video acquisition tasks.
Inspection stripe units, made of illumination laser, camera, and pre-processing elements, can be composed for parallel operation, offering great flexibility by means of a rapid application development tool. Such heterogeneous components, which traditionally require time-consuming calibration and adaptation tasks, are integrated into a unified system and software environment. The offered quality-control capability will enhance the handling of defects and the associated operations.

However, it is still a technical challenge to design an innovative high-resolution image processing system achieving the quality standards required for such applications. This objective requires innovations, advances and breakthroughs to the state of the art in this field. Nevertheless, in the near future such devices might easily be integrated in small cell phones.

III. DISCUSSION

Our concept of augmented medical image management will have a high impact in a number of healthcare-related areas such as diagnostics, teaching, research, and remote healthcare provision.

A. Diagnostics

Firstly and most importantly, the diagnostic ability of physicians who have access to the proposed system stands to improve. In the domains of evidence-based medicine, case-based reasoning, and computer-aided diagnosis, it is essential for a system to supply relevant/similar cases for comparison. In such a constellation, the AMIM system would play the role of a second opinion. It can accommodate this task in a number of scenarios. In one of them, a physician could use a textual search on the annotated images in order to retrieve images with the same diagnosis, gaining experience on the visual appearance of specific diseases with different imaging modalities.
In another scenario, for the special case of mammography (MWM) and diagnosis based on tongue


images (TCM), one could use the image under observation as a query and retrieve "similar"-looking (according to a specialist's opinion) annotated images, and thus use the available system as "a second opinion" for the specific case. In yet another scenario, the physician could define regions of the image which in his opinion contain the visual features that best capture the suspected pathology, and find annotated images that contain similar-looking regions.

However, another idea is the comparison of the distance of a new case to the existing cases. Here, the dissimilarity, as opposed to the similarity, of an image to known cases can be used to gain knowledge. This is more natural compared to the normal workflow in medicine, where the first requirement is to find out whether the case is pathologic or not. Dissimilarity could be used by highlighting the regions in the image with the strongest dissimilarity. Such a technique can help to find abnormal regions that might otherwise be missed. A combination of the two approaches is also possible, where the first request is whether the image contains abnormalities. If it does, a second query to find similar cases is performed on another image database containing the pathologic cases. This directly supports computer-aided diagnosis, e.g. for tumor staging.

B. Research

Furthermore, research will essentially benefit from the proposed system. AMIM technology will provide researchers with more options for the choice of cases to include in research and studies.
More specifically, it will allow researchers

• to find relevant images based on both local and global visual features,
• to use both text-based and visual features, and
• to include similar images based on the perception of trained experts who are not physically present.

While the latter item has been demonstrated for the special cases of mammography and tongue images, we conjecture that by including visual features directly in medical studies, new correlations between the visual appearance of a case and its diagnosis or textual description will be found. Visual data can also be mined to find changes or interesting patterns, which can lead to the discovery of new knowledge by combining the various knowledge sources.

C. Teaching

The field of medical teaching will also benefit greatly from our AMIM approach. Here, instructors can use medical image repositories to search for interesting cases to present to their students. These cases can be chosen not only based on diagnosis or anatomical region but also on visual similarity. Cases with different diagnoses can also be presented to augment the educational experience of the students. Indeed, AMIM increases the routes to accessing the "right data" and thus helps students retrieve the relevant cases. The field of Internet-based teaching also stands to benefit remarkably, since most of the technologies can be integrated in self-guided eLearning tasks that can be used without a teacher.

D. Remote Healthcare

In remote healthcare, we envision that by the year 2015 a broadband Internet connection will have become the norm for most families around the world. Mrs. Jane, a 70-year-old lady, is staying alone at home, constantly connected with her children a few hundred miles away. She is also regularly monitored by remote doctors through her tongue images, captured weekly with a small device located in her bedroom or even a hand-phone.
The tongue images are sent to remote medical centers, automatically pre-checked by computers to see whether everything looks fine, and manually reviewed by physicians in any case of suspicion. If required, Mrs. Jane will be advised to go to the closest clinic for a further check-up.

With the new wireless transmission technologies, the need and the market for high-resolution portable imaging systems are steadily increasing. Although very high-resolution linear CCD cameras exist on the market, the very large bandwidth required between camera and processing memory and the number of pixels generated put high constraints on the communication bandwidth, on the frame memory hardware and on the processing power required to analyze the received images. A classical state-of-the-art approach results in a very costly and bulky system, probably composed of several units in parallel to achieve the required data bandwidth and processing speed. Such systems would easily be too complex and costly to be attractive for customers. A different approach needs to be envisaged. The AMIM system, which is based on CMOS random-access imaging, is a step in the direction of improving portable imaging systems in the context of a healthcare application.

We envision that in the near future this type of non-traditional medical imaging can play a significant role in the regular monitoring of patients' health status by computers. Furthermore, home-based healthcare solutions will be integrated for specific applications such as skin tumors and many more.

E. Non-Medical Applications

During the last few years there has been tremendous progress in the technologies of medical imaging. New imaging modalities have been developed, as have new techniques for storing and transmitting images.
However, the progress in technologies for managing this information, especially based on content, has not been as rapid. As a result, the wealth of information that is contained in image repositories is not completely exploited. Thus, the problem of retrieval based on content is considered the "holy grail" by researchers in this area. This problem is not specific to medical images; in the area of CBIR in general it is a well-known problem. Most CBIR systems today are based on low-level features, while the desire is to retrieve images based more on "semantics". Our work addresses exactly this problem with an application in medical imaging. The methodologies we propose allow for better utilization of and access to the stored


information in repositories of (medical) images and in this context contribute to closing the "semantic gap".

Fig. 4. A new paradigm for efficient and cheap home-based telemedicine applied to tongue image diagnosis. Combining content-based access methods to medical image archives with a self-calibrating hand-held imaging device, medical image acquisition is performed at home. This will reduce the costs of the healthcare system and increase the quality of life, in particular for elderly citizens and in remote areas.

IV. CONCLUSION

In conclusion, a medical image database system was proposed in which advanced features are integrated, such as semantics from visual similarity based on experts' opinions, textual and visual retrieval, and the harvesting of relevant information from the web. Based on these methodologies, low-cost, pervasive image-based diagnosis systems can be developed for senior citizens in home care.

More specifically, AMIM stands to benefit, socially and economically, both poor and less developed but also rich and well-developed societies. Socially, the field of medicine and healthcare provision benefits most. Capturing similarity as perceived by a medical specialist is the "holy grail" of the field of artificial intelligence as applied to medicine, since it endows a machine with the specialist's knowledge.
Thus, it both greatly increases the availability and simultaneously reduces the cost of the services that are provided by the specialist. Information that is inherently stored in medical images and the attached metadata is extracted and made available by integrating it into the routine of radiologists.

REFERENCES

[1] Güld MO, Kohnen M, Keysers D, Schubert H, Wein B, Bredno J, Lehmann TM: Quality of DICOM header information for image categorization. Proceedings SPIE 2002; 4685(39): 280-287.
[2] Smeulders AWM, Worring M, Santini S, Gupta A, Jain R: Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000; 22(12): 1349-1380.
[3] Müller H, Michoux N, Bandon D, Geissbuhler A: A review of content-based image retrieval systems in medical applications – Clinical benefits and future directions. International Journal of Medical Informatics 2004; 73: 1-23.
[4] Lehmann TM, Schubert H, Keysers D, Kohnen M, Wein BB: The IRMA code for unique classification of medical images. Proceedings SPIE 2003; 5033: 109-117.
[5] Armato SG III, et al.: Lung Image Database Consortium – Developing a resource for the medical imaging research community. Radiology 2004; 232: 739-748.
[6] Tang LHY, Hanka R, Ip HHS: A review of intelligent content-based indexing and browsing of medical images. Health Informatics Journal 1999; 5: 40-49.
[7] Tagare HD, Jaffe CC, Duncan J: Medical image databases – A content-based retrieval approach. Journal of the American Medical Informatics Association 1997; 4: 184-198.
[8] Lehmann TM, Güld MO, Thies C, Fischer B, Spitzer K, Keysers D, Ney H, Kohnen M, Schubert H, Wein BB: Content-based image retrieval in medical applications. Methods of Information in Medicine 2004; 43(4): 354-361.
[9] Lehmann TM, Beier D, Thies C, Seidl T: Segmentation of medical images combining local, regional, global, and hierarchical distances into a bottom-up region merging scheme. Proceedings SPIE 2005; 5747: in press.
[10] Fischer B, Thies C, Güld MO, Lehmann TM: Content-based retrieval of medical images by matching hierarchical attributed region adjacency graphs. Proceedings SPIE 2004; 5370(1): 598-606.
[11] Sickles EA: Mammographic features of 300 consecutive non-palpable breast cancers. American Journal of Roentgenology 1986; 146: 661-663.
[12] Mushlin AI, Kouides RW, Shapiro DE: Estimating the accuracy of screening mammography – A meta-analysis. American Journal of Preventive Medicine 1998; 14(2): 143-153.
[13] Kirschbaum B: Atlas of Chinese Tongue Diagnosis. Vol. 2. Seattle, WA: Eastland, 2003.
[14] Pang B, Zhang D, Li N, Wang K: Computerized tongue diagnosis based on Bayesian networks. IEEE Transactions on Biomedical Engineering 2004; 51(10): 1803-1810.
[15] El Naqa I, Yang Y, Galatsanos N, Wernick M: A similarity learning approach to content-based image retrieval – Application to digital mammography. IEEE Transactions on Medical Imaging 2004; 23(10): 1233-1244.
[16] Müller H, Squire DM, Pun T: Learning from user behavior in image retrieval – Application of the market basket analysis. International Journal of Computer Vision 2004; 56: 65-77.
[17] Devitt M: Can tongue diagnosis predict colon cancer? Acupuncture Today 2002; 3(12).


Problem-Specific Detailed Structured Reporting: Tailoring Radiology Interpretation and Reporting to Clinical Need

Mansoor Fatehi MD – Iranian Society of Radiology – Radiology Informatics Committee – Chair
Dariush Saedi MD – Iran University of Medical Sciences – Department of Radiology
Saeed Zand MD – Shahid Beheshti University of Medical Sciences – Department of Urology

Purpose

In real medical practice, the information produced by imaging techniques is ultimately intended to be used in a clinical decision-making process. A clear understanding of the needs of the referring clinician, and providing appropriate data for the decision-making puzzle, will improve the teamwork and the final outcome of patient care. This communication is mainly performed through radiology reports.

Structured reporting (not just DICOM SR), as a modern trend in the reporting aspect of radiology practice, has had many advantages but some disadvantages for practicing radiologists. Tabular presentation of findings through a detailed list may let the radiologist describe the pathology in a more scientific manner and prevent any data from being overlooked. Structured tabular data may be inter-related to a decision-support knowledge base. A tabular database of patients' data may easily be used for future research.

However, detailed comprehensive reports may face two major problems: first, the time-consuming process of report generation, and second, lengthy, impractical reports not welcomed by busy clinicians.
So, any structured reporting system should be flexible enough to capture as much data as possible for interpretation but at the same time be reasonably concise.

We believe that not only the modality or technique of a study can be specifically defined for different clinical situations, but also that the interpretation and reporting should be tailored to clinical needs; although this happens in manual interpretation, it may benefit from digital facilities.

The purpose of our work is the development of interpretation and structured reporting protocols targeting specific clinical situations. We have tried to construct a comprehensive detailed reporting platform to be used selectively by the radiologist according to the clinical question. In this way, the structure and contents of the reports for a single imaging service, e.g. IVU, may be different, including only general data about the non-related findings but detailed data needed by the referring clinician. We hope to show some ways to overcome the limitations of modern reporting methods.

Methods


We included in this study all clinical genitourinary problems which are dependent on imaging findings. A set of questions was developed for each imaging request, covering the expectations of the referring clinician. Each of these clinical questions was analyzed in detail to formulate a list of required imaging data.

From the other side, any possible data available from the images were categorized regardless of clinical need. All pathologic findings were analyzed. The final list of so-called variables was grouped based on anatomic segments, but also considering added pathology. Differential diagnosis criteria and grading tables for clinical decision-making were reviewed in order to find the actual detail needed.

A database management system was developed to handle these tabular data. The system includes the list of imaging services, each grouped by possible clinical questions.

Results

The project has yielded software including two utilities:

1 - A comprehensive detailed structured reporting system capable of describing all possible pathologic variations
2 - Tree-structured menu-driven templates selecting parts of this comprehensive list:
⇒ Urologic problems
o Modalities / Techniques
• Request subtypes
o Specific reporting template

Conclusion

Although an experienced radiologist always adapts the interpretation to the pathology in question, radiologists trying to use structured reporting systems may face limitations in the flexibility of reports. This software may let radiologists be more flexible in detailed tabular reporting.

Our system should be used in real medical practice, and its efficiency for radiologists and clinicians should be evaluated through comparative studies (with and without the software).
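The clinical-question-driven selection from a comprehensive field list described above might look like the following sketch. All field names, clinical questions, and subsets here are illustrative, not taken from the authors' system.

```python
# A minimal sketch of question-driven structured reporting: one
# comprehensive field list per imaging service (here a hypothetical IVU
# template), with per-question subsets selected at report time.
IVU_FIELDS = {
    "renal_size": None, "renal_outline": None, "pelvicalyceal_system": None,
    "ureteric_course": None, "bladder_wall": None, "post_void_residue": None,
}

# Illustrative mapping from the referring clinician's question to the
# subset of fields that needs detailed reporting.
QUESTION_SUBSETS = {
    "haematuria": ["renal_outline", "pelvicalyceal_system", "bladder_wall"],
    "obstruction": ["pelvicalyceal_system", "ureteric_course"],
}

def report_template(clinical_question):
    """Return only the fields relevant to the clinical question, falling
    back to the full comprehensive list when no subset is defined."""
    keys = QUESTION_SUBSETS.get(clinical_question, list(IVU_FIELDS))
    return {k: IVU_FIELDS[k] for k in keys}

print(sorted(report_template("obstruction")))
```

The design keeps the comprehensive list as the single source of truth, so reports stay detailed where the question demands it yet reasonably concise everywhere else.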


Content based retrieval of PET images via localized anatomical texture measurements and mean activity levels

Stephen Batty, Xiaohong Gao, John Clark*, Tim Fryer*
Middlesex University, Trent Park, London, N14 4YZ, United Kingdom
*Wolfson Brain Imaging Centre, University of Cambridge, CB2 2QQ, United Kingdom
(e-mail: s.batty@mdx.ac.uk)

Abstract

Content-based image retrieval is a rapidly expanding area of research interest. The field of medical imaging, and in particular PET neurological imaging, has its own inherent and unique challenges in relation to content-based retrieval. Visual features must first be identified as appropriate by assessing their medical saliency and their utility as image comparators. Gabor texture measurements and the mean index ratio are extracted from PET neurological images; it is shown that when these features are combined, retrieval based upon patient diagnosis is reliably performed. The data originate from either normal volunteers or patients diagnosed with dementia.

1. INTRODUCTION

Positron Emission Tomography (PET) is a widely utilized functional imaging modality within medicine.
As such, the archiving (storage and retrieval) of PET image data is a fundamental concern within medical and computer science research [1, 2]. The design goal of a content-based retrieval system for PET images, and indeed medical images in general, is to facilitate and improve patient care in a number of different ways:

• Time saving: images are indexed automatically with no expert guidance, and procedure duration is determined solely by image processing time;
• Decreased cost: the expenditure required for expert archiving of images is eliminated;
• Reliability and error reduction: retrieval results are objective and based upon a defined and fixed set of parameters that are measured autonomously using algorithms;
• Validity: entire data sets are utilized with no redundancy; the complete images are quantified and all relevant data extracted.

Although content-based retrieval is being applied to the whole field of medical imaging, and indeed to imaging in general [3], not just medicine, any set of algorithms and extracted visual features is nevertheless determined by the visual and semantic characteristics of the specific image modality under examination.

PET images themselves are composed of greyscale intensity data, reveal nothing of anatomical structure, possess a relatively poor spatial resolution when compared to magnetic resonance images, and contain functional data that can be related to a number of different metabolic pathways [4].
These specific attributes of PET images, and the semantic significance that experts associate with their intensity data, are of primary importance and ensure that any set of developed algorithms will be unique to the modality.

Semantic information is obtained from PET images by experts in neurology and radiology. Various radio-tracers are used to label specific ligands and consequently to assess the related metabolic pathways within the human brain. A common focus of PET research is dementia. In this study, patients diagnosed with Alzheimer's disease, Mild Cognitive Impairment and Posterior Cortical Atrophy are evaluated, along with subjects classified as normal. Fluorodeoxyglucose (FDG), an analogue of glucose, is used as the radiotracer. FDG levels correspond to glucose concentrations, which in turn correlate with metabolic activity. This enables FDG to be employed when assessing dementia-related disorders, which exhibit a characteristic metabolic pattern of hypo-metabolism; this hypo-metabolism is more pronounced in certain anatomical areas than in others.

Activation homogeneity has also been shown to identify dementia [5]. Specifically, the coefficient of variance of pixel intensity values has been demonstrated to be significantly higher in patients diagnosed with dementia. A higher coefficient of variance can be said to equate to a "rougher texture", i.e.


there is a greater degree of variation in pixel intensity values.

g(x, y) = (1/(2π σ_x σ_y)) exp[ −(1/2)(x²/σ_x² + y²/σ_y²) + 2πjWx ]   (1)

2. METHODOLOGY

To be utilized within a content-based image retrieval system, extracted features must be comparable between all stored images with no input from a user or expert. For PET images, variations in scanner and (radioactive tracer) dosage render direct comparison of pixel intensity values redundant. A second potential source of discrepancy is the variation in brain morphology between individuals.

Visual characteristics must therefore be quantified into numerical representations in a consistent manner, and then be included in an N-dimensional feature space. For this reason the neurological images studied here are first spatially normalized, so as to ensure that any further processing results in consistent, and therefore comparable, measurements.

A. Spatial normalization of PET brain images

A widely used and accepted technique for performing spatial normalization is SPM [6]. All images used in this study are first spatially normalized to the MNI template using the SPM methodology.

B. Talairach and Tournoux atlas [7]

Spatially normalized images are then mapped to the Talairach and Tournoux brain atlas so as to enable distinct anatomical ROIs to be isolated. This process requires converting the Cartesian co-ordinates from MNI space to Talairach and Tournoux space, and vice versa. The Cartesian co-ordinates are then referenced against a database of anatomical regions associated with the Talairach and Tournoux atlas.
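The paper does not specify which MNI-to-Talairach mapping was used. A minimal sketch, assuming the commonly published mni2tal approximation (the coefficients below are that approximation's, not necessarily the authors' method), would be:

```python
import numpy as np

# Commonly published mni2tal coefficients (an assumption here, since the
# paper does not state which MNI -> Talairach mapping it used).
_ABOVE_AC = np.array([[0.9900, 0.0000, 0.0000],
                      [0.0000, 0.9688, 0.0460],
                      [0.0000, -0.0485, 0.9189]])
_BELOW_AC = np.array([[0.9900, 0.0000, 0.0000],
                      [0.0000, 0.9688, 0.0420],
                      [0.0000, -0.0485, 0.8390]])

def mni_to_tal(xyz):
    """Approximate conversion of an MNI co-ordinate (in mm) to
    Talairach and Tournoux space; different matrices are used above
    and below the anterior commissure plane (z = 0)."""
    xyz = np.asarray(xyz, dtype=float)
    M = _ABOVE_AC if xyz[2] >= 0 else _BELOW_AC
    return M @ xyz
```

The converted co-ordinate can then be looked up in a table of labelled Talairach regions, as the atlas database described above does.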
This enables the anatomical regions of interest to be segmented from the rest of the brain using their referenced Talairach and Tournoux Cartesian co-ordinates.

After segmenting distinct anatomical regions of interest from the original 3D PET images, their characterization and classification can occur. Two different measurements are employed in this paper: texture and mean index ratio.

C. Texture and Gabor filters

Gabor filters [8] have previously been utilized as texture quantifiers for non-medical images [9]. The original Gabor function is shown in equation (1), and its Fourier transform in equation (2):

G(u, v) = exp{ −(1/2)[ (u − W)²/σ_u² + v²/σ_v² ] }   (2)

where σ_u = 1/(2πσ_x) and σ_v = 1/(2πσ_y), W is the modulation frequency, σ_x and σ_y are the spreads of the Gaussian envelope along the two axes of the pixel array, and G(u, v) is the representation in Fourier space. A Gabor wavelet transform of each image is created by applying the Fourier transform of the complex conjugate of the Gabor function (a Gabor filter) to the Fourier transform of the PET image. Twenty-four different filters are used in total, representing six different orientations and four different scales. Application of these filters to a PET image results in 24 wavelet transform magnitudes, from which the texture feature is derived. The texture feature itself is a 48-element vector consisting of 24 means (µ) and 24 standard deviations (σ).

D. Mean index ratio [4]

As mentioned previously, absolute mean values are not, in this context, appropriate for comparing medical images. The mean index ratio of specific anatomical areas in relation to the whole brain is therefore utilized; this enables comparison between different images. It is defined as the mean intensity of the whole brain divided by the mean intensity of the specified anatomical structure.
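The texture extraction of section 2.C can be sketched as follows. The kernel size, Gaussian spreads and centre frequencies here are illustrative assumptions, not the paper's actual settings; only the structure (24 filters applied in the frequency domain, yielding 24 means and 24 standard deviations) follows the text.

```python
import numpy as np

def gabor_kernel(size, sigma_x, sigma_y, W, theta):
    """Sample the Gabor function of equation (1) on a size x size grid,
    with the co-ordinates rotated by theta for one filter orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (1.0 / (2 * np.pi * sigma_x * sigma_y)) * np.exp(
        -0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2)
        + 2j * np.pi * W * xr)

def texture_vector(image, scales=4, orientations=6):
    """48-element texture feature: the mean and standard deviation of the
    magnitude of the image filtered with each of the 24 Gabor filters,
    applied in the Fourier domain as described in section 2.C."""
    F = np.fft.fft2(image)
    feats = []
    for s in range(scales):
        for o in range(orientations):
            # illustrative parameter choices, not the paper's
            k = gabor_kernel(31, sigma_x=2.0 * (s + 1), sigma_y=2.0 * (s + 1),
                             W=0.3 / (s + 1), theta=o * np.pi / orientations)
            resp = np.fft.ifft2(F * np.conj(np.fft.fft2(k, image.shape)))
            mag = np.abs(resp)
            feats.extend([mag.mean(), mag.std()])
    return np.array(feats)
```

In practice each segmented anatomical region would be passed through `texture_vector` to obtain its 48-element descriptor.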
3. ANATOMICAL REGIONS OF INTEREST

Previous related research has shown dementia to manifest more prominently in certain anatomical structures than in others. In this research the following areas are studied: Parietal Cortex (Alzheimer's); Occipital Cortex (Alzheimer's and Posterior Cortical Atrophy); Posterior Cingulate (Mild Cognitive Impairment, i.e. very early Alzheimer's); Brodmann's Areas 29 and 30 (Alzheimer's); Hippocampus (Mild Cognitive Impairment) [10].

The mean intensity values of all these areas are measured so that they can be compared to the mean intensity value of the whole brain, and a mean index ratio formed. Gabor filters are also applied to each of the listed areas to quantify texture.

Anatomical regions are defined according to the Talairach and Tournoux atlas; some examples of these areas are presented below, on a brain image from


a patient diagnosed as normal. The region of interest is blacked out, and the slice number is shown at the bottom of each image. Images are scaled to fit the page.

Figure 1. Brodmann's Area 29.
Figure 2. Brodmann's Area 30.
Figure 3. Hippocampus.
Figure 4. Posterior cingulate.
Figure 5. Parietal lobe.

4. FEATURE REPRESENTATION AND DATABASE

The extracted visual characteristics are inserted into a MySQL database as a series of separate scalar values. Each anatomical region is itself represented by 49 unique identifiers: 48 of these represent the texture of the region, and a single value represents the mean index ratio, i.e. the mean intensity of the whole brain divided by the mean intensity of that region.

The database is queried using data both from a patient diagnosed as normal and from a patient diagnosed as suffering from Alzheimer's disease. The closest match in the feature space is returned first; this enables an evaluation of the validity and reliability of the outlined features, with regard to classification and content based retrieval, to be performed.

The separate visual features, texture and mean index ratio, are combined using the following formula:

TM_A = ( Σ_{N=1}^{24} T_{A,N} ) · ( Σ_{XYZ} WB_{XYZ} / Σ_{XYZ} A_{XYZ} )   (3)

where WB is the whole brain and A represents each anatomical structure; T is the raw texture data, and N indexes each of the orientations and scales of the Gabor filters used to quantify texture. TM is the combined feature.

Retrieval results are obtained using the Euclidean distance, along this TM vector, between images stored within the database.

A. Web interface

A preliminary user interface for the content based image retrieval has been developed using PHP and MySQL. Screenshots of this are presented in figures 6 and 7.

Figure 6. Visual interface of the preliminary CBIR PET system.
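The combination and ranking described above can be sketched as follows. The helper names are hypothetical, and the simple product form used for `combined_feature` is one reading of equation (3), not a confirmed reconstruction of the authors' exact computation.

```python
import numpy as np

def mean_index_ratio(whole_brain, region_mask):
    """Section 2.D: mean intensity of the whole brain divided by the
    mean intensity of the segmented anatomical structure."""
    return whole_brain.mean() / whole_brain[region_mask].mean()

def combined_feature(texture_48, ratio):
    """One reading of equation (3): scale the region's texture vector
    by its mean index ratio to obtain the TM vector."""
    return np.asarray(texture_48, dtype=float) * ratio

def retrieve(query, database):
    """Rank stored images by Euclidean distance between TM vectors,
    closest match first, as in tables 1 and 2."""
    names, vecs = zip(*database.items())
    d = np.linalg.norm(np.asarray(vecs) - np.asarray(query), axis=1)
    return [names[i] for i in np.argsort(d)]
```

A query then reduces to computing the TM vector for the query image and calling `retrieve` against the stored vectors.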


Figure 7. Textual interface of the preliminary CBIR PET system.

5. RETRIEVAL RESULTS

Results of retrieving images from the database using the TM combined feature vector, computed from the Occipital Lobe, Brodmann's Areas 29 and 30 and the Posterior Cingulate together, are presented below in tables 1 and 2.

Table 1. Normal query data, closest match first.
+--------+-----------+
| Image  | Diagnosis |
+--------+-----------+
| 990136 | Normal    |
| 000065 | Normal    |
| 990270 | PCA       |
| 990189 | PCA       |
| 990164 | PCA       |
| 990121 | Alzheimer |
| 990169 | Alzheimer |
| 990170 | Alzheimer |
| 990168 | Alzheimer |
| 990276 | MCI       |
+--------+-----------+

Table 2. Alzheimer's query data, closest match first.
+--------+-----------+
| Image  | Diagnosis |
+--------+-----------+
| 990121 | Alzheimer |
| 990169 | Alzheimer |
| 990170 | Alzheimer |
| 990168 | Alzheimer |
| 990276 | MCI       |
| 990270 | PCA       |
| 990189 | PCA       |
| 990164 | PCA       |
| 990136 | Normal    |
| 000065 | Normal    |
+--------+-----------+

The retrieval results show that the combined feature vector of texture and mean index ratio produced reliable and accurate classification of PET images based upon their diagnosis.

6. EVALUATION

Performance varied between anatomical regions, and it was found iteratively that the optimal regions were the Occipital Lobe, Brodmann's Areas 29 and 30 and the Posterior Cingulate together. When treated individually, the separate vectors' performance was reduced, and the coefficient of variance was also found to be unreliable. A possible cause of this is the fact that patients were diagnosed with the prodromal form of Alzheimer's disease, in which the characteristic metabolic activity pattern is less pronounced.

7. CONCLUSION

Visual features present within PET neurological images that are suitable for utilization within a content-based retrieval system have been identified, extracted and tested. When the two features, texture and mean index ratio, are combined, retrieval and classification based upon these vectors are consistent with the expected results, in accordance with the patients' previous diagnoses by a medical expert.

An anatomical database containing tuples representing each specific set of Cartesian co-ordinates, along with the associated anatomical labels, has been presented and used to segment the specific anatomical regions of interest from the rest of the brain.

A technique for measuring the texture of images based upon Gabor filters [9] has been implemented, and the results produced are used as indices within the content based image retrieval system. The indices for texture have been combined with the mean index ratio and content based retrieval performed. Texture and mean index ratio were also tested individually; the retrieval results from each of these tests exhibited reduced accuracy compared to those of the combined feature, presented above in tables 1 and 2. A third set of retrieval results was obtained to test the data with regard to the coefficient of variance. These results demonstrate that, with the examined data, the CV value could not classify or retrieve images based upon their diagnosis or medically semantic information.


8. FUTURE WORK

The texture vector is composed of 48 separate texture elements, and no attempt is made to reduce this number. There is, therefore, ample scope for fine-tuning this feature vector and establishing which of the individual elements best represent specific pathologies.

In this study the mean index ratio was derived from the specific regions of interest and the mean intensity of the whole brain. It is thought that results could be improved by replacing the whole brain with the cerebellum as the reference region.

The visual features described in this paper could be conjoined with alternative PET image characteristics, such as tumours, lesions and binding potential [11, 12, 13, 14], to produce a comprehensive CBIR system.

ACKNOWLEDGEMENTS

Thanks and acknowledgement go to all staff at both Middlesex University and the Wolfson Brain Imaging Centre.

This project was funded by the Engineering and Physical Sciences Research Council (EPSRC) of Britain; their support is gratefully acknowledged.

Images were obtained, pre-processed and analyzed at the Wolfson Brain Imaging Centre, Addenbrooke's Hospital, Cambridge (UK). Algorithm and system development was performed at Middlesex University, London (UK), as part of the computer science research group.

REFERENCES

[1] W. Cai, D. Feng, R. Fulton, "Content based retrieval of dynamic PET functional images", IEEE Transactions on Information Technology in Biomedicine, vol 4, no 2, pages 152-158, June 2000.

[2] S. Batty, J. Clark, T. Fryer, X.
Gao, "Content-based retrieval of lesioned brain images from a database of PET scans", SPIE Conference Proceedings: Medical Imaging, PACS and Integrated Medical Information Systems, 4685, 128-136, San Diego, California, USA, February 2002.

[3] A. Smeulders, M. Worring, S. Santini, A. Gupta and R. Jain, "Content-Based Image Retrieval at the End of the Early Years", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12):1-32, 2000.

[4] The Crump Institute for Molecular Imaging, "Let's Play PET", http://www.crump.ucla.edu/software/lpp/lpphome.html.

[5] Volkow ND, Zhu W, Felder CA, Mueller K, Welsh TF, Wang GJ, de Leon MJ, "Changes in brain functional homogeneity in subjects with Alzheimer's disease", Psychiatry Research: Neuroimaging, Volume 114, Issue 1, 15 Feb 2002, pages 39-50.

[6] Friston KJ, Ashburner J, Poline JB, Frith CD, Heather JD, Frackowiak RSJ, "Spatial Registration and Normalization of Images", Human Brain Mapping 2:165-189, 1995.

[7] Talairach J, Tournoux P (1988). Co-planar stereotaxic atlas of the human brain. Thieme, New York.

[8] Gabor, D. "Theory of Communication." J. Inst. Electr. Engineering, London 93, 429-457, 1946.

[9] W.Y. Ma and B.S. Manjunath, "Texture Features for Browsing and Retrieval of Image Data", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol 18, no 8, August 1996.

[10] P. J. Nestor, T. D. Fryer, M. Ikeda and J. R. Hodges, "Retrosplenial cortex (BA 29/30) hypometabolism in mild cognitive impairment (prodromal Alzheimer's disease)", European Journal of Neuroscience, Volume 18, Issue 9, page 2663, November 2003. doi:10.1046/j.1460-9568.2003.02999.x

[11] Batty, S., Fryer, T., Clark, J., Turkheimer, F., Gao, X.W.
Extraction of Physiological Information from 3D PET Brain Images. Visualization, Imaging and Image Processing, Malaga, Spain, 401-405, September 2002.

[12] Batty, S., Clark, J., Fryer, T., Turkheimer, F., Gao, X.W. Towards Archiving 3D PET Brain Images Based on Their Physiological and Visual Content. ICDIA 2002, Diagnostic Imaging and Analysis, Shanghai, China, 188-193, August 2002. ISBN 7-5439-2012-3.

[13] Batty, S., Clark, J., Fryer, T., Gao, X.W. Extraction of Features from 3D PET Images. Medical Image Understanding and Analysis 2002, 22-23 July 2002, University of Portsmouth, Portsmouth, U.K.

[14] Gao, X.W., Batty, S., Clark, J., Fryer, T., Blandford, A. Extraction of Sagittal Symmetry Planes from PET Images. Proceedings of the IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP'2001), pp 428-433, ACTA Press, Marbella, Spain, September 2001.


Medical imaging equipment --- Techniques and Applications


Development of a Portable ECG Monitor Based on TMS320VC5402

Lin Qi, Jing Bai, Yonghong Zhang, Chenguang Liu
Department of Biomedical Engineering, Tsinghua University, Beijing, China
qilin99@maisl.tsinghua.edu.cn

Abstract  The hardware system and control program were designed around the high speed, low power DSP TMS320VC5402. An algorithm for arrhythmia detection based on wavelet transforms was embedded. Two leads of ECG signal could be recorded synchronously, and two monitoring modes were provided to meet users' requirements. An alarm could be raised from the speaker when any of four types of arrhythmia was detected. Thirty minutes of ECG signal could be stored, with multi-user and time information management. The recorded ECG data could be sent to the hospital center by either digital or analog means. With low power consumption and a rechargeable Li+ battery, this monitor could work continuously. The interactive operation interface, which could display a multilevel menu on the LCD, was user-friendly.

Keywords  telemedicine, ECG monitor, digital signal processor

Ⅰ. INTRODUCTION

Heart disease emergencies often strike dangerously within a short period. It is therefore very important to capture the ECG signal in a timely manner, to analyse it effectively and to transmit it to the hospital center accurately.

The traditional method of obtaining the ECG requires the patient to stay in hospital. Single-chip microcontrollers made portable monitors possible [1], but with the limited computing capability of an ordinary single-chip device the results of the signal processing were not very satisfactory.

This monitor uses the digital signal processor TMS320VC5402 for data processing, and also as the MCU for control.
The C5402 DSP, working at 60 MHz, was well suited to implementing the wavelet-transform-based arrhythmia detection algorithm.

Ⅱ. DESIGN AND METHOD

A. Entire System Hardware Design

Fig.1 gives an overview of the whole system hardware design. Two ECG leads, MV1 and MV5, were used in this system to capture more biomedical signal information. The two-lead ECG was synchronously sampled through the two McBSP channels of the DSP. Since most of the frequency content of the ECG signal lies between 0.05 Hz and 100 Hz, the sample rate of the analog-to-digital converter was set to 200 Hz in accordance with the Nyquist sampling theorem, and the resolution of the ADC was 10 bits.

The Flash memory is 8 Mbits, which enables the monitor to record a two-lead ECG for about 30 minutes. The large capacity rechargeable Li+ battery provides the power for the whole system.

Fig.1. System hardware design (block diagram: human body, amplifier, filter and ADC feeding the DSP, which connects to the speaker, modem, analog and digital communication modules, Flash memory, timer chip, LCD and buttons; a Li+ battery powers the system)


Fig.2. Appearance of the monitor: (a) EXIT button (b) UP button (c) DOWN button (d) OK button

B. Arrhythmia Detection Algorithm and its Implementation on Chip

The wavelet transform has many advantages for automatic ECG analysis. The wavelet transform (WT) of a function f(t) is an integral transform defined by

WT_f(a, b) = (1/a) ∫_{−∞}^{+∞} f(t) ϕ*((t − b)/a) dt,  a > 0   (1)

where ϕ*(t) denotes the complex conjugate of the wavelet function ϕ(t). This transform can decompose the ECG signal across different scales and describes the local features of the ECG signal well in both the time domain and the frequency domain. A wavelet-based algorithm for arrhythmia detection is described in reference [2].

Because of its similarity to the ECG signal, and after extensive tests using data from the MIT-BIH Arrhythmia Database, the Marr wavelet, defined by ϕ(t) = (1 − t²) e^{−t²/2}, was finally chosen as the wavelet function, and the scale a was set to 3 and 5. Eventually we obtain the discrete transforms below:

WT3(i) = 3 Σ_{k=0}^{49} x(i − k) [W3(49 − k) − W3(49 − k − 1)]   (2)

WT5(i) = 5 Σ_{k=0}^{81} x(i − k) [W5(81 − k) − W5(81 − k − 1)]   (3)

where W3(k) and W5(k) are the discrete coefficients of the Marr wavelet function ϕ(t) at scales 3 and 5:

W3(k) = ϕ(k/3),  0 ≤ k ≤ 48   (4)

W5(k) = ϕ(k/5),  0 ≤ k ≤ 80   (5)

Arrhythmia in the ECG is detected by a judging rule that compares WT3(i) and WT5(i) with previously computed thresholds and templates. Based on test results using data from the MIT-BIH Arrhythmia Database on a PC, the accuracy of QRS detection is 99%.
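Equations (2)-(5) can be sketched as a discrete convolution. The zero value assumed for the wavelet coefficients outside the sampled range is a boundary assumption, and the index alignment of `mode="same"` is a simplification of the paper's exact indexing.

```python
import numpy as np

def marr(t):
    """Marr wavelet: (1 - t^2) * exp(-t^2 / 2)."""
    t = np.asarray(t, dtype=float)
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def wt_scale(x, a, n):
    """Discrete WT at scale a following equations (2)-(5):
    WT_a(i) = a * sum_k x(i - k) * (W_a(n - k) - W_a(n - k - 1)),
    with W_a(k) = marr(k / a) for 0 <= k <= n and zero outside
    that range (a boundary assumption)."""
    W = marr(np.arange(n + 1) / a)
    Wpad = np.concatenate([[0.0], W])   # Wpad[j + 1] = W_a(j), W_a(-1) = 0
    h = Wpad[1:] - Wpad[:-1]            # h[j] = W_a(j) - W_a(j - 1)
    h = h[::-1]                         # kernel so h[k] = W_a(n-k) - W_a(n-k-1)
    return a * np.convolve(np.asarray(x, dtype=float), h, mode="same")
```

A usage corresponding to the paper's two scales would be `WT3 = wt_scale(ecg, 3, 48)` and `WT5 = wt_scale(ecg, 5, 80)`, with the outputs then compared against the thresholds and templates.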
The accuracy of detection of APC and PVC is also above 90%.

For practical detection, this algorithm was embedded in the DSP VC5402. The high speed DSP can compute the WT for every ADC sample in real time. In tests using an electronic ECG generator, the accuracy was very close to the result obtained on the PC. The speaker raises an alarm sound when an arrhythmia is detected.

C. Monitor Mode and User Management

For different needs, this monitor provides two monitoring modes: the automatic mode and the hand-operated mode. In the automatic mode, the monitor records the ECG until the flash memory is full, unless the EXIT button is pressed. In the hand-operated mode, the ECG is only displayed on the LCD and not recorded; when users feel abnormal, they can press the OK button, whereupon the monitor first stores the 30 seconds of ECG preceding the button press and then continues recording as in the automatic mode.

This monitor can be used by three users. The ECG signals of different users are kept from being confused by recording each user's information, such as the user ID, together with the collection time read from the timer chip.

D. ECG Transmission

There are two methods for transmitting the ECG to the hospital center: the digital transmission method and the analog transmission method.

Fig.3. Digital transmission (monitor, modem, PSTN, modem, server at the hospital center)

As described in Fig 3, in the digital method the ECG is transmitted from the client modem to the server modem through the PSTN. The user information and collection time, from which the hospital center server can establish an ECG database for different users, are transmitted first. The maximum transmission speed was set to 19200 bps for safety and stability.

Fig.4. Modulation module (digital ECG, DAC, voltage-to-frequency converter)


In the analog transmission method, shown in Fig 4, the digital ECG stored in the Flash is converted to an analog voltage signal through the DAC. A square wave, generated by the voltage-to-frequency module, then carries a frequency that is linear in the amplitude of the analog ECG signal and drives the speaker to sound. As shown in Fig 5, since the PSTN in China provides a frequency range of 0~3300 Hz on the telephone line, the highest sound frequency is no more than 3300 Hz. In the hospital center, the sound picked up from the telephone microphone is demodulated to an analog signal, which is resampled and then recorded on the computer server.

Fig.5. Analog transmission (monitor, GSM/PSTN, acoustic demodulator, server at the hospital center)

Ⅲ. RESULT AND CONCLUSION

Although the analog integrated circuitry had reduced the 50 Hz noise and low-frequency artificial interference, a digital filter was also designed to further eliminate the noise for a better record.

Fig.6. Digital transmission reception

Fig.7. Analog transmission reception

A comparison is given in Fig.6 and Fig.7. Because the transmitting speed was not set the same, there was some error in the interval between one ECG beat and the next.

This monitor could work continuously for 10 hours with the rechargeable Li+ battery. The multilevel menu operation interface on the LCD makes it easy for users to manipulate the monitor.

ACKNOWLEDGEMENT

This project is supported by the Nature Foundation of Beijing; the authors would like to thank the committee of this foundation.

REFERENCES

[1] Hongtao Xie, Yonghong Zhang, Jupeng Zhang, et al. Development of a portable home ECG and blood pressure monitoring device based on 80C196KC micro-controller [J]. Beijing Biomedical Engineering, 2001, 20(4):271-274.

[2] Peng Hu, Yonghong Zhang, Jupeng Zhang, et al. A wavelet based arrhythmia detection method [J]. Beijing Biomedical Engineering, 2003, 22(1):23-26.

[3] Jia Chen. A portable ECG Monitor with data communication [D]. Tsinghua University, 1999.

[4] Jingwen Xi, Hongquan Zhou. Development of ECG monitoring module using DSP [J]. Application of Electronic Technique, 2001, 27(11):75-78.
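The paper mentions a digital filter for the residual 50 Hz noise but does not give its design. A minimal sketch, assuming a standard second-order biquad notch (the RBJ cookbook form, with an arbitrary Q of 30) at 50 Hz for the 200 Hz sample rate, could look like:

```python
import math

def notch_coeffs(f0, fs, q=30.0):
    """Standard biquad notch design (an assumed design, not the
    paper's): returns (b, a) normalized so that a[0] == 1."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cw = math.cos(w0)
    a0 = 1.0 + alpha
    b = [1.0 / a0, -2.0 * cw / a0, 1.0 / a0]
    a = [1.0, -2.0 * cw / a0, (1.0 - alpha) / a0]
    return b, a

def filt(b, a, x):
    """Direct-form II transposed IIR filtering of sequence x,
    suitable for sample-by-sample use on an embedded processor."""
    z1 = z2 = 0.0
    y = []
    for v in x:
        out = b[0] * v + z1
        z1 = b[1] * v - a[1] * out + z2
        z2 = b[2] * v - a[2] * out
        y.append(out)
    return y
```

At fs = 200 Hz the 50 Hz notch sits at exactly a quarter of the sample rate, so the recursion per sample is short enough to run alongside the wavelet computation.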


Colour Reproduction for Tele-imaging Systems

Ping He, Xiaohong Gao*
Kunming Metallurgy College, China
*School of Computing Science, Middlesex University, London, N14 4YZ, UK

Abstract - Tele-imaging brings specialist knowledge to a patient from afar through the use of communication technology. In a telemedicine system, a doctor uses a digital camera to record clinical photographs; these are sent by electronic mail to a network of specialists, and the colour digital photo is then displayed on a calibrated display device to obtain a secondary or tertiary opinion. The digital camera and the computer monitor therefore have an important role in telemedicine systems. For a reliable diagnosis via visual telecommunication systems, it is fundamental that patient photographs are reproduced with the correct colour. However, the colour of a digital photograph does not completely match the actual original colour of the target object or patient; in this paper we introduce a method to revise the colour of the photo so that it matches the actual real-life colour.

INTRODUCTION

High-quality image acquisition and correct colour display are essential for effective telemedicine. In any telemedicine system, a digital colour camera and a computer monitor are needed for the acquisition and display of photographs. The digital colour camera is a powerful tool for image acquisition, image processing and colour communication, and it is cheaper and more flexible than other equipment. However, existing digital cameras do not provide true, real-life colours.
In order to achieve correct colour display, for a reliable diagnosis via visual telecommunication systems, we first need to calibrate both the computer monitor and the digital colour camera. In this research, we introduce a method to calibrate computer monitors and to find the colour formula of digital cameras. We chose the digital colour camera Canon 1D Mark Ⅱ. We also need several pieces of supplementary equipment: a Macbeth Colourchecker (24 colours), a Chroma meter, and a ColorVision Spyder. Throughout the experiment we used the same illuminant, D65. There are three steps in this experiment:

1. Calibration of the computer monitor and of the built-in colour specifications for that monitor.
2. Calculation of the colour formula for the digital camera:
   a) Use the Chroma meter to measure the 24 colour samples of the Macbeth Colourchecker to obtain CIEXYZ values.
   b) Acquire a photo of the 24 colour samples of the Macbeth Colourchecker to obtain sRGB values.
   c) Perform linear regression between the CIEXYZ values and the sRGB values to obtain the colour formula of the digital colour camera.
3. Revise the digital photo using the colour formula of the digital camera.

In this paper we used three colour spaces: CIEXYZ 1931 space, CIELAB space and sRGB space. The RGB values of the colour samples are imaged by the digital colour camera, and the CIEXYZ values of the colour samples are obtained by the Chroma meter. A


transformation between the camera sRGB values and the Chroma meter CIEXYZ values is therefore required.

EXPERIMENTAL

1. Calibration of the monitor with the ColorVision Spyder

The digital photos are stored digitally and are displayed on the computer's monitor; in a telemedicine system a colour-accurate computer monitor is therefore very important for photographic quality. If the monitor is not a reliable indicator of the contrast, brightness and colour balance of the photographs it displays, much of the information that the digital photo provides is lost. In order to visually adjust the colour of our digital photographs it is first necessary to calibrate the monitor.

For this experiment, we chose a CRT monitor and worked in a darkened room. The computer and ColorVision Spyder are connected, the monitor's colour temperature (white point) is set to D65, gamma to 2.2, contrast to maximum and lightness to minimum, and the calibration software is then executed. After calibration of the CRT monitor, the information presented in Fig 1 was obtained, and the associated calibration curves are shown in Fig 2. The gamut of the monitor can be approximated as in the chromaticity diagram presented in Fig 3, which compares the gamut of theoretical RGB with the gamut of the monitor's real RGB.

Figure 1. Information from monitor calibration.

Figure 2. Curves of monitor calibration.

Figure 3. Theoretical RGB and the monitor's real RGB on the CIE x,y chromaticity diagram. The bigger triangle is the maximum colour range, whilst the smaller triangle represents the monitor's colour range.

After calibration of the computer monitor, we need to calibrate the digital colour camera.

2. Obtaining the formula for the digital colour camera

The digital colour camera is an important tool in a telemedicine system, but digital cameras cannot provide true, real-life colours, so we need to calculate the colour formula so as to revise the colour of the photograph.

First, the CIEXYZ values of the 24 colour samples are measured and transformed to non-linear sR'G'B' values. We used the Chroma meter CS-100 to measure the 24 colour samples of the Macbeth Colourchecker to obtain their CIEXYZ values. In the whole experiment, the D65 illuminant was always used and the surround kept dark. The


Macbeth ColourChecker was placed in the cabinet at an angle of 45° to the illumination, and the Chroma meter was positioned perpendicular to the ColourChecker. Accurate CIE XYZ values for each colour sample were obtained and are presented below in TABLE 1.

The CIE XYZ values must then be transformed to sRGB values. First, XYZ values are calculated from the chromaticity coordinates using the relationships

x = X/(X+Y+Z)
y = Y/(X+Y+Z)                                   (1)
z = Z/(X+Y+Z)
x + y + z = 1

Then the CIE XYZ values of the 24 colour samples are transformed to linear sRGB values using

R_sRGB =  3.2406X - 1.5372Y - 0.4986Z
G_sRGB = -0.9689X + 1.8758Y + 0.0415Z           (2)
B_sRGB =  0.0557X - 0.2040Y + 1.0570Z

In this research, the linear sRGB tristimulus values are transformed to non-linear sR'G'B' as follows.

If R_sRGB, G_sRGB, B_sRGB <= 0.0031308:

R'_sRGB = 12.92 × R_sRGB
G'_sRGB = 12.92 × G_sRGB                        (3)
B'_sRGB = 12.92 × B_sRGB

Otherwise, if R_sRGB, G_sRGB, B_sRGB > 0.0031308:

R'_sRGB = 1.055 × R_sRGB^(1/2.4) - 0.055
G'_sRGB = 1.055 × G_sRGB^(1/2.4) - 0.055        (4)
B'_sRGB = 1.055 × B_sRGB^(1/2.4) - 0.055

TABLE 1. CIE XYZ values of the 24 colour samples
COLOUR SAMPLE    Y      x      y      R'       G'       B'
dark skin        55.3   0.389  0.368  0.62344  0.41124  0.36834
light skin       198    0.378  0.367  1.06948  0.75122  0.68338
Blue sky         111    0.251  0.281  0.5684   0.6371   0.82069
Foliage          80.6   0.337  0.439  0.53762  0.57133  0.36987
blue Flower      130    0.271  0.272  0.72643  0.64893  0.88541
Bluish green     225    0.263  0.369  0.63349  0.93504  0.86694
orange           136    0.483  0.415  1.05271  0.57195  0.28521
purplish blue    62.4   0.221  0.213  0.43149  0.46332  0.79398
Moderate red     98     0.451  0.324  0.99857  0.40165  0.50348
Purple           39.1   0.304  0.253  0.52347  0.3244   0.52738
Yellow green     213    0.365  0.492  0.84333  0.89043  0.38285
Orange yellow    202    0.459  0.443  1.14899  0.74614  0.30222
Blue             28.7   0.198  0.168  0.2782   0.30143  0.6608
Green            106    0.304  0.486  0.44401  0.68603  0.37199
Red              55.8   0.513  0.326  0.85825  0.21743  0.32041
Yellow           264    0.433  0.479  1.15847  0.90486  0.25498
Magenta          87     0.372  0.263  0.90692  0.38254  0.6798
Cyan             84.6   0.207  0.281  0.20716  0.60629  0.7576
White            303    0.319  0.353  1.08516  0.97697  0.95874
Neutral 8        210    0.314  0.349  0.91211  0.83306  0.83136
Neutral 6.5      136    0.312  0.345  0.75234  0.68482  0.69513
Neutral 5        77.1   0.313  0.346  0.58359  0.52884  0.53468
Neutral 3.5      37.1   0.31   0.342  0.41404  0.37536  0.38648
Black            15     0.31   0.331  0.27474  0.23626  0.25728

The next stage is to take photographs of the 24 colour samples of the Macbeth ColourChecker using the Canon 1D Mark II digital camera and to calculate the sRGB values of these photographs. We took the photographs under the same D65 illumination with the surround kept dark, using exactly the same method as before.

TABLE 2. Mean sRGB values of the 24 colour samples from the digital photographs
Colour sample    R         G         B
dark skin        129.6921   87.777    61.0239
light skin       212.47    182.6978  156.3386
Blue sky         132.0355  167.8497  186.16
Foliage          107.5364  137.5426   65.3794
blue Flower      168.2573  175.6572  201.4404
Bluish green     163.1862  222.9044  204.1383
orange           218.8674  133.1596   41.084
purplish blue     91.3563  130.9844  183.8394
Moderate red     224.4254   99.7291  106.9397
Purple           132.5507  123.2468  110.9901
Yellow green     182.5878  207.691   101.5866
Orange yellow    227.7202  178.8358   57.4017
Blue              48.9091   77.4168  154.0896
Green             94.52    169.3627   82.7591
Red              203.7716   47.5011   52.247
Yellow           222.3644  205.1628   73.2399
Magenta          209.6318  103.9602  162.3397
Cyan              66.5004  171.2582  185.6296
White            215.014   218.9147  205.6153
Neutral 8        192.9953  199.404   187.1035
Neutral 6.5      161.9014  169.3098  157.6354
Neutral 5        117.8457  127.8098  112.7836
Neutral 3.5       74.7069   84.3324   73.3092
Black             30.0259   35.8785   29.6412

After the digital images were obtained, they were transferred to the Matlab environment, where the mean sRGB values for each of the 24 colour


samples in each photograph were calculated. This was done by manually segmenting a 20×20 pixel area from the centre of each colour sample and calculating the mean sRGB values of these areas. The mean sRGB values for the 24 colour samples are shown in TABLE 2; these sRGB values are in 24-bit format.

The CIELAB values of the colour samples from the digital photographs, together with their real CIELAB values measured by the Chroma meter, are given in TABLE 3. The colour errors of the digital colour camera are presented in Figure 4.

TABLE 3. CIELAB values of the colour samples
                 Image                             Chroma meter
COLOUR SAMPLE    L*        a*        b*            L*        a*        b*
dark skin        48.59344  18.35402  22.28184      49.80266  15.21457  12.20788
light skin       88.08896  11.38756  13.92953      84.66342  19.33571  16.02414
Blue sky         77.50822  -5.54603  -21.3606      67.00406  -1.38351  -30.7269
Foliage          62.55994  -23.5864  36.01588      58.60615  -17.0187  23.29369
blue Flower      83.22695  6.706755  -22.3813      71.49243  12.46876  -32.9247
Bluish green     97.3748   -21.915   -3.35287      89.04505  -34.4409  -4.32098
orange           73.58695  33.09845  64.11485      72.81814  33.685    54.82936
purplish blue    62.96232  4.276453  -42.2539      52.50593  13.91413  -49.534
Moderate red     67.62033  59.0731   18.49969      63.62855  53.15476  12.69832
Purple           60.9929   4.314747  4.611454      42.62268  25.17172  -23.7672
Yellow green     91.89176  -23.7839  51.10876      87.14355  -28.2959  57.11123
Orange yellow    87.18358  10.8294   69.691        85.3367   20.37047  66.4254
Blue             41.24323  20.02867  -56.2648      36.88167  21.09655  -54.287
Green            73.13719  -42.9985  38.14434      65.73868  -40.6588  31.74149
Red              53.7764   71.08198  37.70896      50.00037  57.76685  21.61504
Yellow           94.33289  -6.15724  69.27196      94.79336  0.050033  79.73242
Magenta          67.75171  58.20296  -19.3127      60.53057  53.1258   -18.8782
Cyan             75.33367  -25.5475  -24.6117      59.82034  -21.5189  -32.8952
White            99.99738  0.010268  -0.00231      100       3.9E-05   5.57E-06
Neutral 8        91.91298  -1.15515  -0.31199      86.65707  -0.6487   -2.28377
Neutral 6.5      79.3617   -2.0429   0.137828      72.81814  0.093919  -3.49156
Neutral 5        61.10909  -4.82458  3.35539       57.51033  0.11002   -2.51262
Neutral 3.5      41.2627   -5.01485  2.525504      41.6057   0.25161   -3.05331
Black            17.15314  -3.6059   1.820328      26.59753  2.199717  -3.88514

[a*b* scatter plot comparing the photographed values with the real values measured by the Chroma meter]
Fig 4. Colour errors of the photographs

The final stage is to calculate the colour revision formulae for the digital colour camera. Through linear regression, the R', G', B' values in TABLE 1 and the R, G, B values in TABLE 2 are


related, which leads to a formula for R, a formula for G and a formula for B. The value of the black sample was excluded from the linear regression process.

y = 0.0045x    (R² = 0.9418)    (5)
y = 0.0041x    (R² = 0.9296)    (6)
y = 0.0044x    (R² = 0.9426)    (7)

In the three formulae, y represents the R', G', B' values from TABLE 1, which originate from the Chroma meter and represent the real values of the colour samples; x represents the R, G, B values from TABLE 2, obtained from the digital colour photographs. The fitted lines are shown in Fig 5.

[Scatter plots of the TABLE 1 values against the TABLE 2 values with the fitted lines y = 0.0045x, y = 0.0041x and y = 0.0044x]
Fig 5. a. the formula line of R; b. the formula line of G; c. the formula line of B.

RESULT

Using the three formulae for R, G and B described above, the colour of wallpaper photographs has been reliably revised. The colour of wallpaper changes with age. In co-operation with the Art Museum of Middlesex University, this project enables the Art Museum to keep a record of the colours of its wallpaper archives so that in the future, as the colours change with age, archivists will be able to see the original colours. In this experiment, the Canon 1D Mark II digital colour camera was employed.

First, we photographed each wallpaper archive and saved the digital images on the calibrated computer. Each image was then transferred into the Matlab environment and, by applying the three RGB formulae, three new values were obtained. Finally, the revised digital photograph was displayed on the monitor.

Every digital photograph was revised; an example is shown in Figure 6. Photo a. is the original photograph; photo b. is the photograph revised through formulae (5), (6) and (7). Comparison of the two photographs with the real wallpaper sample revealed that the colour of the revised photograph is closer to the colour of the actual wallpaper sample than the original, unrevised photograph.

Fig 6. a. Original photo; b. Revised photo

CONCLUSION

The computer monitor and the digital colour camera are essential in telemedicine imaging, and correct colour reproduction is very important for correct diagnosis. The main purpose of this research was to find a method that reproduces colour correctly. This paper describes a method for establishing a model for the colour revision of digital photographs.
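The revision step described above can be sketched numerically. This is a hedged illustration, not the paper's Matlab code: the function and variable names are ours, and mapping the estimated R'G'B' values back to 8 bits by a simple scale-and-clip is our assumption about how the revised image was displayed.

```python
import numpy as np

# Per-channel factors from formulae (5)-(7): y = coeff * x,
# where x is the camera 8-bit value and y the estimated display R'G'B'.
REVISION_COEFF = np.array([0.0045, 0.0041, 0.0044])

def revise_photo(img_rgb):
    """Revise a HxWx3 8-bit camera image using the fitted formulae."""
    estimated = img_rgb.astype(np.float64) * REVISION_COEFF
    # Assumed display step: scale the estimated R'G'B' back to 8 bits.
    return np.clip(estimated * 255.0, 0.0, 255.0).astype(np.uint8)

pixel = np.array([[[200, 200, 200]]], dtype=np.uint8)
print(revise_photo(pixel)[0, 0])  # -> [229 209 224]
```

Note how a neutral camera value comes out slightly reddish after revision, reflecting the different per-channel slopes.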


Proceedings of International Conference on <strong>Medical</strong> <strong>Imaging</strong> <strong>and</strong> <strong>Telemedicine</strong> (MIT 2005)August 16-19, Wuyi Mountain, ChinaIn this experiment, we employed a colourvisionSpyder to calibrate the computer monitor <strong>and</strong> tobuild-in colour specifications for that monitor. Thecolour formulae for the digital colour cameraCanon1DMarkⅡ, has also been calculated. Therevised photo was displayed on the calibratedmonitor. The colour of the revised photograph isvery close the real colour of object, but there ishowever still some difference; improvements to theexperimental method of processing data can bemade to increase the accuracy of the colour revisionformulae.REFERENCE1. Joni Orava, Timo Jaaskelainen, Jussi Parkkinen.Color Errors of Digital Cameras.2. International St<strong>and</strong>ard ICE 61966-2-1 Firstedition 1999-10. P23-25.3. Guowei Hong, M.Ronnier Luo, Peter A. Rhodes.A Study of Digital Camera ColorimetricCharacterization Based on Polynomial Modeling846
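For reference, the colour-space chain of equations (1)-(4) — chromaticity plus luminance to XYZ, then to linear sRGB, then to non-linear sR'G'B' — can be sketched as follows. This is an assumption-laden sketch: the paper does not state how the measured luminance was scaled into [0, 1], so this code will not exactly reproduce the TABLE 1 values; the function name is ours.

```python
import numpy as np

# Linear XYZ -> sRGB matrix from equation (2).
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

def xyY_to_sRGB_prime(x, y, Y):
    # Equation (1), rearranged to recover X and Z from (x, y, Y).
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    rgb = M @ np.array([X, Y, Z])          # equation (2)
    # Equations (3) and (4): linear segment below 0.0031308, gamma above.
    return np.where(rgb <= 0.0031308,
                    12.92 * rgb,
                    1.055 * np.abs(rgb) ** (1.0 / 2.4) - 0.055)

# Sanity check: D65 white (x = 0.3127, y = 0.3290, Y = 1) maps to R' = G' = B' = 1.
print(np.round(xyY_to_sRGB_prime(0.3127, 0.3290, 1.0), 3))
```

The D65 check works because the matrix in equation (2) is defined relative to a D65 white point.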


Processing of B-mode ultrasonic images for noninvasive temperature detection in hyperthermia

Wu Shuicai, Ren Xinying, Bai Yanping, Liu Youjun, Zeng Yi
Biomedical Engineering Center, Beijing University of Technology, Beijing 100022, China

Abstract - This paper processes B-mode ultrasound images of heated tissue for noninvasive temperature detection in hyperthermia. Experiments were carried out on in-vitro fresh pig liver over the temperature range 28 °C to 45 °C, and a series of B-mode ultrasonic images of the liver was obtained at different temperatures. The results show that the mean gray value of the AOI (Area of Interest) in the B-mode ultrasonic images of the liver is correlated with temperature. The gray value varied in accordance with the temperature change in the tissue and can therefore be used to estimate the temperature of heated tissue in hyperthermia.

Keywords - B-mode ultrasound; image gray level; temperature correlation

I. INTRODUCTION

At present, microwave hyperthermia is becoming an important tumor treatment method. Hyperthermia heats tumor tissue above its heat-resisting temperature (43 °C) in order to kill cancer cells. For effective therapy, the tissue temperature must be controlled within a suitable range. To kill cancer cells without harming normal tissue, accurate measurement and control of the tissue temperature is very important, because it directly affects the effect of the hyperthermia.

The technology currently used in the clinic to measure temperature is invasive: thermocouple or thermistor temperature sensors are inserted into the tissue to measure its temperature directly. However, invading the tumor to measure temperature is dangerous. Finding noninvasive temperature measurement methods with suitable precision (0.5 °C) has therefore become a pressing problem.

Many noninvasive temperature measurement methods have been reported internationally, including microwave, ultrasound, nuclear magnetic resonance and electrical resistance methods. Ultrasound methods obtain tissue temperature information from the temperature dependence of the ultrasonic property parameters of tissue, and have the following major advantages: relatively low cost, real-time data collection and high spatio-temporal resolution. The ultrasonic parameters used to measure temperature include the attenuation coefficient, the backscattered power and the velocity of sound. Each ultrasound method for measuring temperature has its own advantages and shortcomings [1], and these methods are still at the experimental research stage. The purpose of this paper is to study a new ultrasonic method for measuring tissue temperature: a noninvasive method based on the temperature dependence of the B-mode ultrasonic image gray level of tissue in hyperthermia.

This research was supported by the National Nature Science Foundation (30470450), the Education Committee Foundation (KP0608200201) and the Elitist Foundation (KW5800200351) of Beijing City, China.

II. PRINCIPLE AND METHOD

A. Experiment Principle

The velocity of sound is correlated with temperature: the speed of an ultrasonic wave propagating in tissue changes as the tissue temperature changes. In a solid medium, the longitudinal propagation speed of ultrasound can be described by equation (1) [2]:

c = sqrt( χ / (ρ ε) )                               (1)

where c is the speed, χ is the Poisson constant, ε is the isothermal compressibility and ρ is the density. For water, equation (1) can be simplified to

c ≈ 1403 + 5T + higher-order terms,                 (2)

where c is the speed and T is the temperature. Several experiments have confirmed that the temperature dependence of the ultrasonic speed is a monotone function of temperature in soft tissues [2]; the sensitivity was determined to be 0.5-4.0 m·s⁻¹·°C⁻¹.

Ultrasonic reflection coefficients are also correlated with temperature: the strength of the ultrasonic reflection echo changes with the tissue temperature. The reflection coefficient is defined by

r = (Za2 - Za1) / (Za2 + Za1)                       (3)

where r is the reflection coefficient and Za1 and Za2 are the acoustic impedances of media 1 and 2 at the acoustic transition between the media. The acoustic impedance is defined by

Za = ρ · c                                          (4)
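The chain from equations (2)-(4) — temperature shifting the sound speed, hence the impedance, hence the reflection coefficient — can be sketched numerically. This is an illustrative assumption-laden example: the water speed uses the truncated form c ≈ 1403 + 5T, and the density and tissue-speed values are round numbers we assume for the demonstration, not measurements from the paper.

```python
def speed_water(T):
    """Equation (2) with the higher-order terms dropped (m/s, T in C)."""
    return 1403.0 + 5.0 * T

def impedance(rho, c):
    """Equation (4): Za = rho * c."""
    return rho * c

def reflection(Za1, Za2):
    """Equation (3): reflection coefficient at the 1 -> 2 interface."""
    return (Za2 - Za1) / (Za2 + Za1)

rho_water, rho_tissue = 1000.0, 1060.0  # kg/m^3 (assumed round values)
c_tissue = 1570.0                       # m/s (assumed, held fixed here)

for T in (28.0, 45.0):
    r = reflection(impedance(rho_water, speed_water(T)),
                   impedance(rho_tissue, c_tissue))
    print(f"T = {T:.0f} C: r = {r:.4f}")
```

Even with the tissue side held fixed, the water-side speed change over the experimental 28-45 °C range visibly shrinks the reflection coefficient, which is the mechanism behind the echo-strength (and hence gray-level) change exploited in this paper.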


Because the impedance is correlated with temperature, the reflection coefficient is correlated with temperature too. The gray level of the B-mode ultrasonic image is governed by the strength of the ultrasonic reflection echo from the tissue: when the ultrasound reflection coefficient changes with tissue temperature, the strength of the ultrasonic echo signal also changes. Thus the gray value of the B-mode ultrasonic image of tissue is correlated with the tissue temperature.

B. Experiment Method

The experiment system mainly consists of an electrothermal water tank and a B-mode ultrasonic diagnosis instrument based on a personal computer (PC); the structure of the system is shown in Fig. 1.

[Diagram: thermometer, beaker with degassed water and pig liver, ultrasonic probe, water tank, B-mode ultrasonic instrument and computer]
Fig. 1. The structure of the experiment system

The B-mode ultrasonic diagnosis instrument (model: MBC-I) used in this experiment was produced by the Chinese Aerial Industrial Academe; B-mode ultrasonic images can be sampled and stored on diskette for analysis. The temperature fluctuation of the electrothermal water tank (HH-W21-600 C) is within ±0.5 °C. A mercury thermometer with a precision of 0.1 °C is used to measure the temperature in the water tank. The entire experiment system is shown in Fig. 2.

Fig. 2. Experiment system

III. RESULTS AND ANALYSIS

We took fresh pig liver as the experimental object. As seen in Fig. 1, the ultrasonic probe is fixed at the top of a vessel containing degassed water and the pig liver; the ultrasonic wave generated by the probe is sent downward, with the degassed water acting as the sound couplant between the probe and the liver. The water in the tank is heated slowly from 28 °C to 45 °C and, as the temperature gradually rises, B-mode ultrasonic images of the pig liver are sampled and stored on diskette at different temperatures. The B-mode ultrasonic images are then processed to extract the gray values at each temperature, and the temperature correlation of the image gray level is calculated.

Tables 1 and 2 give the average gray values of the regions of interest (12 × 12 pixels) selected in the B-mode ultrasonic images of the pig livers at different temperatures. The unit of temperature is °C; the unit of gray value is the level (the B-mode ultrasonic image is a gray image with 256 levels).

Table 1. The gray value of the B-mode ultrasonic images of pig liver 1 at different temperatures
Temperature (°C)   27.60   28.45   29.30   30.20   31.10   32.00   33.00   33.90
Gray (level)       95.87  100.36  102.52  104.41  105.83  107.42  111.82  110.10
Temperature (°C)   35.75   36.70   37.55   38.40   39.20   41.20   42.25   43.10
Gray (level)      114.65  114.48  112.96  115.51  118.92  122.86  130.02  138.54

Table 2. The gray value of the B-mode ultrasonic images of pig liver 2 at different temperatures
Temperature (°C)   27.65   28.45   29.35   30.20   31.15   32.10   33.00   34.10
Gray (level)       74.56   77.42   78.00   75.31   80.61   81.86   82.58   84.94
Temperature (°C)   36.10   37.15   38.10   39.10   40.15   42.05   43.00   43.85
Gray (level)       84.78   84.81   90.81   91.25   91.94   88.11   95.04  102.67

The temperature correlation curve of the image gray value is shown in Fig. 3. Correlation analysis of the experimental data shows that the image gray value has an obvious correlation with the liver tissue temperature: the correlation coefficient is r = 0.9560, and the correlation function is y = 2.045x + 41.157 with R² = 0.9165. Evidently, the correlation curve is highly linear.


High Quality Volume Rendering using Programmable Graphics Hardware

CHEN Yu, LIN Peng, WANG Bo-liang, XU Xiu-ying
Department of Computer Science, Xiamen University, Xiamen, 361005, China

Abstract - Volume rendering has become an important method of visualizing 3D scalar data for medical data analysis. Because of the large data sets produced by medical imaging devices, volume rendering was at one time restricted to high-end workstations. Nowadays, a 3D-accelerated graphics board is included in almost every consumer PC, and with the advent of Shader Model 3.0, graphics hardware has become capable of hardware-accelerated voxel rendering. This paper describes how to render voxel objects using programmable graphics hardware and the OpenGL Shading Language. A method that differs from the "Stacked Quads" approach is presented: a ray is traced from each screen pixel step by step through the volume object, sampling the texture at each step in search of matter. The normal vector per voxel is approximated in the fragment shader, and the voxels are then rendered using this normal. Finally, the implementation visualizes a CT hepatic data set on a mid-range computer system using a software rasterizer. The rendered image quality is greatly enhanced.

I. INTRODUCTION

Volume rendering has become an important method of visualizing 3D scalar data for medical data analysis. Because of the large data sets produced by medical imaging devices, and the trilinear interpolations that must be processed in order to produce images of high quality, volume rendering was at one time restricted to high-end workstations or specialized hardware.

Nowadays, a 3D-accelerated graphics board is included in almost every consumer PC, and many approaches have been proposed to implement interactive volume rendering on standard PC graphics hardware. Typically, volume objects are converted to polygons before rendering using the "Marching Cubes" algorithm [1] or something similar. The other approach is direct volume rendering [2]. Earlier graphics hardware provided only 2D texture and multi-texture [3] mapping functionality, so three stacks of object-aligned slices (see Figure 1(a)) were used to resample the data set, which results in huge memory requirements. Currently, 3D texture mapping is widely available on consumer-level graphics boards (introduced with the ATI Radeon 7500 and NVIDIA GeForce3 Ti). With a 3D texture, three copies of the data set are no longer necessary, since trilinear interpolation allows the extraction of slices with arbitrary orientation (see Figure 1(b)). We call all of these the "Stacked Quads" method.

Fig. 1. Object-aligned slices (a) and viewport-aligned polygon slices (b)

In this paper, we present a new volume rendering method that differs from the "Stacked Quads" method. It is implemented on modern consumer-level graphics hardware using the OpenGL Shading Language. First, we review the OpenGL Shading Language briefly; then our algorithm is described in detail; finally, we draw conclusions and summarize the paper.

II. OPENGL SHADING LANGUAGE

A further limitation of older generations of graphics hardware was their lack of programmability. The major leap forward made by new generations of GPUs is that they offer programmability at floating-point precision in the graphics pipeline. A direct consequence of this added functionality is that the entire reconstruction can now be performed within the GPU, at CPU precision.

There are several shading languages designed for programming GPUs, e.g. Microsoft's HLSL, NVIDIA's Cg and the OpenGL Shading Language (GLSL). GLSL is designed to replace OpenGL fixed functionality in the OpenGL processing pipeline [4]. GLSL comprises two closely related languages: vertex and fragment shaders [5]. Both vertex and fragment shaders are small programs which, when enabled, replace the corresponding fixed-function pipeline stage. As shown in Figure 2, vertex and fragment shaders can be loaded into the vertex and fragment processors. Through vertex and fragment shaders, we can achieve as much realism as possible with as much speed and interactivity as possible.

Recently, the graphics hardware programmable model has evolved into Shader Model 3.0 (SM 3.0). At the time of writing, some products on the market support SM 3.0, for example the NVIDIA GeForce 6 series, the GeForce 7800GTX and the ATI R520. It allows developers to use loops, dynamic flow control (IF, BREAK), unlimited texture


reads, and unlimited dependent reads in shader programs. All of these features are required by our algorithm. Furthermore, to use these shader technologies, the graphics card driver must fully support OpenGL 2.0; the latest driver from the vendor is recommended. At the time of writing, ATI Catalyst 5.6 and NVIDIA ForceWare 77.72 support OpenGL 2.0 well.

Fig. 2. OpenGL graphics hardware pipeline diagram

III. VOXEL-BASED VOLUME RENDERING

A. Principle

Volumetric objects are stored as a three-dimensional map of matter, with each voxel holding some information about the matter: color, translucency or density. To produce high-quality images, we submit a single eight-vertex cube to contain the data set; the ray path from each screen pixel is traced step by step through the volume object, sampling the texture at each step [6].

The approach is as follows:
1) Use a 3D texture containing the volumetric object. The GL_CLAMP attribute can be used to prevent texture repeats if desired.
2) Render the cube. The eight vertex coordinates of the 3D texture simply map the entire volume texture onto the cube; for example, one vertex has coordinates (0, 0, 0) and the opposite corner is (1, 1, 1).
3) In the vertex shader, output the 3D texture coordinate and a vector indicating the direction from the camera to the vertex.
4) In the fragment shader, start at the given texture coordinate and step along the line, deeper into the volume, sampling the texture at each step.

B. Vertex Shader

In the vertex shader program, we must calculate two things. The first is the 3D texture coordinate used by the fragment shader to access the 3D texture. Because of the simple mapping between object and texture space, we transform object coordinates directly from object space to texture space using Equation 1, where (Bx, By, Bz) is the bounding-box size of the volume object and (s, t, p) is the 3D texture coordinate transformed from the object coordinate (x, y, z):

[s t p 1]^T = diag(1/Bx, 1/By, 1/Bz, 1) · [x y z 1]^T        (1)

The second is the camera direction in texture space. We must pay attention to the length of the camera-to-vertex vector, which directly controls the distance stepped through the 3D texture in each loop iteration; small objects require the vector length to be no longer than the size of a voxel. For example, a 64³ volume would require the length of the step vector to be 64⁻¹.

The vertex shader program is shown in Listing 1.

uniform vec3 scale;      // the reciprocal of the box size
uniform vec3 camPos;     // camera position (model space)
uniform float lVec;      // length of the step vector
varying vec3 stepVec;    // step vector in the camera direction

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0].xyz = gl_Vertex.xyz * scale;
    stepVec = lVec * normalize(gl_Vertex.xyz - camPos);
}

Listing 1. Vertex Shader Program

C. Fragment Shader

The voxels may be rendered in many ways, such as the accumulation method (colors are accumulated as the ray traces through the volume) or the solid-matter detection method (step through the 3D texture until a non-empty voxel is found and then render its color).

In this paper we use a different method. In order to render the voxels, we must obtain a normal vector per voxel. Computing an accurate normal from volume data is a complex and expensive task. The technique described below gives a good approximation:
1) Trace the path of the light ray and find the collision point of the ray with the matter.
2) Sample a number of points on a sphere around the collision point (eight sample points are shown in Figure 3).
3) Sum the offset vectors (the vectors from the collision point to the sample points) that do not hit matter (e.g. a, b, c and d in Figure 3) and normalize the resulting vector (N in Figure 3). The result is the approximation of the normal.


Proceedings of International Conference on <strong>Medical</strong> <strong>Imaging</strong> <strong>and</strong> <strong>Telemedicine</strong> (MIT 2005)August 16-19, Wuyi Mountain, Chinac o n s t vec3 l i g h t V e c = vec3 ( 0 . 0 , 1 . 0 , 0 . 0 ) ;void CheckOnePosition ( vec3 colPos , vec3 o f f s e t, o u t vec3 norVec ) ;void C h e c k P o s i t i o n s ( vec3 colPos , vec3 o f f s e t ,o u t vec3 norVec ) ;void main ( ) {f l o a t i t e r , magnitude ;vec3 texCoord , normalVec , c o l l i d e P o s , o f f s e t ;vec4 tex , c o l o r = vec4 ( 0 . 0 , 0 . 0 , 0 . 0 , 0 . 0 ) ;Fig. 3.Calculate an approximate normal by additional texture samplesThere is a potential error if the ray hits a thin object, asshown in Figure 4. Offset vectors 2, 3, 4, 6, 7 <strong>and</strong> 8 all detect”no matter”; the summed result is a zero-length normal.An additional check solves this problem: When calculatingthe normal, only perform texture samples for cases wherethe dot product of the offset vector <strong>and</strong> the step vector isnegative (i.e., in Figure 4, sample points 5, 6, 7 <strong>and</strong> 8 wouldbe sampled).texCoord = gl TexCoord [ 0 ] . xyz ;f o r ( i t e r = 0 . 0 ; i t e r < M a x I t e r a t i o n s ; ++i t e r ) {t e x = t e x t u r e 3 D ( Volume , texCoord ) ;i f ( t e x . a > t h r e s h o l d ) {normalVec = Zero ;o f f s e t = o f f s e t 1 ∗ lVec ;C h e c k P o s i t i o n s ( c o l l i d e P o s , o f f s e t ,normalVec ) ;i f ( a l l ( e q u a l ( normalVec , Zero ) ) )normalVec = −stepVec ;magnitude = l e n g t h ( normalVec ) ;c o l o r = t e x t u r e 2 D ( ColorTable , vec2 ( t e x . a, magnitude ) ) ;c o l o r ∗= d o t ( normalVec , l i g h t V e c )∗ 0 . 5 + 0 . 5 ;break ;}texCoord += stepVec ;}i f ( a l l ( e q u a l ( clamp ( texCoord , 0 . 0 , 1 . 0 ) ,texCoord ) ) )break ;}g l F r a g C o l o r = c o l o r ;Fig. 
4.codeThin object can generate invalid normals unless fixed with additionalThe fragment shader program is shown in Listing 2.uniform f l o a t t h r e s h o l d ;/ / T h r e s h o l d v a l u e t o d e t e c t m a t t e r u n i f o r mf l o a t lVec ; / / L ength o f s t e p v e c t o runiform f l o a t M a x I t e r a t i o n s ;/ / Loop enough t i m e s t o s t e p c o m p l e t e l yt h r o u g h o b j e c tuniform sampler3D Volume ;uniform sampler2D C o l o r T a b l e ;v a r y i n g vec3 stepVec ;/ / Camera−d i r e c t i o n i n t e x t u r e spacec o n s t vec3 o f f s e t 1 = vec3 ( 1 . 7 3 2 , 1 . 7 3 2 ,1 . 7 3 2 ) ;c o n s t vec3 Zero=vec3 ( 0 . 0 , 0 . 0 , 0 . 0 ) ;/ / Check f o r a p o s i t i o n f o r matter , sum t h eo f f s e t v e c t o r i f no h i tvoid CheckOnePosition ( vec3 colPos , vec3 o f f s e t, o u t vec3 norVec ) {vec3 samplePos ;vec4 t e x ;i f ( d o t ( o f f s e t , stepVec ) < 0 . 0 ) {samplePos = c o l P o s + o f f s e t ;t e x = t e x t u r e 3 D ( Volume , samplePos ) ;i f ( t e x . a < t h r e s h o l d )norVec += o f f s e t ;}}/ / Check f o r m a t t e r around a p o s i t i o nvoid C h e c k P o s i t i o n s ( vec3 colPos , vec3 o f f s e t ,o u t vec3 norVec ) {CheckOnePosition ( colPos , o f f s e t , norVec ) ;CheckOnePosition ( colPos , o f f s e t ∗ vec3 (1 ,1 , −1) ,norVec ) ;CheckOnePosition ( colPos , o f f s e t ∗ vec3 (1 , −1 ,1) ,norVec ) ;CheckOnePosition ( colPos , o f f s e t ∗ vec3 (1 , −1 , −1), norVec ) ;CheckOnePosition ( colPos , o f f s e t ∗ vec3 ( −1 ,1 ,1) ,norVec ) ;89


}August 16-19, Wuyi Mountain, ChinaCheckOnePosition ( colPos , o f f s e t ∗ vec3 ( −1 ,1 , −1), norVec ) ;CheckOnePosition ( colPos , o f f s e t ∗ vec3 ( −1 , −1 ,1), norVec ) ;CheckOnePosition ( colPos , o f f s e t ∗ vec3(−1,−1,−1) , norVec ) ;D. ResultProceedings of International Conference on <strong>Medical</strong> <strong>Imaging</strong> <strong>and</strong> <strong>Telemedicine</strong> (MIT 2005)Listing 2.Fragment Shader ProgramUnfortunately, we have no graphics card that supportsShader Model 3.0 at h<strong>and</strong>. The program was performed on aAthlonXP 1700+ CPU with a ATI Radeon 9600 Pro graphicscard. The fragment shader was running in the soft rasterizermode. We visualized a CT hepatic data set (256 × 256 × 242)with our algorithm. As show in Figure 5, the rendered imagequality is greatly enhanced.ACKNOWLEDGMENTThis work is supported by National Natural Science Foundationof China (Grant No.60371012), The Special TechnologyProject of Fujian (Grant No. 2002Y021).REFERENCES[1] W.Lorenson <strong>and</strong> H.Cline, “Marching cubes: A high resolution 3d surfaceconstruction algorithm,” Computer Graphics, July 1987.[2] C. Rezk-Salama, “Volume rendering techniques for general purposegraphics hardware,” Ph.D. dissertation, Friedrich-Alex<strong>and</strong>er University,Erlangen-Nuremberg, Dec. 2001. [Online]. Available: http://www9.informatik.uni-erlangen.de/Persons/Soza/Resume/Publications[3] M. B. G. G. C. Rezk-Salama, K. Engel <strong>and</strong> T. Ertl, “Interactive volumerendering on st<strong>and</strong>ard pc graphics hardware using multi-textures <strong>and</strong>multi-stage rasterization,” in Proc. Eurographics/SIGGRAPH Workshopon Graphics Hardware 2000 (HWWS00), 2000.[4] The OpenGL Graphics System:A Specification(Version 2.0 - October 22,2004), OpenGL ARB, 2004.[5] The OpenGL Shading Language, 3Dlabs, Inc. Ltd., 2004.[6] J. Pawasauskas. (1997, Feb.) Volume visualization with ray casting.[Online]. 
Available: http://www.cs.wpi.edu/~matt/courses/cs563/talks/powwie/p1/ray-cast.htm
[7] ATI, "Dynamic branching using stencil test," ATI Software Developer's Kit, June 2005.

Fig. 5. Visualization of a CT hepatic data set

IV. CONCLUSIONS AND FURTHER WORK

With the advent of Shader Model 3.0, graphics hardware has become capable of hardware-accelerated voxel rendering. This paper described how to render voxel objects using programmable graphics hardware and the OpenGL Shading Language; the rendered image quality is greatly enhanced.

One drawback of our algorithm is the rendering speed. The algorithm uses dynamic branching, one of the most exciting and useful features of SM 3.0 and GLSL. Unfortunately, hardware support for this feature is still sparse, and its performance on currently available hardware is not always ideal. In the future we want to make our hardware-accelerated algorithm more efficient in order to support older chips. Several researchers report that the stencil test can replace some of the most common uses of dynamic branching [7]; we plan to use a similar technique to optimize voxel-based volume rendering.
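The neighbour-probing idea behind CheckPositions can also be sketched outside GLSL. The following Python sketch is an illustration only, not the authors' implementation: the toy volume, threshold and offset length are assumptions. It estimates a surface normal at a hit voxel by probing the eight diagonal neighbours and summing the offsets of the empty, viewer-facing ones, mirroring the shader's logic.

```python
import numpy as np

def estimate_normal(volume, pos, step_vec, threshold, offset=1.732):
    """Approximate a surface normal at a hit voxel by probing the eight
    diagonal neighbours; empty neighbours (value below threshold) on the
    viewer-facing side contribute their offset to the normal sum."""
    normal = np.zeros(3)
    for sx in (1.0, -1.0):
        for sy in (1.0, -1.0):
            for sz in (1.0, -1.0):
                off = offset * np.array([sx, sy, sz])
                # Only consider neighbours facing the viewer, as in the shader
                if np.dot(off, step_vec) < 0.0:
                    p = np.round(pos + off).astype(int)
                    if volume[tuple(p)] < threshold:
                        normal += off
    n = np.linalg.norm(normal)
    return normal / n if n > 0 else normal

# Toy volume: a filled half-space (z < 4 is matter)
vol = np.zeros((8, 8, 8))
vol[:, :, :4] = 1.0
n = estimate_normal(vol, np.array([4.0, 4.0, 3.0]),
                    step_vec=np.array([0.0, 0.0, -1.0]), threshold=0.5)
print(n)  # points toward +z, away from the matter
```

For the flat boundary above, the four empty neighbours on the +z side cancel in x and y, leaving a normal along +z, which matches the intuition behind the shader's offset summation.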


Fig. 3 The temperature correlation curve of the ultrasonic image gray of pig liver 1 (gray level, 80-150, versus temperature, 25-50 degrees centigrade)

The temperature correlation curve of pig liver 2 is shown in Fig. 4. The temperature correlation coefficient of the B-mode image gray of pig liver 2 is r = 0.8958; the linear correlation function is y = 1.545x + 31.240, with R² = 0.8557. Evidently this correlation curve also has a good linear relation with temperature.

Fig. 4 The temperature correlation curve of the ultrasonic image gray of pig liver 2 (gray level, 60-120, versus temperature, 25-50 degrees centigrade)

Based on the above experimental results, it is easy to see that the gray of the B-mode ultrasonic image and the temperature of pig-liver tissue are clearly correlated: when the tissue temperature of the pig liver changes, its B-mode image gray also changes, and the image gray rises approximately linearly as the tissue temperature rises. According to the temperature correlation curves of image gray in Figs. 3 and 4, the detected temperature resolution can meet clinical requirements for temperature precision in biological tissue (such as pig liver). Therefore it is possible to detect and monitor tissue temperature in hyperthermia according to the temperature correlation of the B-mode ultrasonic image gray. This method can measure the temperature distribution in tissue noninvasively, and it is more accurate and safer than the invasive temperature-measurement techniques currently used in the clinic.
So long as the temperature correlation function of the B-mode ultrasonic image gray of a tissue is established, the temperature change of the tissue can be detected by measuring the change in its image gray value. It is therefore necessary to establish the temperature correlation function of the B-mode ultrasonic image gray of the tissue before measuring tissue temperature. However, this correlation is difficult to obtain, because the experimental result is easily affected by many factors: (1) in the two experiments above, the correlation functions of the two pig livers are not identical, as we had hoped, because the two livers differ in volume, shape and composition, and these factors affect the strength of the ultrasonic echo signal; (2) shaking of the ultrasonic probe and thermal-expansion displacement of the pig liver also affect the experimental result.

Methods for improving the experimental result include: (1) improving the heating system, for example by inserting a microwave antenna into the pig liver for heating, which reduces the expansion influence of tissue heated in a water tank; (2) using a temperature probe to measure the pig-liver temperature invasively, which gives a more accurate temperature correlation of the ultrasonic image gray.

In a word, using the temperature correlation of tissue ultrasonic image gray to measure tissue temperature is feasible in principle, but many problems remain to be solved. The experimental results in this paper have verified the feasibility of this method to a certain extent. Next we will study improved theoretical and experimental methods in more depth.

REFERENCES
[1] Wu SC, Bai YP, Nan Q, et al. The research on method of noninvasive temperature estimation in hyperthermia. Foreign Medical Sciences: Biomedical Engineering Fascicle, 2002, 25(1): 35-38.
[2] Novak P, Pousek L, Schreib P, et al.
Noninvasive temperature monitoring using ultrasound tissue characterization method. Ultrasonic Imaging and Signal Processing, Proceedings of SPIE, 2001, Vol. 4325: 566-574.

IV. DISCUSSION

The experimental results and theoretical analysis presented in this paper show that the B-mode ultrasonic image gray and the temperature are well correlated in biological tissue.
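As a worked illustration of how such a calibration would be used in practice (a sketch only: the helper names are hypothetical, and the default coefficients are the liver-2 fit reported above, which the paper stresses is specimen-specific and must be re-established per tissue):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b (the calibration step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def temperature_from_gray(gray, a=1.545, b=31.240):
    """Invert the fitted line y = a*x + b (gray y, temperature x);
    defaults are the liver-2 fit from the paper."""
    return (gray - b) / a

print(temperature_from_gray(100.0))  # about 44.5 degrees centigrade
```

With r around 0.9 the inversion is only as good as the calibration, which is exactly the limitation the discussion section raises.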


Dynamic Spectrum: A Novel Method for Non-invasive Blood Components Measurement

Y. Wang 1,2, G. Li 1, L. Lin 1, X.X. Li 1,3, Y.L. Liu 1
1) College of Precision Instruments & Opto-Electronics Engineering, Tianjin University, Tianjin 300072, China
2) Liaoning Technical University, Fuxin, Liaoning 123000, China
3) School of Electrical and Automatic Engineering, Hebei University of Technology, Tianjin 300130, China
E-mail: ligang59@163.com

Abstract-For patients with metabolic disorders, blood-component monitoring is an important part of disease management. The influence of individual discrepancy, however, has been shown to be one of the main barriers to percutaneous absorption measurement. It has been claimed that the dynamic spectrum can theoretically eliminate most of the individual discrepancy of the tissues other than the pulsatile component of the arterial blood. This indicates a brand-new way to measure blood-component concentrations and a potential to provide absolute quantitation of hemodynamic variables. In this paper, the measurement method for acquiring the DS from PhotoPlethysmoGraphy (PPG) is studied.

I. INTRODUCTION

The various chemical components present in the human body carry important information on health status and serve as important indicators for a number of clinical diagnostics and of therapeutic effect. The acquisition of blood requires invasive and often painful methods, and the development of a technique that removes these problems would represent a major advance. There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of blood components [1-3].
To accurately quantify the component concentrations, the relative contributions of these absorbers must be separated from the raw optical signals. To do so, measurements at several wavelengths are acquired simultaneously, and a model is applied to convert the wavelength data to chromophore concentrations. The standard model used to perform this conversion is based on the modified Beer-Lambert law [4]. The influence of individual discrepancy has been shown to be one of the main barriers to percutaneous absorption measurement [5-6].

In previous reports [7-8], we presented a new non-invasive blood-component concentration measurement method: the dynamic spectrum (DS) method. It can theoretically eliminate the individual discrepancy of the tissues other than the pulsatile component of the arterial blood (PCAB). This indicates a brand-new way to improve the measurement accuracy of blood-component concentrations and a potential to provide absolute quantitation of hemodynamic variables. The principle and the measurement method of the DS are discussed in this paper. The DS is defined on the optical density (OD) of the PCAB and is extracted from the photoelectric pulse wave. PhotoPlethysmoGraphy (PPG) is used to measure the raw data, and the modified Beer-Lambert law is applied to convert the DS to chromophore concentrations. To accurately quantify the component concentrations, two key points must be ensured: the accuracy of the PPG measurement and an effective algorithm for extracting the DS.

II. PRINCIPLE OF THE DS

To explain the principle of the DS measurement method, we give the definition of the DS: the spectrum constructed from the OD of the pulsatile component of the arterial blood (PCAB) corresponding to each monochromatic light. The DS can be obtained by photoplethysmography.

The photoelectric pulse wave obtained from the finger tip represents the change of absorption of the tissues.
The OD of the photoelectric pulse wave is mainly contributed by the absorption and scattering of the blood and other tissues within the detected region. A sketch of the constitution of the photoelectric pulse wave is shown in Fig. 1.

Fig. 1 The photoelectric pulse wave

The modified Beer-Lambert law (MBLL) is typically used to describe the change in light attenuation in scattering media due to absorption changes (which in turn result from changes in chromophore concentrations). When the time-varying change in absorption is in the PCAB, then according to the MBLL,


OD = -lg(I/I0) = 2.303 ε c l + G    (1)

where I0 and I are the intensities of the incident and transmitted light respectively, ε is the molecular extinction coefficient, c is the concentration of the analyte, l is the mean optical path length, and G is the OD due to scattering of the light. In transmission measurements using NIR light, the OD is mainly contributed by the absorption and scattering of the blood and other tissues. According to our studies, the scattering of the PCAB is much smaller than its absorption (the details of this result will be reported in another paper), so we ignore it here. Then G is contributed only by the tissues other than the pulsatile component of the arterial blood, and stays constant during the measurement. Taking the multi-wavelength spectrum into account, we have

∆OD_i = 2.303 (Σ_{j=1..n} ε_ij c_j) L,  i = 1,2,...,m; j = 1,2,...,n    (2)

where ∆OD_i is the change in OD measured at a given wavelength according to the pulsatile arterial blood, c_j is the concentration of blood composition j, ε_ij is the molecular extinction coefficient of blood composition j at the single wavelength i, and L is the difference between the maximum and minimum mean optical path lengths of the artery.

According to Eq. (2), the OD contributed by the non-pulsatile component of the blood and the other tissues, with their individual discrepancy, has been eliminated by calculating the difference of OD_max and OD_min. ∆OD, which is contributed only by the PCAB, reveals the absorption of the PCAB.
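Equation (2) is linear in the unknowns c_j·L, so with m ≥ n wavelengths the concentrations can be recovered by least squares. A minimal numpy sketch, with made-up extinction coefficients and path-length difference purely for illustration:

```python
import numpy as np

# Hypothetical extinction coefficients eps[i, j] (wavelength i, component j)
eps = np.array([[0.8, 0.2],
                [0.5, 0.6],
                [0.1, 0.9]])
c_true = np.array([1.2, 0.4])   # assumed concentrations for the demo
L = 0.05                        # assumed path-length difference of the artery

# Forward model, Eq. (2): dOD_i = 2.303 * sum_j eps_ij * c_j * L
dOD = 2.303 * eps @ c_true * L

# Inverse: least-squares solve for c_j * L, then divide out L
cL, *_ = np.linalg.lstsq(2.303 * eps, dOD, rcond=None)
print(cL / L)  # recovers [1.2, 0.4]
```

In practice L is itself unknown, which is why the paper treats the optical length of the PCAB as one more unknown of the equation system.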
In essence, this is equivalent to removing the other tissues and keeping only the PCAB for the measurement. According to the definition of the DS, the spectrum constructed from the ∆OD corresponding to each monochromatic light in one period of the photoelectric pulse wave is regarded as the DS. The DS, therefore, is not influenced by most of the interference of the individual discrepancy. If we can establish the system of equations with the unknowns c_j and the optical path length of the PCAB, the concentrations of the blood compositions c1, c2, ..., cn can be obtained.

The DS can be obtained by photoplethysmography. Taking I0 as the incident light, I_min and I_max are the transmitted light corresponding to the maximum and the minimum arterial blood volume respectively. Then the difference between the maximum OD (OD1) and the minimum OD (OD2) in one period of the photoelectric pulse wave can be expressed as:

∆OD = OD1 - OD2 = lg(I0/I_min) - lg(I0/I_max) = lg(I_max/I_min) = lg((I_max - I_min)/I_min + 1)    (3)

According to Eq. (3), to obtain the DS we need to record the PPG at every selected wavelength, identify (I_max - I_min) from each PPG, extract the DS from this information, and then work out the concentrations in arterial blood by analytical-chemistry methods.

III. DISCUSSION OF THE DS MEASUREMENT METHOD

To accurately quantify the component concentrations, two key points must be ensured: the accuracy of the PPG measurement and an effective algorithm for extracting the DS.

A. The detected tissues and the raw signal

In the usual NIR noninvasive detection methods, the light source is regarded as the incident light on the tissues. According to Eq.
(3), I_max is the incident light of the PCAB, and I_min is the minimum transmitted light of the PCAB.

1) Since the light-source signal is much stronger than I0, the two cannot be detected with the same channel of the detection instrument, which would introduce additional error into the system. The DS only needs the PPG of I0 to be recorded, which neatly avoids this error.

2) The light path from the light source to I0 includes the hair, epidermis, dermis, subcutaneous tissue, muscle, bone and the blood. These factors differ from person to person and exert a considerable influence on the measurement result. The DS is defined only on the OD of the PCAB; it can theoretically eliminate the individual discrepancy of the tissues other than the PCAB, which greatly increases the system accuracy.

3) The value of ∆OD depends only on (I_max - I_min)/I_min. The accuracy of this eigenvalue is more important than the accuracy of I_max and I_min themselves. This reduces the requirements on the stability of the system and the light source; it is not strictly necessary that the PPG channels at different wavelengths have the same gain. Thus the DS spectrometer can employ a multi-channel structure to improve the detection rate.

B. The DS spectrometer

To acquire the DS, we must scan the spectrum several times in one period of the photoelectric pulse wave (about 1 second) to obtain the PPG at every selected wavelength, so the DS spectrometer must support time-resolved operation. Normal spectrometers usually increase accuracy by long integration times; they are not suited to the DS. According to Eq. (3), the


dynamic spectrometer must identify the alternating component of the photoelectric pulse wave. These are two major differences between the dynamic spectrometer and normal spectrometers; the DS needs a special kind of spectrometer.

According to our study, (I_max - I_min) is less than 3% of I_max. To detect (I_max - I_min) with high accuracy, we need a DS spectrometer with high-performance parts. Fortunately, progress in electronic technology strongly supports our studies; for example, 18-bit ADCs with a 1 M sample/s transmission rate are available.

C. The algorithm for extracting the DS

The distinctive computing part of the DS spectrometer is the algorithm for extracting the DS. There are two ways to identify (I_max - I_min): in the time domain or in the frequency domain. The validity of the algorithm is very important to the system.

1) In the time domain, it is hard to sample the PPG exactly at the points of I_max and I_min; usually an interpolation method is used to estimate them. This way, the system must have an excellent transmission rate and accuracy.

2) In the frequency domain, the fast Fourier transform (FFT) can be used to analyse the frequency composition of the signal. Normally the frequency band of the pulse wave is within 0.1-20 Hz, so a digital band-pass filter implemented with the FFT is employed in the system.

Every signal can be expanded into a series by the Fourier transform, as shown in Eq. (4):

X(e^{jω}) = Σ_{t=-∞..∞} x(t) e^{-jωt}    (4)

The direct (DC) part of the series is

X(0) = (1/T) ∫_T x(t) dt    (5)

and the fundamental pulsating component is

X(1) = (1/T) ∫_T x(t) e^{-j2πft} dt    (6)

Thus we have

∆OD = f((I_min - I_max)/I_max) = f(k · X(1)/X(0))    (7)

Equation (7) implies that the DS can be calculated from the first two terms of the Fourier series of the PPG.
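Both extraction routes, the time-domain extremes of Eq. (3) and the frequency-domain ratio of Eq. (7), can be sketched in a few lines. The signal below is synthetic; the 25 SPS rate and the pulse frequency are illustrative assumptions, not measured data.

```python
import numpy as np

fs, f_pulse = 25.0, 1.2              # 25 SPS per channel, ~72 bpm pulse
t = np.arange(0, 5.0, 1.0 / fs)      # 5 s of samples, several pulse periods
i_dc, i_ac = 1.0, 0.015              # transmitted DC level and pulsatile dip
ppg = i_dc + i_ac * np.cos(2 * np.pi * f_pulse * t)

# Time domain, Eq. (3): Delta-OD from the extreme samples directly
dod_time = np.log10(ppg.max() / ppg.min())

# Frequency domain, Eq. (7): ratio of fundamental X(1) to DC X(0)
X = np.fft.rfft(ppg) / len(ppg)      # normalised one-sided spectrum
dc = X[0].real                       # X(0): direct component
kbin = np.argmax(np.abs(X[1:])) + 1  # bin of the fundamental
ac_over_dc = 2 * np.abs(X[kbin]) / dc
print(dod_time, ac_over_dc)          # both reflect the ~1.5% pulsatile part
```

The time-domain estimate suffers when no sample lands exactly on the extremes, which is the interpolation problem mentioned in 1); the spectral ratio is insensitive to that, supporting the modest 25 SPS per-channel rate.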
A 25 SPS sampling rate per channel is acceptable for the frequency-domain algorithm.

IV. DISCUSSION

The principle and measurement method of the DS are discussed in this paper. It is proved theoretically that the DS method can indeed eliminate most of the interference of the individual discrepancy. Noninvasive measurement of blood compositions may be realized without the influence of individual discrepancy using dynamic spectroscopy; this will have a great methodological effect on the development of noninvasive blood-composition measurement. The measurement method of the DS is also studied in this paper: the DS can be acquired by a simple system with linear circuits. The promise of the DS method lies in its potential to provide absolute quantitation of hemodynamic variables. The study of the DS still has a long way to go; much work remains to be done.

REFERENCES
[1] Wabomba M J, Small G W and Arnold M A 2003 Evaluation of selectivity and robustness of near-infrared glucose measurements based on short-scan Fourier transform infrared interferograms. Analytica Chimica Acta 490 325-340
[2] Lafrance D, Lands L C and Burns D H 2003 Measurement of lactate in whole human blood with near-infrared transmission spectroscopy. Talanta 60 635-641
[3] Chaurand P, Schwartz S A and Caprioli R M 2002 Imaging mass spectrometry: a new tool to investigate the spatial organization of peptides and proteins in mammalian tissue sections. Current Opinion in Chemical Biology 6 676-681
[4] Delpy D T, Cope M, van der Zee P, Arridge S, Wray S and Wyatt J 1988 Estimation of optical pathlength through tissue from direct time of flight measurement. Phys. Med.
Biol. 33 1433-1442
[5] Kline J A, Hernandez-Nino J, Newgard C D, Cowles D N, Jackson R E and Courtney D M 2003 Use of pulse oximetry to predict in-hospital complications in normotensive patients with pulmonary embolism. The American Journal of Medicine 115 203-208
[6] Heise H M 2000 Glucose, in vivo assay of. Encyclopedia of Analytical Chemistry
[7] Li G, Wang Y, Lin L, Liu Y L and Li X X 2004 An excellent non-invasive blood component measure method. Life Science Instruments (in Chinese) 2-5 33-35
[8] Li G, Liu Y L, Lin L, Li X X and Wang Y 2004 Dynamic spectroscopy for noninvasive measurement of blood composition. 3rd International Symposium on Instrumentation Science and Technology, Aug 18-22, Xi'an, China, 3-0875-0880


Fuzzy Enhancement of 2D Echocardiography

Zhang Hua (Biomedical Engineering Institute, Fuzhou University, Fuzhou, Fujian 350002; Email: babyg5758@sina.com.cn)

Abstract: This article introduces a method of image enhancement, fuzzy enhancement, based on the fuzzy sets theory proposed by Zadeh in 1965. This paper analyses the drawbacks of the Pal-King fuzzy enhancement and proposes an improved algorithm by defining a new membership function and a new fuzzy enhancement function. The article also applies a smoothing filter. After processing, the edge of the 2D echocardiogram shown on the screen is more distinct and easier to detect.

Keywords: echocardiography; image display; fuzzy enhancement; membership function

Echocardiography is a valuable non-invasive tool for imaging the heart and surrounding structures. It is used to evaluate cardiac chamber size, wall thickness, wall motion, valve configuration and motion, and the proximal great vessels. The usefulness of echocardiography is limited by the quality of the images, whose gray-scale contrast is very low; in general, the images must be enhanced.

Image enhancement increases the contrast of an image to bring out its details. There are many ways to enhance images, such as histogram equalization [1]. In this article I adopt fuzzy enhancement.

Fuzzy enhancement is based on fuzzy mathematics and is mathematically rigorous. L. A. Zadeh, an American cyberneticist, first proposed the fuzzy sets theory in 1965 [2]. After that, fuzzy mathematics developed rapidly and showed its value in theory and practice.
Especially in information processing, it is used to describe and deal with things in many fields such as automatic control, image processing, pattern recognition and machine vision [3]. In 1980, Pal and King put forward a single-layer fuzzy enhancement. This article uses an improved fuzzy enhancement built on an analysis of the Pal-King algorithm [4].

1. Fuzzy enhancement of Pal and King

The model of the fuzzy enhancement proposed by Pal and King maps the image through a fuzzy domain and back (image domain -> fuzzy domain -> image domain).

Suppose there is an M x N image whose gray levels take K values. According to the concept of fuzzy sets, it is expressed as the matrix X = [p_ij / x_ij] (M x N), where x_ij is the gray level of the pixel at (i, j) and p_ij expresses its degree of membership with respect to the maximum gray level. If the image is enhanced by the Pal-King algorithm, it is first mapped from the spatial domain to the fuzzy domain. The steps of the Pal-King algorithm are as follows:

1) Defining the membership function. In the Pal-King formulation the membership function is

p_ij = [1 + (K - 1 - x_ij)/F_d]^(-F_c)    (1)

Usually F_c = 2 and F_d = (K - 1 - I_c)/(2^(1/2) - 1). The parameter I_c is called the inflexion (crossover) point and is selected by experience. In the Pal-King algorithm, even if the gray value of a pixel is zero, the corresponding membership p_ij is greater than zero; that is, a <= p_ij <= 1 (a > 0).

2) Changing the membership function


The image is enhanced by transforming the membership p_ij with the following recursion, applied r times:

p_ij^(r) = 2 (p_ij^(r-1))^2            for 0 <= p_ij^(r-1) <= 0.5
p_ij^(r) = 1 - 2 (1 - p_ij^(r-1))^2    for 0.5 < p_ij^(r-1) <= 1    (2)
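The Pal-King membership of Eq. (1) and the contrast-intensification operator of Eq. (2), as reconstructed above, can be sketched as follows. This is an illustration, not the author's implementation; K = 256 and the inflexion I_c = 128 are assumed values.

```python
def pal_king_membership(x, K=256, Ic=128, Fc=2.0):
    """Eq. (1): membership of gray level x w.r.t. the maximum level K-1.
    Fd is chosen so the crossover (p = 0.5) falls at the inflexion Ic."""
    Fd = (K - 1 - Ic) / (2 ** (1 / Fc) - 1)
    return (1 + (K - 1 - x) / Fd) ** (-Fc)

def enhance(p, iterations=1):
    """Eq. (2): contrast intensification, pushing memberships away
    from 0.5 toward 0 or 1 on each pass."""
    for _ in range(iterations):
        p = 2 * p * p if p <= 0.5 else 1 - 2 * (1 - p) ** 2
    return p

print(pal_king_membership(128))   # 0.5 at the inflexion, by construction
print(enhance(0.3), enhance(0.8)) # 0.18 and 0.92: contrast is stretched
```

The drawback the paper targets is visible here: a zero-gray pixel still gets a nonzero membership, so repeated intensification distorts low-gray regions, which motivates the improved membership function.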


the Pal-King algorithm, and image 4 by histogram equalization. Image 2 is contaminated by noise and its edges are fuzzy. Comparing image 3 with images 2 and 4, we find that the gray contrast of image 3 is clearer and its noise suppression is better.

Image (1): original, low gray contrast. Image (2): Pal-King enhancement, noise disturbance, fuzzy edges. Image (3): improved algorithm, legible edges. Image (4): histogram equalization.

References
[1] Chang Dah-Chung, Wu Wen-Rong. Image contrast enhancement based on a histogram transformation of local standard deviation [J]. IEEE Trans. on Medical Imaging, 1998, 17(4): 518-531.
[2] Timothy J Ross. Fuzzy Logic with Engineering Applications. Electronic Industry Press, 2001.
[3] Li Zhoucheng, Guo Zhigang. Image multi-level fuzzy enhancement and edge drawing [J]. Fuzzy Systems and Mathematics, 2000, 14(4): 77-83.
[4] Pal S K, King R A. Image enhancement using fuzzy sets [J]. Electron. Lett., 1980, 16: 376-378.


The Research of an Omnidirectional M-mode Echocardiography System and its Clinical Application

Lin Qiang, Wu Wenji (Biomedical Engineering Institute, Fuzhou University, Fuzhou, Fujian, China, 350002)

Abstract-This paper introduces a new method, omnidirectional M-mode echocardiography 1, which can extract dynamic information from sequential echocardiograms. The method is based on rebuilding the gray(position) ~ time function along direction lines, so the system can obtain the motion track, velocity and acceleration of a part of a cardiac structure at any moment. The system can also display a set of omnidirectional M-mode echocardiograms, rebuilt from 2D echocardiography, synchronously with the ECG; the ECG supplies a time standard for the omnidirectional M-mode echocardiograms. The system has been applied in clinical practice for three years, and the results are scientific, accurate and credible.

Index terms-Echocardiography, Gray(Position) ~ Time Function, Motion information, Omnidirectional M-mode Echocardiography

I. INTRODUCTION

Echocardiograms are sequential images reflecting the shape and motion of a 2-D cardiac section. Static imaging in the echocardiography field has reached a remarkable level, and the fidelity and real-time quality of the motion information hidden in echocardiographic image sequences are now guaranteed to some extent, which improves the scientific soundness and accuracy of extracting dynamic information from them.
Our research is based on all the above facts.

An echocardiography machine builds M-mode echocardiograms along the ultrasound beam, which was once the main method for extracting dynamic information from echocardiographic image sequences. However, the confined and limited ultrasound beam directions make M-mode echocardiography inaccurate on the cardiac long-axis section in clinical application. In addition, the most evident character of the heart, which distinguishes it from other human tissues, is its motion and deformation, which sustains life. For a long time, extracting the dynamic information of cardiac structures has been an extremely important subject of exploration and research in clinical application [1]. Besides dynamic-information research on the originally inaccurate and confined M-mode echocardiography, methods such as blood-flow Doppler echocardiography (so-called color-flow scanning) and tissue Doppler echocardiography have also been invented and applied clinically.

1. This work is supported by a Fujian Province Significant Technology Project (No. 2003I019).

We rebuild the gray(position) ~ time waveform (function) along arbitrary direction lines from the sequential images. Because the image gray can, to some extent, stand for the boundary of cardiac structures, the gray ~ time waveform actually represents the stretched movement track of a part of a cardiac structure when the direction of the selected line is consistent with the motion direction at that part; its first-order differential stands for velocity, and its second-order differential for acceleration. This viewpoint is thus also an interpretation of the tissue velocity of tissue Doppler.
Furthermore, compared with tissue Doppler echocardiography, its own characteristics are as follows:
• Its velocity comes directly from the differential of the gray ~ time waveform (function), while the velocity of tissue Doppler is transformed indirectly from the frequency shift.
• Its direction is flexible, while the direction of tissue Doppler is confined and troublesome.
• The system can show a group of correlative omnidirectional M-mode echocardiograms along arbitrary direction lines synchronously with the ECG, which supplies a time standard for the omnidirectional M-mode echocardiograms.

II. REBUILDING OF THE GRAY(POSITION) ~ TIME WAVEFORM (FUNCTION) ON ARBITRARY DIRECTION LINES: OMNIDIRECTIONAL M-MODE ECHOCARDIOGRAPHY

Rebuilding the gray(position) ~ time waveform (function) on arbitrary direction lines is a method for capturing the track of a moving object and also a method for mining dynamic information (data) in sequential images [2,3]. When the direction line is an arbitrary line fixed in the coordinate system, it suits the situation where the moving range of the target is small and the target is easily captured with a certain straight line. The research target of echocardiography is the inner cardiac structure. The whole heart is several centimeters in size; what is more,


every part of the cardiac structure moves between one centimeter and several millimeters. Furthermore, the movement of cardiac structures, especially the dynamic parameters reflecting functional motion, is quasi-periodic, moving back and forth around a balance point, so it can be tracked with a straight line of fixed direction. The gray points in sequential frames are stretched into gray(position) ~ time waveforms (functions) corresponding to each selected direction line, and all these waveforms together rebuild a series of omnidirectional M-mode echocardiograms.

A sketch map of rebuilding the gray(position) ~ time waveform (function) on short-axis direction lines A, B, C and long-axis direction lines A', B', C' is illustrated in Fig. 1. If the selected direction lines A, B, C and A', B', C' are consistent with the actual motion direction of the target part in the short-axis or long-axis section, then, when the sample lines (coordinate positions) are fixed, the gray-intensity (position) data of the pixels, which come from the image database (memory address and content), can generate gray(position) ~ time waveforms combined with the corresponding temporal relationships. They present the movement track of the ventricular wall of the cardiac structure and form the foundation of extracting dynamic information.

function on straight direction lines in sequential images [3,4], combined with the rebuilding technique and the correlative hardware equipment.

Fig. 2 Overall structure of the Omnidirectional M-mode Echocardiography System

A dedicated PCI-bus image-capture card is introduced into the system; cooperating with the computer, it attains an image-data transmission speed of 40 MB per second.
Provided that one image frame has 1 MB of data, this is sufficient to transmit PAL (25 frames per second) or NTSC (30 frames per second) image data in real time.

The broken lines in Fig. 1 are the ultrasound beam directions, i.e. the originally confined and limited directions in which a B-scan machine captures dynamic information (gray intensity). As seen in the figure, the actual motion of the cardiac structure is difficult to track exactly in M-mode echocardiograms made by an old B-scan machine.

Fig. 1 Sketch map of rebuilding the gray(position) ~ time waveform (function) on the short axis (left) and long axis (right)

III. THE STRUCTURE OF THE OMNIDIRECTIONAL M-MODE ECHOCARDIOGRAPHY SYSTEM AND ITS UNDERSTANDING OF DYNAMIC INFORMATION

The overall structure of the Omnidirectional M-mode Echocardiography System is illustrated in Fig. 2. The system is based on the fundamental theory of rebuilding the gray(position) ~ time

Fig. 3 Flow chart of the rebuilding software for the gray ~ time function on arbitrary direction lines

Rebuilding software for the synchronous ECG is also introduced. By capturing the cardiogram at the bottom of the B-scan or color-scan echocardiogram, the software can extract the ECG signal and make it synchronous with the omnidirectional M-mode echocardiograms, so as to supply a time standard for further processing.


The most important component of the software is the rebuilding of the gray(position) ~ time waveform (function) on arbitrary direction lines. Its flow chart is illustrated in Fig. 3. The function I_n(x_ij, y_ij), obtained from the fifth block in Fig. 3, gives the gray-intensity value of the point at coordinate position (x_ij, y_ij) in the n-th frame, i.e. it is the gray(position) function. I_n(x_ij, y_ij) changes with the independent variable n; when the frame interval is T_frame, we obtain the functional relationship I_n(x_ij, y_ij) ~ nT_frame.

Because the coordinate position (x_ij, y_ij) is confined to the direction line r_ij, i.e.

  (x_ij, y_ij) ⇒ r_ij,

the above formula can be translated into the integrated form of the gray(position) ~ time function:

  I_{r_ij} = I_{r_ij}(n T_frame).

Furthermore, what we care about is the gray(position) ~ time function rather than the gray value of a single pixel, so the above formula is equivalent to

  r_ij = r_ij(n T_frame) = r_ij(t),  t = n T_frame,

where t is a discrete time variable sampled with period T_frame, and i = 1~m indexes the motion tracks of the m series of pixels corresponding to the m direction lines. This is the time-function (waveform) form of the motion track of the sampled pixels, named Omnidirectional M-mode Echocardiography, on which we detect the dynamic information.
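Once each track r_ij(t) is available as a discrete sequence sampled every T_frame, its time derivatives can be approximated numerically by finite differences; a minimal sketch with toy positions (PAL frame rate assumed, not the study's data):

```python
# Sketch: with positions r_ij sampled every T_frame seconds, dr/dt and
# d2r/dt2 are approximated by finite differences. `track` is the scalar
# position of one wall point along its direction line (toy values in mm).
import numpy as np

T_frame = 1.0 / 25.0                           # PAL: 25 frames per second
track = np.array([0.0, 1.0, 3.0, 6.0, 10.0])   # position samples r_ij(nT_frame)

velocity = np.gradient(track, T_frame)          # central differences -> mm/s
acceleration = np.gradient(velocity, T_frame)   # second derivative -> mm/s^2

print(velocity)   # e.g. velocity[0] = (1.0 - 0.0) / T_frame = 25.0 mm/s
```

`np.gradient` uses one-sided differences at the track ends and central differences in the interior, which keeps the derivative estimates second-order accurate where possible.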
The velocity and acceleration are then obtained as follows:

  d r_ij / dt = v_ij,
  d² r_ij / dt² = a_ij.

Fig. 4 Omnidirectional M-mode echocardiography with its synchronous ECG

As illustrated in Fig. 4, the system generated four omnidirectional M-mode echocardiograms, with the synchronous ECG as the time standard, on four arbitrary direction lines that were set manually.

Ⅳ. CLINICAL APPLICATION

We manufactured the LEJ-1 omnidirectional M-mode gray ~ time waveform system and applied it in clinical practice at the Fujian Provincial Research Institute for Cardiovascular Disease and Fujian Provincial Hospital. Some of the clinical measurements made at the Fujian Provincial Research Institute for Cardiovascular Disease are given in Tab. 1.

Tab. 1 Some clinical measurements made at the Fujian Provincial Research Institute for Cardiovascular Disease

Conclusion of the table: at locations where both omnidirectional M-mode echocardiography and original B-scan echocardiography can measure accurately, the paired-sample t statistics are smaller than the critical t value for 29 degrees of freedom at probability 0.05. The difference is therefore statistically insignificant, and the two sets of measurements agree with each other.

The system has been applied in clinical practice at the Fujian Provincial Research Institute for Cardiovascular Disease and the Chinese People's Liberation Army (CPLA) General Hospital (301 Hospital, Beijing) for 3 years. These institutions summarise data and experiment in their own clinical practice and research according to the functions of the system. Applications are as follows:

• The application of Omnidirectional M-mode Echocardiography in the assessment of left-ventricle regional diastolic function [5].
• The application of Omnidirectional M-mode Echocardiography in the observation


of the pulmonary-artery motion curve in pulmonary hypertension [6].
• The application of Omnidirectional M-mode Echocardiography in studying the relation between atrial septal motion and atrial load [7].
• etc.

The ultrasound department of the General Hospital of the CPLA has carried out detailed research on regional cardiac motion with the system. Hundreds of cases, covering normal and abnormal cardiovascular subjects of different ages, have been diagnosed. The application of Omnidirectional M-mode Echocardiography in the assessment of left-ventricle regional diastolic function may be recommended as a diagnostic method for detecting early heart degradation, in which global function is healthy but regional function is slightly degraded.

Ⅴ. CONCLUSION

The Omnidirectional M-mode Echocardiography System extends the rebuilding of the gray (position) ~ time waveform (function), formerly possible only on relatively simple fixed direction lines, to arbitrary lines in sequential echocardiography, and it won a National Patent of Invention in P.R. China. After continued clinical practice in several well-known domestic hospitals, we understand more profoundly that, unlike other human tissues, the heart's dynamic information of motion and distortion is more important than its static shape information. Groups of arbitrary direction lines can be set in any part of any structure, and the corresponding omnidirectional echocardiograms, with the ECG displayed below them, allow waveforms and temporal phases to be compared.

However, the motion of every part of the cardiac structure is complicated and correlated. Furthermore, the motion of a diseased heart is even more complicated and difficult to comprehend.
As a matter of fact, the research we have done is only a beginning, and many problems will emerge in the process. On the other hand, the newly invented methods and instruments will in turn help to explore the subject.

REFERENCES
[1] Lin Qiang. A Detecting Method for Contour-Based Optical Flow Field of Heart in Ultrasound B-Scan Images. Acta Electronica Sinica, 1996, Vol. 24, No. 4: 122-125.
[2] Lin Qiang, Jia Wenjing, Zhang Li. A Method for Detecting Dynamic Information of Sequential Images: Omnidirectional Gray~Time Waveform and its Applications in Echocardiography Images. Proceedings of CISST' 2001, July 2001, Las Vegas, U.S.A., Vol. II: 760-763.
[3] Lin Qiang, Jia Wenjing, Yang Xiuzhi. A Method for Mining Data of Sequential Images: Rebuilding of Gray(Position)~Time Function on Arbitrary Direction Lines. Proceedings of CISST' 2002, June 2002, Las Vegas, U.S.A., Vol. I: 3-6.
[4] Lin Qiang, Zhang Li, Jia Wenjing. The Realization of the Omnidirectional Gray~Time Waveform System and its Application in Echocardiography. Journal of Electronic Measurement and Instrument, June 2002, Vol. 16, No. 1: 70-75.
[5] Li Yue, Wang Qing, et al. Assessment of Left Ventricle Regional Diastolic Function by Omnidirectional M-mode Echocardiography: Study of a New Method and Index. Chinese J Med Imaging, 2002, Vol. 10, No. 3: 169-172.
[6] Li Yue, Wang Qing, Wen Chaoyang. Observation of the Pulmonary Artery Motion Curve in Pulmonary Hypertension with Anatomical M-mode Echocardiography. Chinese Journal of Ultrasonography, Oct 2003, Vol. 12, No. 10: 593-595.
[7] Guo Wei, Lu Lihong, Chen Bin. Relation between Atrial Septal Motion and Atrial Load Using Omnidirectional M-mode Echocardiography. Chinese Journal of Ultrasonography, Sep 2004, Vol. 13, No. 9: 593-595.


Analysis of Colour Images for Tissue Classification and Healing Assessment in Wound Care

H. Zheng*, D. Patterson, M. Galushka, L. Bradley
{*h.zheng, mg.galushka, wd.patterson}@ulster.ac.uk
lilian.bradley@ucht.n-i.nhs.uk

Abstract- Researchers have been motivated to develop a non-invasive measurement system to automatically monitor the wound-healing process, so as to reduce the workload of professionals, provide standardization, remove subjectivity, reduce costs, and improve the quality of care for patients. This article reviews colour-image analysis techniques for wound care and proposes a new computational approach to tissue classification for healing assessment. We investigate two different feature-extraction techniques and combine them with a case-based approach to tissue classification. Preliminary results show that, in terms of the optimal features to be used, Red, Green and Blue histograms are best for classification, and that the proposed case-based approach appears ideal for this type of task.

I. INTRODUCTION

Venous leg ulcers are a common problem in Western societies, with an average of 0.2% of the UK population affected by such wounds [1]. As leg ulcers may take many months to fully heal, the cost of treatment and the morbidity of patients are high; it is therefore important for clinicians and researchers to monitor treatments and outcomes in terms of healing if we are to provide optimal care. Venous ulcers have a non-uniform mixture of yellow tissue (slough), red granulation tissue and black necrotic tissue in the wound base.
The appearance of a wound offers important clues that can help with the diagnosis, the determination of severity, and the prognosis for healing. A high percentage of wet necrotic tissue or slough in the ulcer bed indicates that an ulcer is undergoing destructive changes, while a predominance of granular tissue may be indicative of tissue proliferation or healing [2]. The proportions of each tissue and the size of the wound boundary may be useful in assessing the progress of the wound [1]. Currently, the black-yellow-red colour model is used by clinicians as a descriptive tool. The interpretation of colour is a key element in this assessment. However, this interpretation is subjective and can vary from one clinician to another owing to their different experience of colour, their visual acuity, and the lighting in the assessment room. The differing interpretations can result in varying treatment plans. It is therefore worthwhile to develop a non-invasive measurement system to automatically monitor the wound-healing process, so as to reduce the workload of professionals, provide standardization, remove subjectivity, reduce costs, and improve the quality of care for patients. In order to build such a system, the development of an accurate and objective tissue-classification technique is the first and fundamental step.

Advances in digital technology and colour-image processing have facilitated the use of computational approaches in automatic wound-assessment systems. Researchers have proposed a number of techniques to process wound images; most focus on clustering and segmentation, or on classification using techniques such as snakes [6], neural networks [7] and support vector machines [8].
These techniques extract and use different feature sets, including RGB (Red, Green, Blue) histograms, HSI (Hue, Saturation, Intensity) histograms, HSV (Hue, Saturation, Value) histograms and textural features [6]-[14]. In this paper, we examine the applicability of RGB features and textural features for tissue classification, and the efficacy of a novel classification approach based on case-based reasoning (CBR) [4].

II. FEATURE EXTRACTION

Wound images were captured by a digital camera in 24-bit true colour at the Ulster Community & Hospitals HSS Trust, Belfast, Northern Ireland. Two types of feature can be obtained from colour images: (i) colour information and (ii) textural information. We therefore investigated extracting both types of feature and utilising them within separate case-based classifiers to determine which type is most effective for tissue classification.

A. Colour Systems

There are three commonly used colour models:

1. RGB model
The abbreviation "RGB" comes from the three primary colours. This is a convenient colour model for computer graphics because the human visual system works in a way that is similar, though not quite identical, to an RGB colour space.

2. Cyan-magenta-yellow-key (CMYK) model


The CMYK model uses the subtractive colour mixing of the printing process: it describes what kind of ink needs to be applied so that the light reflected from it produces a given colour. One starts with a white canvas and uses ink to subtract colour from white to create an image. CMYK stores ink values for cyan, magenta, yellow and black.

3. Hue-saturation-brightness (HSB) model
The HSB model defines a colour space in terms of three constituent components:
• Hue: specifies the dominant wavelength of the colour, such as red, blue, or yellow;
• Saturation: the "vibrancy" of the colour, also sometimes called its "purity". The lower the saturation of a colour, the more "grayness" is present and the more faded the colour will appear;
• Brightness: a visual perception along the black-to-white axis.

  F_j = {(h_max1, i_1), (h_max2, i_2), (h_max3, i_3)}, where j = {R, G, B}

For each histogram, i.e. the R (red), G (green) and B (blue) histograms, a low-pass filter is applied to smooth the signal, and the three highest peaks are then selected as the colour features. We therefore extract 9 features in total for each case (three peaks per channel, each recorded as a (height, position) pair). Fig. 1 illustrates the filtered curves and the three highest peaks detected in the histogram of each of the three colour channels.

While the CMYK colour space has not been used in wound-colour research, both RGB and HSB have been tested [2]. The RGB model has been a natural choice for wound research [10], [11], and research using HSB has been limited [12].
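The peak-based colour features defined above can be sketched as follows. This is our own minimal reading of the method: the moving-average filter width and the exact local-maximum peak definition are assumptions, not taken from the paper.

```python
# Sketch of the 9-feature extraction: per channel, compute the histogram,
# low-pass filter it, and keep the three highest peaks as (height, position)
# pairs. Filter width and peak definition are our assumptions.
import numpy as np

def three_peaks(channel, width=9):
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    kernel = np.ones(width) / width                 # moving-average low-pass filter
    smooth = np.convolve(hist, kernel, mode="same")
    # local maxima: at least as high as the left neighbour, above the right one
    idx = [i for i in range(1, 255)
           if smooth[i] >= smooth[i - 1] and smooth[i] > smooth[i + 1]]
    idx.sort(key=lambda i: smooth[i], reverse=True)
    return [(float(smooth[i]), i) for i in idx[:3]]  # (h_max, intensity) pairs

def rgb_features(image):
    """image: H x W x 3 uint8 array -> {channel: up to three (height, pos) pairs}."""
    return {c: three_peaks(image[..., k]) for k, c in enumerate("RGB")}

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
feats = rgb_features(img)
print({c: len(p) for c, p in feats.items()})
```

The returned dictionary corresponds to the feature set F_j above: one list of (h_max, i) pairs per colour channel j.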
Our previous research showed that different types of tissue exhibit different patterns in their R, G, B histograms, including the relative intensities and positions of the peaks [3], while the HSB system cannot pick up this information. It has been claimed that the RGB model is inadequate for colour processing because the value of each channel depends strongly on the light intensity; however, this problem can be addressed by standardised lighting and a chromatic transformation of the RGB model [9]. In this research, the RGB model is examined as the colour model for feature extraction.

B. RGB Histogram Features

In the RGB model, a colour is expressed in terms that define the amounts of Red, Green and Blue light it contains. In a 24-bit colour image (as used in this study), pure red would be represented as 255/000/000, where 255 represents the highest level of red light possible, untainted by any green (000) or blue (000) light. Various combinations of the Red, Green and Blue values allow us to define over 16 million colours.

Mekkes et al. [10] applied a three-dimensional RGB histogram to analyse colour and concluded that simple RGB thresholds were not able to distinguish between tissue types. Instead of using the three-dimensional RGB histogram, we define the colour features F_j given above to obtain the relationship information among the three colour channels [3], [4].

Fig. 1. Histograms for the three colour channels with a filtered curve and the three highest peaks detected: (a) red channel, (b) green channel and (c) blue channel [4].

C. Textural Features

Fourteen textural features were defined by Haralick et al. in [15]. A Spatial-Dependence Matrix (SDM) represents the angular relationships between neighbouring cells in the original image. In this research, four grey-tone SDMs are constructed for each colour channel, one for each of the 4 main angles (0°, 45°, 90°, 135°).
The three most significant features were selected for feature extraction: the angular second-moment feature, the contrast feature and the correlation feature.
• The angular second-moment feature f1, which is a measure of the homogeneity of an image;
• The contrast f2, which is a measure of the contrast or amount of local variation present in an image;
• and the correlation f3, which is a measure of grey-tone linear dependencies within the image.
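As a concrete sketch of these definitions, the following computes a grey-tone spatial-dependence (co-occurrence) matrix and the three features f1, f2, f3 for one channel and one angle (0°, i.e. horizontal neighbours). The reduction to eight grey tones and the symmetric normalisation are our simplifications, not the paper's exact settings.

```python
# Sketch: grey-tone spatial-dependence matrix (co-occurrence) and the three
# Haralick features named above, for one channel at angle 0 degrees.
import numpy as np

def glcm(channel, levels=8, dr=0, dc=1):
    q = (channel.astype(np.int64) * levels) // 256      # quantise to `levels` tones
    m = np.zeros((levels, levels))
    h, w = q.shape
    for r in range(h - dr):
        for c in range(w - dc):
            m[q[r, c], q[r + dr, c + dc]] += 1          # count neighbour pairs
    m += m.T                                            # symmetric co-occurrence
    return m / m.sum()                                  # normalise to probabilities

def haralick3(p):
    i, j = np.indices(p.shape)
    asm = (p ** 2).sum()                                # angular second moment f1
    contrast = ((i - j) ** 2 * p).sum()                 # contrast f2
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    corr = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)  # correlation f3
    return asm, contrast, corr

# A perfectly banded 8x8 channel: each row one grey tone, so horizontal
# neighbours always agree -> zero contrast, correlation 1.
channel = np.arange(64, dtype=np.uint8).reshape(8, 8) * 4
f1, f2, f3 = haralick3(glcm(channel))
print(f1, f2, f3)  # 0.125 0.0 1.0 (up to floating-point rounding)
```

Repeating this for dr, dc offsets of (1, 1), (1, 0) and (1, -1) gives the 45°, 90° and 135° matrices mentioned in the text.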


The mean and standard deviation of the three features are used as the textural features for each channel [4] (Fig. 2). This provides 18 textural features in total for each case:

  F_j = {(µ1, σ1), (µ2, σ2), (µ3, σ3)}, where j = R, G, B

III. CBR METHODOLOGY

Fig. 3 illustrates the classification process using CBR. The classification procedure consists of three steps: feature extraction, retrieval and adaptation. A real wound image was input into the system and split into regions of cells by applying a grid structure during a pre-processing step (Fig. 3). Each cell element was 10x10 pixels in size and equates to a Region of Interest (ROI). Nine sets of RGB features and eighteen textural features were extracted from each ROI to form an RGB feature case base and a textural feature case base. The 10 closest cases, which correspond to the regions of interest with the most similar feature values, were selected from the relevant case base during the retrieval process. Retrieval was carried out by the nearest-neighbour algorithm (10-NN), and the majority class was used to classify the target ROI [4]. The classification results were compared to a "gold standard", which was the identification of tissue types defined by an experienced clinician. Fig. 4 shows this process diagrammatically: in the first image the wound is analysed and the grid overlaid (analysis); each cell (ROI) is then taken in turn and, using case knowledge, a prediction is made; finally, the system predictions for each ROI are compared to the expert's.

Fig. 3. CBR-based tissue classification for leg ulcer care

Fig. 2.
Textural feature extraction from the three channels: red, green and blue.
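The retrieval step of Section III (10-NN with a majority vote over the retrieved cases) can be sketched as follows. The toy case base, one-dimensional "feature vectors" and Euclidean metric here are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: classify a target ROI by retrieving its 10 nearest cases (Euclidean
# distance over feature vectors) and taking the majority tissue label.
import math
from collections import Counter

def classify_roi(target, case_base, k=10):
    """case_base: list of (feature_vector, tissue_label) pairs."""
    nearest = sorted(case_base, key=lambda c: math.dist(c[0], target))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 1-D case base: feature values near 0 labelled slough, near 1 granulation.
cases = ([((i / 10,), "s") for i in range(12)]
         + [((1 + i / 10,), "g") for i in range(12)])
print(classify_roi((0.1,), cases))  # majority of the 10 closest cases -> "s"
```

In the real system each case would instead hold the 9 RGB features or 18 textural features of one 10x10-pixel ROI.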


IV. EXPERIMENTS AND RESULTS

Two different feature-extraction techniques (RGB and Texture) were assessed in the experiments, which were split into two groups:

I. Three tissue types: s - slough, n - necrotic, g - granulation
II. Six tissue types: s - slough, n - necrotic, g - granulation, e - epithelial, t - tendon, h - haematoma

The first group of experiments considered only 3 tissue classes ('s, n, g') and the second group 6 ('s, n, g, e, t, h'). 30% of randomly selected ROIs were used to form a training case base, with the remaining 70% used as target cases. The same experiment was carried out 10 times, and the classification accuracy and Kappa value (also called the kappa coefficient) were calculated and used as measures of classification agreement. Results shown are for the target cases only; no training cases were included in the evaluation. The average prediction accuracy and average Kappa value obtained during the experiments are presented in Table I for each feature-extraction approach.

TABLE I
AVERAGE PREDICTION ACCURACY AND KAPPA TEST RESULTS

Experiment | 3 Classes ('s, n, g')      | 6 Classes ('s, n, g, e, t, h')
           | Accuracy %  | Kappa Value  | Accuracy %  | Kappa Value
RGB        | 89.93±0.57  | 0.80±0.011   | 86.8±0.43   | 0.75±0.017
Texture    | 58.3±3.84   | 0.20±0.07    | 54.4±2.67   | 0.20±0.084

Classification using RGB features produced both very high average accuracies and kappa values for the 3- and 6-class problems respectively, whereas using textural features produced much lower accuracies and kappa values.
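The kappa coefficient reported in Table I measures chance-corrected agreement between the classifier's predictions and the gold-standard labels; a minimal sketch of its computation (toy labels, not the study's data):

```python
# Sketch: Cohen's kappa between predicted and expert ("gold standard") labels.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
from collections import Counter

def cohen_kappa(truth, pred):
    n = len(truth)
    p_obs = sum(t == p for t, p in zip(truth, pred)) / n          # observed
    t_count, p_count = Counter(truth), Counter(pred)
    p_exp = sum(t_count[c] * p_count[c] for c in t_count) / (n * n)  # chance
    return (p_obs - p_exp) / (1 - p_exp)

truth = ["s", "s", "n", "g", "g", "g"]   # expert labels for six toy ROIs
pred  = ["s", "n", "n", "g", "g", "s"]   # classifier output
print(round(cohen_kappa(truth, pred), 3))  # 4/6 observed vs 1/3 chance -> 0.5
```

A kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why Table I reports it alongside raw accuracy.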
These results indicate that RGB features are better suited to multi-class tissue classification, whilst textural features are less effective for this type of classification. One explanation may be the small size of the ROIs (10x10 pixels). However, increasing the size of the ROI would not be appropriate for our purposes, as it would affect the classification accuracy owing to the potential for mixing two or more tissue types in the same ROI. A slight drop in accuracy between the experiments with 3 and 6 classes occurs due to the introduction into the classification process of an

Fig. 4. The procedure of CBR-based tissue classification.


additional 3 classes of tissue, which are poorly represented in the image. This will be overcome by the introduction of more cases from these classes.

These results also show that the CBR approach to tissue classification has advantages over other classifiers reported in the literature, such as logistic regression, Artificial Neural Networks (ANN), and Support Vector Machines (SVM), where the highest kappa value reported was 0.717 (SVM, in slough classification) [8]. That result was obtained on a different image, but it gives a rough benchmark for these techniques.

Future work will include using this method to identify and determine areas of one tissue type (by joining similar ROIs together) and monitoring patients' healing over time so as to define an optimum treatment plan for patients.

REFERENCES
[1] H. Oduncu, A. Hoppe, M. Clark, R.J. Williams and K. Harding, "Analysis of skin wound images using digital color image processing: a preliminary communication," Lower Extremity Wounds, vol. 3, no. 3, pp. 151-156, 2004.
[2] W. McGuiness, S.V. Dunn, T. Neild and M.J. Jones, "Developing an accurate system of measuring colour in a venous leg ulcer in order to assess healing," Journal of Wound Care, vol. 14, no. 6, June 2005.
[3] H. Zheng, L. Bradley, D. Patterson, M. Galushka and J. Winder, "New protocol for leg ulcer tissue classification from colour images," Proc. of the 26th Annual International Conference of the IEEE EMBS, San Francisco, CA, USA, September 1-5, 2004, pp. 1389-1392.
[4] M. Galushka, H. Zheng, D. Patterson, and L. Bradley, "Case-based tissue classification for monitoring leg ulcer healing," Proc.
of the 18th IEEE Symposium on Computer-Based Medical Systems, Dublin, Ireland, June 23-24, 2005, pp. 353-358.
[5] Aroushka L. James and Ardeshire Bayst, "Basic plastic surgery techniques and principles: chronic wound management," Student BMJ, vol. 11, pp. 406-407, 2003.
[6] T.D. Jones, "Improving the precision of leg ulcer area measurement with active contour models," PhD thesis, University of Glamorgan, 2000.
[7] B.A. Pinero, C. Serrano, and J.I. Acha, "Segmentation of burn images using the L*u*v* space and classification of their depths by colour and texture information," vol. 4684, pp. 1508-1515, SPIE, 2002.
[8] B. Belem, "Non-invasive wound assessment by image analysis," PhD thesis, University of Glamorgan, 2004.
[9] M. Herbin, F.X. Bon, A. Venot, F. Jeanlouis, M.L. Dubertret, and D. Strauch, "Assessment of healing kinetics through true color image processing," IEEE Transactions on Medical Imaging, vol. 12, no. 1, pp. 39-43, 1993.
[10] J. Mekkes and W. Westerhof, "Image processing in the study of wound healing," Clinics in Dermatology, vol. 13, no. 4, pp. 401-407, 1995.
[11] B.F. Jones and P. Plassman, "An instrument to measure the dimensions of skin wounds," IEEE Transactions on Biomedical Engineering, vol. 42, no. 5, pp. 464-470, 1995.
[12] G.L. Hansen, E.M. Sparrow, J.Y. Kokate, K.J. Leland and P.A. Iaizzo, "Wound status evaluation using color image processing," IEEE Transactions on Medical Imaging, vol. 16, no. 1, pp. 78-86, 1997.
[13] W.P. Berriss, "Automatic quantitative analysis of healing skin wounds using colour digital image processing," World Wide Wounds (online), 1997.
[14] W.P.
Berriss, "Acquisition of skin wound images and measurement of wound healing rate and status using colour image processing," PhD thesis, University of Reading, 2000.
[15] M. Herbin, A. Venot, J.Y. Devaux, and C. Piette, "Colour quantitation through image processing in skin," IEEE Transactions on Medical Imaging, vol. 9, no. 3, pp. 262-269, 1990.
[16] R.M. Haralick, K. Shanmugam and I. Dinstein, "Textural features for image classification," IEEE Trans. on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, Nov. 1973.


Telemedicine: techniques and applications


Telemedicine in Western Africa (the RAFT project)

Cheick Oumar Bagayoko, MD, Henning Müller, PhD, and Antoine Geissbuhler, MD
Service of Medical Informatics, Geneva University Hospitals, Geneva, Switzerland

Abstract— Objectives: to evaluate the feasibility, potential, problems and risks of an Internet-based telemedicine network in developing countries of Western Africa.

Methods: a project for the development of a national telemedicine network was initiated in Mali in 2001, in Mauritania in 2002 and in Morocco in 2003, using Internet-based technologies for distance learning and teleconsultations. Several other countries are currently in the process of joining this network.

Results: the telemedicine network has been in productive use for over 18 months and has enabled various collaboration channels, including North-South, South-South, and South-North distance learning and teleconsultations.
It has also unveiled a set of potential problems: a) the limited value of North-South collaborations when there are major differences in the available resources or the socio-cultural contexts of the collaborating parties; b) the risk of an induced digital divide if the periphery of the health system is not involved in the development of the network; and c) the need to develop local medical content-management skills.

Conclusions: the identified risks have to be taken into account when designing large-scale telemedicine projects in developing countries, and can be mitigated by fostering South-South collaboration channels, using satellite-based Internet connectivity in remote areas, and valuing local knowledge and its publication on-line.

Index Terms—telemedicine, developing countries, teleteaching

I. INTRODUCTION

Telemedicine tools enable the communication and sharing of medical information in electronic form, and thus facilitate access to remote expertise. A physician located far from a reference centre can consult his colleagues remotely in order to solve a difficult case, follow a continuing-education course over the Internet, or access medical information in digital libraries. The same tools can also be used to facilitate exchange between centres of medical expertise at a national or international level [2,3,8,11].

The potential of these tools is particularly significant in countries where specialists are rare and where distances and the quality of the transportation infrastructure hinder the movement of physicians and/or patients. Many of the French-speaking African countries are confronted with these problems.
In particular, large and sparsely populated countries such as Mali (twice the size of France, 11 million inhabitants) and Mauritania (twice the size of France, 3 million inhabitants) are concerned by this problem.

The usefulness and risks of these new communication and collaboration channels have to be assessed before large-scale programmes can be launched. Prior experiences in the field include ISDN-based videoconferencing for tele-cardiology and tele-neurology between Dakar and Saint-Louis in Senegal, a demonstration project (FISSA) on the use of satellite-based prenatal tele-ultrasound imaging between Dakar and the Tambacounda region in Senegal, and tele-radiology experiences in Mozambique. However, there is little published material on the use of low-bandwidth, Internet-based telemedicine applications, although there is significant investment in these technologies in developing countries.

The development of the national telemedicine network in Mali was used as a pilot case in order to get a better insight into these aspects.

Other research projects on telemedicine have addressed financial aspects and implications as well [9]. Most of the published literature actually concerns rather high-technology telemedicine, mostly teleradiology [13]. Other articles in the field include [12].

II. THE RAFT PROJECT

A. History of the Pilot Project

The pilot project in Mali, named «Keneya Blown» (the "health vestibule" in the Bambara language), was initiated in 2001 by the Mali University Medical School in Bamako, and financed by the Geneva government and the Geneva University Hospitals. Several goals were set: a) the development and use of Internet-based connections between the national and regional health-care institutions, b) the


implementation of basic services such as e-mail and a medical Web portal to train users, c) the implementation of a low-bandwidth, Internet-based distance-learning system, and d) the evaluation of the feasibility of long-distance collaborations for continuing medical education and tele-consultations.

The national network infrastructure is based on an IEEE 802.11b wireless metropolitan-area network in Bamako, and on the digital telephony network to reach regional hospitals outside the capital.

The e-mail and Web services are hosted on Linux-based servers [7], protected from the instability of the local electric power supply by three dozen truck batteries.

The distance-learning system e-cours [10] was developed at the University of Geneva and is specifically designed to minimise the use of network bandwidth while providing high-quality sound and display of didactic material, as well as feedback from the students to the teachers via instant messaging. The student can adjust the quality of the video image (the "talking head"), whose educational value is limited, in order to save resources. A bandwidth of 28 kbit/s is therefore sufficient to follow a course, enabling remote areas to participate in distance-learning activities. The system is based on free and widely available tools such as Linux, Apache, and Firefox.
The client is browser-based and works on most desktop operating systems (see Tables 1 and 2 for the technical requirements on the client and server side).

Similar projects using the same technologies are deployed in Mauritania, Morocco and Tunisia, and other countries such as Djibouti, Madagascar and Burkina Faso have also joined the project recently.

TABLE 1. HARDWARE AND SOFTWARE REQUIREMENTS OF THE DISTANCE LEARNING CLIENT.

• Operating system: Windows 95, 98, 2000, Mac OS, Linux, Solaris, or Irix;
• PC 166 MHz, 64 MB RAM;
• Sound card;
• Screen 1024x768 preferred, 800x600 possible;
• Netscape 4.0 or Internet Explorer 4.0 or later, Java enabled;
• 28 kbit/s Internet connection (56 kbit/s bandwidth necessary for video images);
• RealPlayer and Acrobat Reader plugins.

TABLE 2. HARDWARE REQUIREMENTS FOR THE DISTANCE LEARNING SERVER (WEBCASTING EQUIPMENT).

• PC 500 MHz, Windows 98, 128 MB RAM, sound card;
• Webcam server AXIS 2400;
• Microphone;
• Document video camera WolfVision or equivalent;
• Ethernet hub or switch, 10 or 100 Mbit/s.

III. RAFT OBJECTIVES

The RAFT project has a number of objectives. One of the main goals is to motivate talented medical professionals to practice in the regions where they are most needed: first-line medicine in rural areas far from the capital.
The availability of Internet access, and thus of continuing medical education, is expected to act as an incentive for this.

A second objective is to support the creation of educational content adapted to local realities in these countries, as most information published on the Internet is not applicable in such rural settings.

The development of a South-South telemedicine network among the countries of French-speaking Africa was another objective: not only to provide one-way access to continuing education, but also to give medical professionals the possibility to exchange their ideas with other local colleagues.

The integration into the network of the specific needs of primary care and the rural sector was also regarded as an important part of the project.

Another objective was to increase the human capacity to develop, maintain, and publish quality medical content with added local value, by creating local know-how in the publication of information. These technical skills are important for a sustainable use of the technology.

IV. ACTIVITIES OF RAFT

Over 18 months, the project in Mali has enabled the development of a functional national telemedicine network, which connects several health institutions in Bamako, Segou and Tombouctou, where medical teams have been trained in the use of Internet-based tools. The medical Web portal for publishing information is in place. Webcasting systems for distance learning have been implemented in Geneva and Bamako (for broadcasting). Continuing medical education courses are now broadcast on a weekly basis. Several tele-consultations have taken place, to follow up patients who were operated on in Geneva and then returned to Mali. The tele-consultation system is also used to select appropriate cases and guide their work-up in order to optimize patient evacuation to hospitals in the North or to prepare humanitarian missions.
The number of these consultations is currently limited by the number of partners in the network.

Various types of collaboration have been enabled by the project and are described in the following paragraphs:

• North-South tele-education: topics for post-graduate continuing medical education are requested by physicians in the RAFT countries; courses are then prepared by experts in Switzerland and broadcast over the Internet from Geneva. New courses are produced and broadcast on a weekly basis (every Thursday) on a variety of topics (see Table 3). The material is also stored on a web server and can be replayed from the medical Web portal. Typically, these courses are followed by 50 to 100 physicians and students in a specially-equipped auditorium in the university hospitals of the RAFT countries. They are also followed by smaller groups or individuals in the Segou and Timbuktu regional hospitals and in the rural community of Dimmbal, which is about 875 miles away from Bamako. Other French-speaking countries in Africa also join these courses: Senegal, Mauritania, Morocco, Tunisia, Ivory Coast, Burkina Faso, Madagascar, Niger and Djibouti.

• Webcasting of scientific conferences: several sessions of international conferences have also been broadcast, with simultaneous translation into French, in order to make the presentations accessible to colleagues in the RAFT countries, where the practice of the English language is still limited. Using the instant messaging feature of the system, remote participants can intervene and ask questions of the speakers.

• South-South tele-education: post-graduate and public health courses developed by the various health institutions in Bamako are webcast to regional hospitals in Mali and to other partners in Western Africa (see Figure 1). The content produced is anchored in local economic, epidemiological and cultural realities, and provides directly applicable information for the participants.

• South-North tele-education: medical students training in tropical medicine in Geneva follow courses and seminars organized by experts in Mali on topics such as leprosy or iodine deficiency.
The exposure to real-world problems and field experts enables a better understanding of the challenges of developing countries and of implementing health care and public health projects in unfamiliar settings.

• North-South tele-consultation: the same system used for tele-teaching can also be used to send high-quality images from one partner to another, enabling the remote examination of patients or the review of radiographic images. Tele-consultations are held regularly in medical fields where expertise is not available in Mali, such as neurosurgery or oncology (see Figure 2).

• South-South tele-consultation: physicians in regional hospitals can request second opinions or expert advice from their colleagues in the University Hospitals via e-mail. This can include the exchange of images obtained using digital still cameras or scanned radiographs.

• South-North tele-consultation: the case of a leprosy patient, whose treatment is followed in Geneva, has been discussed using the teleconsultation system. It enabled the expert in Bamako to adjust the treatment strategy.

Figure 1. Screenshot of a student view of a course webcast showing the teacher (top-left), the didactic documents (main window) and controls for the sound and the instant messaging tool (left column).

Figure 2. Screenshot of a teleconsultation session, showing the various documents available: the image of the patient and of the physicians, the radiographic images and other clinical data.
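The feasibility of such e-mail-based image exchange depends directly on file size and line speed. A quick estimate (the image sizes and the 28.8 kbit/s modem rate below are illustrative assumptions, not figures reported by the project):

```python
# Transfer-time estimate for store-and-forward image exchange over a
# low-bandwidth link; all input figures are illustrative assumptions.

def transfer_minutes(file_bytes: int, link_kbps: float) -> float:
    """Ideal transfer time in minutes, ignoring protocol overhead."""
    return file_bytes * 8 / (link_kbps * 1000) / 60

camera_photo = 300_000       # compressed digital-camera photo, assumed ~300 KB
scanned_xray = 2_000_000     # scanned radiograph, assumed ~2 MB

print(f"photo: {transfer_minutes(camera_photo, 28.8):.1f} min")
print(f"x-ray: {transfer_minutes(scanned_xray, 28.8):.1f} min")
# A compressed photo moves in under two minutes; a full scanned radiograph
# takes closer to ten, which is why compression and asynchronous
# (store-and-forward) exchange matter on such links.
```

This is why the e-mail-based, asynchronous model fits regional hospitals better than any attempt at real-time image streaming over the same lines.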


V. RESULTS

Overall, the RAFT project has been successfully implemented and has been in routine use for two years. The partners are satisfied, and several countries are joining the initiative regularly.

In total, over 100 tele-teaching sessions have been held from North to South and from South to North (see Table 3 for example topics). Twenty tele-consultations have been held from North to South and ten from South to North.

TABLE 3. TOPICS OF DISTANCE LEARNING COURSES REQUESTED BY THE PHYSICIANS IN THE RAFT PROJECT FROM THE COUNTRY UNIVERSITY HOSPITALS (AFRICA).

• Antiretroviral therapies in Africa;
• Iodine deficiency, public health strategies;
• Shoulder radiology;
• Arterial hypertension during pregnancy;
• Ultrasonic evaluation of arterio-venous fistulae;
• Herpes virus infections;
• Hospital hygiene;
• Thoracic traumatology;
• Tomodensitometry of ENT pathologies;
• Adjuvant therapies for breast cancer;
• Drug prescription and dispensation;
• Modern imaging of thoracic aneurysms;
• Investigation of brain tumors in children;
• Pharmacovigilance;
• Hydrocephaly;
• Obstetric vaginal fistula: surgical approach.

The experiences gained in the RAFT project highlight the need for a strategy that is adapted to local needs and realities. Applications need to help locally; otherwise they will not be used. In our case this means a strong reduction of the required bandwidth. The sound, together with the slides, turns out to be the important part; the video is of only minor importance.

VI. LESSONS LEARNED

At the infrastructure level, three kinds of problems were identified: a) the instability of the basic infrastructure, and in particular of the electric power supply, has caused many problems; b) the limitation of the international bandwidth, which is often misused, in particular by e-mail accounts hosted outside the country instead of within it (Mali has less bandwidth for the entire country than some Western universities); and c) the unavailability of reliable connectivity beyond the large cities.

These problems are improving with the overall development of the national infrastructure, although the deregulation movements in the ICT sector and the deployment of mobile telephony will, at least initially, favor the most profitable markets, which are not those where telemedicine tools are most needed. For instance, the focus on mobile telephony probably limits investments in the wired infrastructure that is needed for Internet access, especially in remote areas, to avoid expensive satellite connections. Similarly, the deployment of wireless metropolitan area networks rapidly provides the needed connectivity, but should probably be gradually replaced by a more sustainable wired, optical fiber-based communication infrastructure.

Basic communication tools such as e-mail are efficient and can be used productively. It is important to develop local capacity to implement and exploit these tools, not only to improve the technical expertise and the reliability of telemedicine applications, but also to limit the use of international bandwidth for information transfer that remains local. Most physicians in Africa still use US-based e-mail accounts for exchanges that remain local, due to a lack of reliable local e-mail services.

At the content level, there is a steady demand for North-South distance learning.
However, several topics for seminars requested by physicians in Africa could not be satisfactorily addressed by experts in Switzerland, due to major differences in diagnostic and therapeutic resources and techniques, and due to discrepancies in the cultural and social contexts. For instance, there is no magnetic resonance imaging capability in Mali, and the only CT scanner has been unavailable for months. Chemotherapeutic agents are too expensive and their handling requires unavailable expertise. Even though diagnostic and therapeutic strategies could be adapted, practical experience is lacking, and other axes for collaboration have to be found. A promising perspective is the fostering, through decentralized collaborative networks, of South-South exchanges of expertise. For example, there is neurosurgical expertise in Dakar, Senegal, a neighboring country of Mali. A teleconsultation between these two countries would make sense for two reasons: a) physicians in Senegal understand the context of Mali much better than those from Northern countries, and b) a patient requiring neurosurgical treatment would most likely be treated in Dakar rather than in Europe.

Beyond content, collaboration between the stakeholders of telemedicine applications must be organized, in order to guarantee the reliability, security, safety and timeliness of the exchange of sensitive information, in particular when the communication is asynchronous. Computer-supported collaborative work environments have been developed for this purpose. For example, the iPath project [5], developed by the Institute of Pathology in Basel, organizes "virtual medical communities", which replicate the organizational models of institutions in distributed collaboration networks, including clearly identified responsible experts and on-call schedules. These new forms of collaboration over distances, across institutions, and sometimes across national borders also raise legal, ethical and economic questions that are beyond the scope of this paper.

The "induced digital divide" is another potential problem. The centrifugal development of the communication infrastructure implies that the remote areas, where telemedicine tools could be most useful, will be served last. As in most developed countries, physicians are reluctant to practice in remote areas, and the ability to interact with colleagues and follow continuing medical education courses can be a significant incentive. Besides the accessibility problem, this also influences the content of the telemedicine tools, which will typically be geared initially towards tertiary care problems. It is therefore important to make sure that the needs of the periphery of the health system are taken into account in these projects. An efficient way to do so is to connect the periphery to the telemedicine network early. Satellite-based technologies for Internet access, such as mini-VSAT, are sufficiently affordable to consider developing remote access points before the ground infrastructure becomes available.

Finally, there is a need to develop local content management and other technical skills. Local medical content is a key to the acceptance and diffusion of health information, and is also essential for productive exchanges in a network of partners.
It enables the translation of global medical knowledge to the local realities, including the integration of traditional knowledge. Medical content management requires several levels of skills: technical skills for the creation and management of on-line material, medical librarian skills for appropriate content organization and validation, and specific skills related to the assessment of the quality and trustworthiness of the published information, including adherence to codes of conduct such as the HONcode [4].

Now is the time for the development of a telemedicine infrastructure in medical teaching centers and their connection to national and international computer networks, in order to foster multi-lateral medical expertise exchange with a predominant South-South orientation.

The deployment of Internet access points in rural areas (Dimmbal in Mali, see Figure 4), using satellite technology, enables not only telemedicine applications but also other tools for assisting integrated, multi-sectoral development. In particular, education, culture and the local economy can profit from these developments. The mini-VSAT technology, recently deployed over Western Africa, offers affordable, ADSL-like connectivity. Sustainable economic models, based on the successful experiences with Internet cafes in Africa, are being developed to foster the adoption of this infrastructure by rural communities.

VII. PERSPECTIVES

The use of asynchronous, collaborative environments enables the creation of virtual communities and the control of the workflow for obtaining expert advice or second opinions in a way that is compatible with the local care processes.
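The asynchronous model described above can be pictured as a simple case queue routed to an identified responsible expert per specialty, mirroring the on-call organization mentioned earlier. The following is a toy sketch with invented names, not the actual iPath implementation [5]:

```python
from collections import deque
from dataclasses import dataclass, field

# Toy store-and-forward consultation queue; every name here is invented
# for illustration and does not reflect the iPath code base.

@dataclass
class Case:
    patient_ref: str       # pseudonymized reference, never the full record
    specialty: str
    question: str
    replies: list = field(default_factory=list)

class ConsultationQueue:
    def __init__(self, on_call: dict):
        self.on_call = on_call          # specialty -> responsible expert
        self.pending = deque()

    def submit(self, case: Case) -> str:
        """Queue a case and return the expert it is routed to."""
        self.pending.append(case)
        return self.on_call.get(case.specialty, "general on-call")

    def answer(self, reply: str) -> Case:
        """The expert answers the oldest pending case (FIFO)."""
        case = self.pending.popleft()
        case.replies.append(reply)
        return case

q = ConsultationQueue({"neurosurgery": "expert-dakar"})
expert = q.submit(Case("ML-0042", "neurosurgery", "CT review requested"))
print(expert)
answered = q.answer("suggest referral to Dakar for surgery")
print(answered.replies)
```

The point of the structure is that the requester never depends on the expert being online at the same moment: the case waits in the queue, and the routing table encodes who is responsible for each specialty.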
The open-source tool developed for telepathology at the University of Basel [5] is being implemented, not only for telepathology, but also for radiology and dermatology.

Another important point is the development and maintenance of locally and culturally adapted medical content, in order to best serve local needs that are rarely covered by the medical resources available on the Internet. New tools are being developed: regionally adapted search engines, open-source approaches [14], and adapted ethical codes of conduct. The Cybertheses project [1] and the resources of the Health On the Net Foundation [4] are used to train physicians, medical documentation specialists and librarians.

Figure 3. Geographic distribution of the institutions participating in the RAFT project. Circles represent teaching institutions located in capitals or large cities. Squares denote remote access points (fixed or mobile) connected via satellite links.


Figure 4. Telemedicine in the first line: Dimmbal in Mali, 875 miles away from the capital, without telephone or electricity.

VIII. CONCLUSION

Telemedicine tools have an important role to play in the improvement of the quality and efficiency of health systems in developing countries. They offer new channels for communication and collaboration, and enable the dematerialization of several processes that are usually hindered by deficient physical infrastructures. They also carry risks, in particular the exchange of inappropriate or inadequate information, and the potential aggravation of the local digital divide between the cities and the rural areas. These risks must be examined when designing telemedicine projects and can probably be mitigated by the development of South-South communication channels, by the use of satellite-based technologies to incorporate remote areas into the process, and by fostering a culture and skills for local medical content management. These aspects are being further investigated by the RAFT project.

ACKNOWLEDGMENT

This project is supported by grants from the Geneva State government and the Geneva University Hospitals.

REFERENCES

[1] Publication of theses online: www.cybertheses.org
[2] Ganapathy K. Telemedicine and neurosciences in developing countries. Surgical Neurology 2002;58:388-94.
[3] Graham LE, Zimmerman M, Vassallo DJ, Patterson V, Swinfen P, Swinfen R, Wootton R. Telemedicine: the way ahead for medicine in the developing world. Tropical Doctor 2003;33:36-8.
[4] The Health On the Net Foundation: www.hon.ch
[5] Oberholzer M, Christen H, Haroske G, Helfrich M, Oberli H, Jundt G, Stauch G, Mihatsch M, Brauchli K. Modern telepathology: a distributed system with open standards. Current Problems in Dermatology 2003;32:102-14.
[6] Perednia DA, Allen A. Telemedicine technology and clinical applications. Journal of the American Medical Association 1995 Aug 9;274(6):461-2.
[7] The RAFT project on telemedicine in French-speaking Africa: www.keneya.org.ml
[8] Sarbadhikari SN. The state of medical informatics in India: a roadmap for optimal organization. Journal of Medical Systems 2005 Apr;29(2):125-41.
[9] Suri JS, Dowling A, Laxminarayan S, Singh S. Economic impact of telemedicine: a survey. Studies in Health Technology and Informatics 2005;114:140-56.
[10] Teleteaching at the University of Geneva: www.unige.ch/e-cours
[11] Wright D. Telemedicine and developing countries. A report of study group 2 of the ITU Development Sector. Journal of Telemedicine and Telecare 1998;4 Suppl 2:1-85.
[12] Zolfo M, Arnould L, Huyst V, Lynen L. Telemedicine for HIV/AIDS care in low-resource settings. Studies in Health Technology and Informatics 2005;114:18-22.
[13] Engelmann U, Schröter A, Baur U, Werner O, Schwab M, Müller H, Meinzer H-P. A three-generation model for teleradiology. IEEE Transactions on Information Technology in Biomedicine 1998;2(1):20-25.
[14] Meystre S, Müller H. Open source software in the biomedical domain: electronic health records and other useful applications. Swiss Medical Informatics 2005;55:3-15.


Digitally Integrated Teleradiology Network in Hamburg, Germany

Wolfgang Auffermann, MD
Beatrice Warncke
Jürgen Stettin, MD
Department of Physical Technique and Science
Hamburg University of Applied Sciences
Alte Holstenstraße 16, 21031 Hamburg, Germany

Armin Otterbach, Dipl.-Ing.
itec concepts gmbh, Hamburg, Germany
www.itec-concepts.de
www.radiologie-hbg.de/neu

Abstract - The realization of a completely integrated radiology network is a fascinating challenge in many aspects of the structure of the radiology workflow and its integration into the healthcare enterprise. The following report provides a summary of the experience gained during the realization of a comprehensive, state-of-the-art teleradiology network.

I. PLANNING AND PREPARATION

The realization of a completely integrated radiology network is a fascinating challenge on many levels of a radiology network enterprise. Implementing PACS is a process, not an event. The PACS process has a life cycle replete with key decision points. Learning how to handle these events in a holistic fashion can be a crucial factor in the success of a digital image management system [1]. The process comprises many factors on many levels:

• Filmless image transfer;
• A network of all radiology sites;
• Integration of home offices;
• Integration of referring physicians, including web-based referrals;
• Connection of hospital sites and IT systems;
• Optimization of workflow through all process levels;
• A reporting system from all sites with speech recognition.

At any point, a review may cause your PACS planning team to revisit a previous stage.
Accordingly, the basic sequence is:

• Request for information (RFI), needs assessment and gap analysis;
• Cost-benefit analysis;
• Request for proposal (RFP) and analysis of vendor responses;
• Vendor selection and contract negotiation;
• Transition planning;
• Implementation management;
• Acceptance testing;
• System management, support, and quality assurance.

Issues can be resolved even after your PACS is up and running. But as with any process, the earlier you identify and address issues, the quicker, less costly, and less complex the resolution will be.

II. REALIZATION

For the realization, the important questions are how radiology appointments are set, who retrieves the patient, how patients are managed in radiology with and without film, and how quickly radiology releases patients to other caregivers.

There are many other strategic issues:

• How will imaging modalities interact with the PACS network?
• How will imaging sessions be identified?
• How will notes be associated with images?
• How will image display quality be ensured long term, so that physicians maintain high confidence in image accuracy?
• How will imaging and related data tie into the integrated patient record, now and in the future?
• How will you manage physician training and provide ongoing user support from the medical staff's perspective?
• How flexible and scalable will your PACS solution be in the future?

Practical issues in our case included the following topics:

• 4 sites for in-patient and out-patient services:
  o Hospital and Practice Bergedorf
  o Hospital and Practice Bethesda
  o Hospital and Practice Geesthacht
  o Hospital and Practice Mümmelmannsberg
• Network performance between sites: 2-4 Mbit/s (a must);
• 1 central database each for RIS and PACS;
• 2 STS (short-term storage) with prefetching to the dedicated STS (Bethesda and Bergedorf);
• HL7 interface between the HIS (Bethesda) and the RIS;
• DICOM Modality Worklist (DMWL) between the RIS and the modalities;


• Web distribution on each site, eViewer configuration;
• Geesthacht/Mümmelmannsberg:
  o Prefetching images to the RA600 and displaying them from a local disk;
  o Network routers to all other sites;
  o Secure network access points for home offices, external and internal referrers;
• Hospital/Practice Bethesda:
  o Saving images from the modality on the RA600 and sending them to the STS Bergedorf (during the night);
• Site-independent speech recognition;
• DICOM Print in Bergedorf and Bethesda;
• Connection to Healthnet/Breastnet Hamburg.

III. SYSTEM CONFIGURATION

The overall system [Fig. 1] is defined by the following major building blocks:

• Hospital/Practice Bergedorf:
  o Central RIS and PACS server;
  o Web server for image distribution;
  o Long-term archive (DVD archive);
  o 100 Mbit/s LAN for modalities and workstations;
  o Central WAN node;
• Hospital/Practice Bethesda:
  o RIS and PACS server, short-term archive;
  o 100 Mbit/s LAN for modalities and workstations;
  o Network router to Bergedorf;
• Hospital/Practice Geesthacht:
  o Network router to Bergedorf;
• Hospital/Practice Mümmelmannsberg:
  o Network router to Bergedorf.

Fig. 1: System Configuration.

The wide area network (WAN) connecting the four hospital sites is based on 2-4 Mbit/s leased-line connections. Since the costs of leased lines are essentially defined by performance and distance, a compromise between cost and performance has to be found.

Network security is one more essential topic: to keep the network secure, all border entry points are equipped with firewall appliances. Secure network access for teleradiology users - home offices, external and internal referrers - is based on web-based virtual private network (VPN) technology [2]. System and network setup, maintenance and administration procedures quickly become critical for the radiology enterprise; it is a must to integrate manufacturer and on-site teams [3].

IV. WEB BASED PATIENT SCHEDULING

Web-based patient registration, order transmission and scheduling are used:

• Search for free appointments depending on the examination type;
• Direct booking of appointments in the scheduler by referrers;
• Display of reports and images (the integrated Centricity Radiology Web is required);
• Centricity RIS grants user rights;
• Time slots for appointment booking depending on the examination type.

The integrated order entry module yields complete transparency of the internal and external referral structure. This transparency is delivered instantly and can be monitored statistically. It ideally complements the quality management system in many aspects.

V. DIGITAL IMAGE ACQUISITION

The application of PACS relates to many complicated factors in hospitals, such as financial status, equipment condition, and quality of personnel. Primary digital image acquisition or secondary digital image conversion is a prerequisite for the function and full integration of the system. We were lucky to be able to install a completely new digital radiology ensemble simultaneously with the PACS/RIS system.
In most other radiology departments, the modernisation of imaging equipment is a tedious challenge that can be done only stepwise, including digital interfaces to older modalities such as mobile x-ray units and conventional mammography.

VI. INTERFACE TO OTHER INFORMATION SYSTEMS

As an isolated, stand-alone solution, the RIS-PACS in many hospitals has not yet brought significant benefit to the radiology department or to the hospital as a whole; it is therefore necessary to address these problems. This is true for China [9] as well as for any other country [3,4]. The interconnection between the PACS and the Hospital Information System (HIS) is especially important.

DICOM 3.0 has become the standard for PACS and imaging equipment. Most equipment supports the DICOM 3.0 protocol, which has become an important factor promoting the fast development of PACS. The standardization of PACS includes not only the DICOM 3.0 protocol but also the standardization of basic patient information, examination items, modality types, diagnosis items, among others. Standardization is a potential problem restricting the further development of PACS. First, DICOM 3.0 initially did not offer a Chinese version, and the worklists of most modalities did not support Chinese [9]. As a result, records such as basic patient information were duplicated between the PACS and the HIS. This discord leads to situations where physicians cannot obtain digital image data and patient reports from the HIS. In China, applications of PACS and HIS in each hospital were isolated from one another. With a common language missing, the communication of medical image data between hospitals is difficult, leading to a huge waste of time and money in the system [3,4].

VII. WEB BASED IMAGE DISTRIBUTION - TELERADIOLOGY

Teleradiology is another of the system's key functions:

• A web server is used for image distribution to internal and external referring physicians.
• Web-based integration of home offices, even at remote sites.

Referring physicians also have ready access to patient data. They no longer have to wait to receive a patient's chart or images from another department; everything can be viewed electronically. This is a significant improvement in the whole process. The physician can simply go to the nearest PC, whether in the facility, in their office, or even at home, call up the exam online and review it without having to go to the radiologists' reading room, which is really disruptive to their workflow. This is where Web technology is a huge advantage [5]. It can integrate with the EMR for the presentation of the record. Instead of having to find a specialized workstation, you can go to any PC and bring up a review-quality image. Immediate reporting of the findings of a radiology investigation, even at remote sites, requires a fast teleradiology network system.

VIII. QUALIFICATION OF STAFF

Initially we had progressed well into the PACS deployment process with a tactical approach before realizing that no provision had been made for PACS training of physicians and other staff members, or for managing workflow changes [1]. Radiologists, technicians, referring physicians, nurses, aides, and administrative staff members all have unique training requirements. Additionally, radiologists and physicians need time to adapt to the new system.
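The "review-quality image" delivered to any PC is typically produced by a window/level mapping that compresses raw detector values into 8-bit display values before they reach the browser. A minimal sketch of this transform (the function name and the example window values are our own illustration, not taken from any specific PACS product):

```python
def apply_window(pixel_value: int, center: float, width: float) -> int:
    """Map a raw pixel value to an 8-bit display value using a linear
    window/level transform, as used when rendering radiographs for review.
    The center/width values below are illustrative, not clinical presets."""
    lo = center - width / 2
    hi = center + width / 2
    if pixel_value <= lo:
        return 0
    if pixel_value >= hi:
        return 255
    return round((pixel_value - lo) / width * 255)

# Example window: center 2000, width 3000 (illustrative values).
raw = [10, 1250, 2750, 4000]
display = [apply_window(v, center=2000, width=3000) for v in raw]
print(display)
```

Sending the pre-windowed 8-bit result instead of the full-depth raw data is part of what makes browser-based review on ordinary PCs and modest network links practical.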


Proceedings of International Conference on Medical Imaging and Telemedicine (MIT 2005), August 16-19, Wuyi Mountain, China

[Fig. 2: Workflow Optimization. Diagram showing the patient path from HIS registration (order from referrals, patient raw data) through RIS scheduling, waiting room, documentation and RIS billing, with RIS status transitions (waiting, scheduled, started, finished), PACS archiving (receipt of images from the modality, synchronization, long-term archiving), web-server image distribution to referrals, RIS speech recognition, and the RIS/PACS report returned to the HIS for report/billing.]

IX. OPTIMIZATION OF WORKFLOW

Optimization of the workflow is the key to productivity gains at each step of the process. Prior to the implementation of an IT system, radiology departments should first develop good workflow and management, then improve the efficiency of their imaging equipment, upgrade the overall qualification of their staff, and advance the level of medical treatment, education and scientific research. Conversion from traditional to digital radiology should be an essential step in this process. This is true for Europe [2, 4] as well as for China [9]. Currently, many radiology departments are set apart from other medical imaging departments in the hospital [8]. Application of a PACS breaks this isolation and achieves information sharing between all departments of the hospital. Secondly, radiology departments will rebuild their workflow and management models according to the characteristics of PACS, establish training mechanisms, develop corresponding rules of operation, including emergency procedures such as single-workstation operation, and define responsibilities.

X.
ECONOMIC SYNERGIES

The higher the integration richness of the implemented PACS-RIS system, the more money and time can be saved at each step of the radiology workflow process [7]. There are many steps at which money is saved by integration:
• No films: saves between 5 and 20 Euro per patient.
• Centralization of the digital typing system saves workforce at the remote sites without loss of time.
• Web scheduling saves workforce at the reception desk and in the telephone center.
• A teleradiology service can spare an on-site radiologist, increasing physician productivity.
• An automated billing system saves interest by accelerating cash flow.
• The PACS eliminates the physical image archive and the archiving workforce.
• Web-based image distribution saves personnel for packing and mailing as well as packaging material and mailing fees.
• Automatic speech recognition saves typing workforce.
• PACS-RIS integration enhances physicians' productivity and saves radiologists' time.

Acceleration of the diagnostic process and improved information distribution save money by shortening inpatient time and increase the profitability of the entire healthcare enterprise [3], for instance by accelerating the time needed for the final diagnosis prior to the start of therapy.
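The per-patient film saving quoted above (5 to 20 Euro) can be turned into a rough annual figure. The exam volume below is an assumed illustrative number, not one from the text:

```python
# Back-of-envelope film-cost savings. Only the 5-20 Euro per-patient
# range comes from the text; the annual exam volume is a made-up example.
patients_per_year = 40_000          # assumed annual examination volume
saving_low, saving_high = 5, 20     # Euro saved per patient (from the text)

annual_low = patients_per_year * saving_low
annual_high = patients_per_year * saving_high
print(f"Film savings: {annual_low:,} - {annual_high:,} Euro per year")
# prints: Film savings: 200,000 - 800,000 Euro per year
```

Even at the low end of the range, film elimination alone is a substantial item in the savings list above.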


XI. DISCUSSION

This is, to our knowledge, the first completely digitally integrated and web-connected radiology department in Germany [1]. From our nine months of experience we want to address common mistakes as well as significant success factors associated with the implementation of radiology IT systems. Others have contributed to our knowledge by sharing the mistakes they had made. There is a whole pile of mistakes that can be made in all areas and at all stages of the implementation process [6].

After living for nine months with the digital IT environment, the workflow has changed for the entire group of around 100 co-workers, from the call center to the reception desk, the technicians and the doctors, up to the back office and controlling. While we feel the gain in productivity and efficiency at many levels of the working chain, such as physician performance, patient satisfaction, availability of reference and historical images, and acceleration of image distribution, we are still far from taking full advantage of the capabilities of an all-inclusive, holistic IT system. Our goal is that the system should, at some point in the near future, reflect and materialize the physical process chain as well as the intellectual workflow in a way that is adapted to the individual preferences and tastes of each physician and of the other co-workers.

With our global approach embracing the entire company with its different sites and connected clients, we were moving toward a strategic enterprise solution including all patient data with reports as well as the entire imaging.
For others, who build on previously installed discrete proprietary mini-PACS units, there is no simple, straightforward way to integrate these into an enterprise solution [5, 8].

Finally, integrating PACS and RIS enhances the benefits of each separately. A good system improves workflow, patient care, and operational efficiency. Over time, a facility should even save money once its information management systems are linked to filmless image acquisition.

Factors of success:
• Early integration of the staff into the conception
• High competence of partners and professional decision making and realization
• Integration of an independent consultant during the concept, contract and realization phases
• Pragmatic solution finding during realization
• Lasting and continuous development of the workflow towards integration of the new system

Pioneers who adopted PACS early often did not take the holistic approach, but installed a mosaic of many island solutions, not integrated among themselves. If you are migrating to PACS now, you have the luxury of transforming the hindsight of these pioneers into precious foresight.

The bottom line is that you cannot plan and implement PACS in a tactical manner and expect optimal results. Because PACS affects such a broad spectrum of professionals in numerous ways, it demands a holistic, strategic approach. Providers who adhere to this basic wisdom will avoid the need for remediation. And those in need of remediation can best do so by honouring the underlying wisdom of a holistic approach, applied at key decision points and throughout the PACS life cycle.

XII. REFERENCES

[1] W. Auffermann, A. Otterbach, "Zukunftsorientiertes Radiologiezentrum in Hamburg," RIS PACS J., vol. 2, pp. 18-19, May 2005.
[2] A. Dzubur, R. Stern-Padovan, G. Mrak, "PACS/RIS Nation-Wide Connections in a European Candidate Country: The Case of Croatia," E.U.T., Edizioni Università di Trieste, pp. 15-18, 2004.
[3] M. Eichelberg, A. Kassner, B. B. Wein, P. Mildenberger, "IHE - die neuen IT-Integrationsprofile und ihr Bezug zu RIS und PACS," Rö. Fo., vol. 176, pp. 119-120, May 2005.
[4] M. Eichelberg, E. Poiseau, B. Wein, J. Riesmeier, "Integrating the Healthcare Enterprise: Die IHE-Initiative in Europa," Telemedizinführer Deutschland, pp. 230-234, 2004.
[5] A. Koch, "Multi-site PACS für den Einsatz an mehreren Standorten," RIS PACS J., vol. 2, pp. 66-67, May 2005.
[6] W.-D. Lorenz, "RIS-PACS-Investitionen. Kardinalfehler und Erfolgsrezepte," RIS PACS J., vol. 2, pp. 8-15, May 2005.
[7] S. Popp, M. Brönner, "Technische und preisliche Entwicklungen der EDV-Komponenten," RIS PACS J., vol. 2, pp. 54-55, May 2005.
[8] W. Riedel, "Digitale Archivierung im Krankenhaus," RIS PACS J., vol. 2, pp. 58-59, May 2005.
[9] T. Wang, P. Gao, "Some problems with picture archiving and communication system application in China," Chinese Medical J., vol. 116, pp. 643-644, May 2003.

Send correspondence to wolfgang.auffermann@t-online.de


APPLICATION OF LONG-RANGE FETAL HEART RATE MONITOR IN CHINA

XICHENG XIE
EDAN INSTRUMENTS, INC.

ABSTRACT

Fetal movement and the fetal heart rate can be monitored to estimate the oxygen supply status of the fetus. When the gestation is beyond eight months, the pregnant woman should go to hospital for regular fetal heart rate monitoring once a week, but this is limited to fixed times and does not allow monitoring of the fetal heart rate on demand. Long-range fetal heart rate monitoring is a new method based on a traditional ultrasound Doppler fetal heart detector, the telephone network and a computer. The computer, located on the hospital side, collects the Doppler signal from the detector through a telephone with an audio line-in function, analyses the signal and produces calculated results; the doctor reviews the results and gives a diagnosis. This method is an essential complement to traditional fetal monitoring. It has been widely used in large and medium-sized cities in China over the last two years; at the same time, it raises a number of legal problems.

Keywords: Long-range monitoring, fetal heart rate, telephone network

Contents:
1. The origin of long-range fetal heart rate monitoring
2. The function of long-range fetal heart rate monitoring
3. The clinical significance of long-range fetal monitoring
4. The MFM-TMS long-range fetal heart rate monitoring working flowchart
5. Prospects and problems

1.
The origin of long-range fetal heart rate monitoring

The health of the fetus affects the mother and the family throughout the ten months of gestation, but many factors endanger fetal health. Based on clinical experience, about one third of fetuses may suffer cord entanglement of the neck or of other parts of the body. If the coiling is too tight, the blood supply may be cut off; the outcome of cord entanglement is unpredictable. Besides this, abnormal fetal movement and environmental or psychological factors can all cause fetal distress and even endanger the fetal life.

With the rapid development of medical electronics and signal processing technology, electronic fetal monitoring has become a routine examination during gestation; ultrasound Doppler fetal heart monitoring and fetal heart rate examination have been accepted by many doctors. When the gestation is beyond eight months, the pregnant woman should go to hospital for routine fetal heart rate monitoring once a week, which is very important for estimating the fetal health status. However, this is limited to fixed times and has unavoidable limitations: the pregnant woman has to endure an exhausting bus journey or wait for monitoring in hospital, and this monitoring method is expensive for long-term use, so it is impossible to monitor the fetal heart rate on demand when the pregnant woman feels that the fetus is uncomfortable. In short, timely monitoring is inconvenient for the pregnant woman when she senses that something is wrong.

With the gradual growth of CTI (Computer Telephony Integration) technology, we at Edan Instruments have designed the MFM-TMS long-range fetal monitoring system, which combines hospital monitoring with telephone communication. It transmits the Doppler fetal heart signal to the hospital monitoring center over a traditional analog telephone line; the signal is analyzed automatically by the computer, and an experienced obstetrician then reviews the collected data and the historical record saved on the computer, estimates the fetal health status and advises the pregnant woman. The configuration is as follows.

The long-range fetal monitoring system takes full account of the situation of the pregnant woman: it can accept monitoring information from a woman staying at home, sparing her the bus ride and the waiting in hospital, and the mother can check on the health of the baby at any moment. It provides a reliable safeguard for pregnant women, especially high-risk pregnancies, but it remains a supplement to traditional fetal monitoring and fetal heart rate examination. The long-range fetal monitoring system is a screening tool to aid the healthcare professional and should not be used in place of normal fetal monitoring.

2.
The function of long-range fetal heart rate monitoring

[System diagram: the MFM-TMS host at the doctor's side is connected through the telephone network to the SONOTRAX detector at the pregnant woman's side.]

The whole system is based on the traditional telephone network, connecting the doctor with the pregnant woman, and can be divided into two parts: the family side and the hospital side.

The family side consists of a SONOTRAX (ultrasound Doppler fetal heart detector) and a modified telephone with an audio line input. The SONOTRAX is a portable Doppler monitor made by Edan; it can be used as an ultrasonic Doppler FHR detector and is convenient to carry and use, so the mother can examine herself whenever required. The long-range monitoring link is completed by connecting the detector to the telephone with the audio cable.

The pregnant woman should first find the fetal heart position with the ultrasonic pocket Doppler, then connect the audio cable and dial the hospital's telephone number; the fetal heart Doppler audio signal is then transmitted to the hospital. During the transmission the position of the fetal heart may move, and the pregnant woman should adjust the position of the probe in time to achieve the best monitoring effect. This is the crucial step for good monitoring: the operation by the pregnant woman directly affects the signal quality of the collected fetal heart rate, and hence the automatically analyzed FHR trend.
A guidance doctor on the hospital side can direct the operation of the pregnant woman to improve the monitoring quality. Furthermore, it is necessary to train the pregnant woman in correct operation, since the best monitoring effect can be ensured only when she masters the operation skilfully.

On the hospital side there is a host computer for the long-range monitoring system; it carries out the long-range monitoring function once the telephone cable is connected and the monitoring software is started. The software system works automatically around the clock: the pregnant woman can dial the hospital at any moment and the system carries out the monitoring function by itself, and when the doctor is on duty he can deal with any new undiagnosed curve. When a pregnant woman dials the hospital and a doctor is on duty, the doctor may start the real-time guidance function and hear the real-time FHR audio. If the pregnant woman cannot find the fetal heart position correctly, the doctor can guide her over the telephone and help her find the best position for a clear fetal heart Doppler audio signal, improving the monitoring quality. The recording and the fetal heart rate document are produced automatically and placed in the file listing for the doctor to diagnose. After diagnosis the system can dial back automatically: if the doctor permits, it dials the pregnant woman's telephone and sends the diagnostic results to her.

From the above introduction we can summarize the two monitoring modes of the long-range monitoring system:
First: auto-monitoring, working 24 hours a day; the pregnant woman carries out the monitoring by herself, whenever required.
Second: man-guided monitoring, in which the doctor guides the pregnant woman's operation over the telephone.

For good monitoring quality, the pregnant woman should pay attention to food, avoid hunger, tiredness, tension and so on, and keep the environment quiet. Fetal heart monitoring is affected by many factors: for example, the fetus will show tachycardia if the pregnant woman has a fever, and may show bradycardia if she takes medicine such as a sedative.

3. The clinical significance of long-range fetal monitoring

The advantage of MFM-TMS long-range fetal monitoring is that the pregnant woman need not go to hospital: she can transmit the status signal of the fetus over the telephone while staying at home and complete the monitoring of the fetus. The doctor can notify her in time if any abnormality is found, which will reduce the fetal death rate. The system is mainly aimed at pregnant women in the later months of gestation, before they are admitted to hospital.

Who needs special long-range monitoring? Many high-risk pregnant women, including those whose previous pregnancy or labour went badly, for example with a stillborn baby, and those with related diseases of gestation such as hypertension, diabetes, anaemia, hepatitis, nephritis and so on.
Also included are abnormalities of gestation such as cord entanglement, hydramnios or oligohydramnios.

In fact, normal pregnant women can also use this kind of health-care tool from about 35 weeks of gestation; abnormal or high-risk pregnancies can be monitored from about 32 weeks, and some even from about 28 weeks.

There are, however, several differences in quality between long-range monitoring and hospital monitoring: first, the operation by the pregnant woman is not professional; second, transmission of the fetal heart signal over the telephone network may affect the Doppler signal quality; finally, the technology for calculating the fetal heart rate is still improving.

Long-range monitoring is a useful supplement to the traditional routine fetal examination, but it cannot replace it: even when long-range monitoring is used, the pregnant woman should still go to hospital for normal fetal heart rate monitoring as required, and hospitals should not exaggerate the capability of the monitoring merely for economic benefit.

4. The MFM-TMS long-range fetal heart rate monitoring working flowchart

In the MFM-TMS, the ultrasonic pocket Doppler produces the Doppler audio signal of the fetal heart movement. One path goes to the speaker of the ultrasonic pocket Doppler; the other is transmitted through the telephone network to the host computer of the long-range fetal monitoring system on the hospital side. Using CTI technology, the host computer converts the fetal heart Doppler signal into digital form and saves it; this is an important function of the software system of the long-range monitoring.
The digital Doppler audio signal is processed by filtering, noise reduction and re-sampling to suit the autocorrelation algorithm; autocorrelation is a general signal-processing method for extracting a periodic signal from a noisy original signal. Finally, the processed audio signal undergoes the autocorrelation computation to yield the fetal heart rate, and through continuous recording and continuous calculation an FHR curve is obtained.

The audio signal flow chart of the long-range monitoring is the following:

[Audio signal flow chart: the ultrasonic pocket Doppler, the pregnant woman's telephone, the host computer and the doctor are linked by numbered channels 1-7. The fetal heart sound always passes through (channels 1-4-6 to the doctor, channels 1-2 to the pregnant woman); the pregnant woman hears the doctor and the fetal heart; the host computer records the fetal heart sound and the pregnant woman's voice; the pregnant woman's voice can be cut off with the monitor key, and the doctor's voice does not affect the recording and can also be cut off.]

5. Prospects and problems

With the community medical healthcare system under construction in China, the fetal monitor will gradually enter every family with a pregnant woman. The ultrasonic pocket Doppler can be hired by the pregnant woman, who signs a leasehold contract with the hospital for a small fee.

In the hospital, the obstetric monitoring center is on duty at all times; it can ensure the connection with the pregnant woman and give her advice concerning the fetus.
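The autocorrelation-based rate estimation described in Section 4 can be sketched in a few lines. This is an illustrative stand-in with a synthetic pulse train and an assumed 100 Hz envelope sample rate, not the MFM-TMS implementation:

```python
# Minimal autocorrelation rate estimator, as sketched in Section 4.
# The real system filters, denoises and re-samples the Doppler audio first;
# here a clean synthetic pulse train stands in for that envelope.
FS = 100                              # assumed envelope sample rate, Hz
TRUE_BPM = 150                        # synthetic fetal heart rate
period = int(FS * 60 / TRUE_BPM)      # samples per beat (40 here)

# Synthetic envelope: one narrow pulse per beat over ~6 seconds.
n = 6 * FS
signal = [1.0 if i % period == 0 else 0.0 for i in range(n)]

def estimate_bpm(x, fs, lo_bpm=60, hi_bpm=240):
    """Pick the lag with maximal autocorrelation inside a plausible FHR range."""
    lo_lag = int(fs * 60 / hi_bpm)
    hi_lag = int(fs * 60 / lo_bpm)
    best_lag, best_val = lo_lag, float("-inf")
    for lag in range(lo_lag, hi_lag + 1):
        val = sum(x[i] * x[i + lag] for i in range(len(x) - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return 60.0 * fs / best_lag

print(round(estimate_bpm(signal, FS)))  # prints 150
```

With a real Doppler envelope, the filtering and re-sampling described in the text would precede this step, and repeating the estimate over successive windows yields the continuous FHR curve.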
The pregnant woman can also monitor the fetus at any moment when she feels that the fetus is uncomfortable: she can dial the hospital phone and transmit the fetal heart audio to the doctor by telephone, and if the doctor finds an abnormality he can notify her in time to come to hospital for diagnosis. Built on the foundation of the present routine examination, the fetus can thus be monitored at any moment, as required, at little cost.

Long-range monitoring carries risk. There are many factors we cannot control during long-range monitoring, and it is hard to avoid unexpected events; although the doctor will communicate well with the patient in advance, in some circumstances mistakes still occur. By carrying out this service we can save the information on the fetal heart beat from beyond 28 weeks until the stage before labour, and in this way we can obtain the whole monitoring record.

At present there is no law or statute permitting the use of this technology. For long-range monitoring and long-range medical treatment, the manufacturers still have no common interface standard for their products, and there is no industry criterion either.




A physiology multi-parameter telemonitoring system based on Internet

Du Fangfang, Zhang Song, Bai Yanping, Wu Shuicai
Biomedical Engineering Center, Beijing University of Technology, Beijing 100022, China

Abstract - This paper describes an Internet-based system for physiological multi-parameter telemonitoring. The system is composed of physiological multi-parameter signal collectors, personal computers and a server. We adopt a peer-to-peer (P2P) model mixed with a client/server (C/S) model. The system provides real-time measurement, analysis and monitoring of many kinds of human physiological information, and realizes telemonitoring and telediagnosis of physiological data. It is suitable for use in community hospitals and at home.

Keywords - Physiology multi-parameter; Telemonitoring; Internet

I. INTRODUCTION

Telemonitoring means transmitting remote physiological information through a network in order to obtain a medical diagnosis. It shortens the distance between doctors and patients, and enables patients in remote or suburban areas to receive immediate medical treatment. Telemonitoring of important physiological information can assist treatment and raise an alarm in case of emergency. Telemonitoring of healthy people can forecast potential diseases and help to start treatment before it is too late.

Nowadays telemonitoring is developing rapidly, especially in some developed countries. There are many kinds of portable or PC-based telemonitoring systems, which mainly concentrate on monitoring just a few physiological parameters such as ECG and blood pressure [1]. These facilities mainly consist of monitors and a PC; even portable systems mostly use a PC as an intermediary to transmit data [2], usually over the telephone network or the Internet, and some portable systems use the mobile network (such as GPRS) [3]. In China, telemonitoring systems for home usage usually aim at one or several physiological parameters, and adopt the telephone network or a special network for data transmission.

In this paper, we introduce a new PC-based physiological multi-parameter telemonitoring system. It provides real-time monitoring of physiological information (ECG, respiration, SpO2, non-invasive blood pressure (NIBP), and temperature) at home. It also provides real-time transmission of these physiological data through the Internet, which makes telemonitoring and telediagnosis possible. The system is well suited for use in community hospitals and at home.

II. SYSTEM STRUCTURE AND WORKING MODE

The structure of this physiological multi-parameter telemonitoring system is shown in Figure 1; it is composed of physiological multi-parameter signal collectors, personal computers and a server.

[Fig. 1. System structure: information collectors connect to PCs, which communicate over the Internet with a server and its database.]

The system can be regarded as an expanded virtual hospital with the following functions: telemonitoring, case management, network transmission and so on. The system software consists of a client part and a server part. The server software provides data storage, searching and web browsing. The software of the home client part provides the following functions:

1) ECG monitoring: measures and displays one-lead ECG, calculates the real-time heart rate, and raises an alarm in case of abnormal ECG.

(This research was supported by the Education Committee Foundation (KB00190) of Beijing City, China.)


2) Non-invasive blood pressure (NIBP) measurement: provides intermittent blood pressure measurement for adult, pediatric and neonatal patients. It uses the oscillometric method to produce numeric values for systolic, diastolic and mean blood pressure.

3) SpO2 measurement: uses an oximeter sensor and provides a real-time pulse oximetry wave and the numeric value of blood oxygen saturation.

4) Real-time respiration monitoring: the respiration waveform is derived from changes in body impedance during breathing.

5) Real-time temperature measurement: provides temperature measurement with a resolution of 0.1 degree centigrade.

This system has two working modes. One is offline: the system works without the network, stores physiological data in a local database and gives an automatic diagnosis. The other is online: it provides instant communication between patients and doctors, so that doctors can perform telemonitoring and diagnosis through the Internet.

III. SYSTEM DESIGN

A. Hardware Design

The hardware used at home consists of a PC and a physiological multi-parameter information collector. The information collector is assembled from multi-parameter modules, including an ECG module, an NIBP module, an SpO2 module and a temperature module, and uses an RS232 interface to communicate with the PC. Its structure is shown in Figure 2.

[Fig. 2. Structure of the hardware used at home: ECG, NIBP, SpO2 and temperature modules feed the information collector, which connects to the PC via RS232.]

B. Software Design

The system software is written in C#, which is a cornerstone of Microsoft's .NET platform. Inheriting many features from Java and C++, C# is well suited to building high-performance Windows and Web applications. The system software consists of client software and server software. The structure of the client software is shown in Figure 3.

[Fig. 3. Structure of the software for home usage: data collection feeds data analysis, alarm, display, network transmission and the local database.]

We adopt a P2P model mixed with a C/S model as the network model. The C/S model centralizes most resources on the server to meet the client computers' needs, and is widely applied for its advantages such as fast response and data concentration, but it does not support real-time mass data transmission. The structure of the C/S model is shown in Figure 4.

[Fig. 4. The network structure of the C/S model: multiple clients connect to a central server.]

The P2P model has advantages in resource sharing [4][5]: distributed computation, resource search, instant messaging and online games [6]. Generally speaking, it allows peers to connect to each other without any intermediary. The structure of the P2P model is displayed in Figure 5.
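The C/S upload and the direct P2P link can both be illustrated with the same toy TCP exchange. The names, ports and payloads below are hypothetical stand-ins (the paper's own client is a C# application), so this is only a sketch of the two topologies:

```python
# Toy illustration of the two network modes (assumed names/payloads):
# C/S mode  - the home client uploads stored data to a central server;
# P2P mode  - patient and doctor endpoints talk to each other directly.
# Both reduce to the same TCP primitive; only the topology differs.
import socket
import threading

def start_endpoint(reply):
    """Bind a listener on an ephemeral port; serve one request in a thread."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)
        conn.sendall(reply + b":" + data)  # acknowledge with received payload
        conn.close()
        srv.close()

    t = threading.Thread(target=serve)
    t.start()
    return port, t

def send(port, payload):
    """Connect, send one payload, return the response."""
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", port))
    c.sendall(payload)
    resp = c.recv(1024)
    c.close()
    return resp

# C/S mode: offline data is uploaded to the archive server.
srv_port, srv_thread = start_endpoint(b"STORED")
ack = send(srv_port, b"ECG,HR=72")
srv_thread.join()

# P2P mode: the same primitive links patient and doctor directly.
doc_port, doc_thread = start_endpoint(b"SEEN")
reply = send(doc_port, b"SPO2=97")
doc_thread.join()

print(ack)    # prints b'STORED:ECG,HR=72'
print(reply)  # prints b'SEEN:SPO2=97'
```

In the C/S case the fixed server would also persist the payload to the database for later download by the doctor; in the P2P case the two endpoints exchange data in real time without that intermediary.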


[Fig. 5. The network structure of the P2P model: peers connect to one another directly over the Internet.]

Our system combines these two models and provides the following functions:
1) Via the P2P mode, patients and doctors can communicate directly, realizing telemonitoring;
2) Via the C/S mode, patients monitor themselves offline, store their physiological data and upload them to the server, and doctors can download the data from the server for diagnosis.

IV. RESULTS AND ANALYSIS

The main interface of the client software is shown in Figure 6. The waveforms of ECG, respiration and SpO2 are shown on the left of Figure 6, and the numeric values of heart rate, ST segment, SpO2, NIBP, temperature and breathing rate are shown on the right. The measuring mode and alarm criteria can easily be altered by clicking on the menu. The client software also provides a redisplay function, so any physiological data measured can be reviewed at any time.

In this paper, an Internet-based system for physiological multi-parameter telemonitoring at home has been described.
It provides real-time physiological data sampling, display, analysis and transmission, and is suitable for use in community hospitals and at home.

Fig. 6. Main interface of the client software.

REFERENCES

[1] V. Traver, E. Monton, J. L. Bayo, et al., "Multiagent Home Telecare Platform for Patients with Cardiac Disease," Computers in Cardiology, 2003, 30:117-120.

[2] R. Isais, K. Nguyen, G. Perez, et al., "A Low-cost Microcontroller-based Wireless ECG-Blood Pressure Telemonitor for Home Care," Proceedings of the 25th Annual International Conference of the IEEE EMBS, Cancun, Mexico, Sep. 17-21, 2003.

[3] Lin Wenggan and Lin Xin, "A system research of real-time ECG telemonitoring," Journal of Huazhong University of Science and Technology (Nature), suppl., 2003, 10(31):305-306.

[4] Neild, L. Lanae, Pargas, et al., "Investigating peer-to-peer systems for resource sharing within a small group of nodes," Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC'04).

[5] Sen, Subhabrata, Wang, et al., "Analyzing peer-to-peer traffic across large networks," ACM Transactions on Networking, 2004, 12(2):219-232.

[6] Yeung, Man Chun, Chung, et al., "Peer-to-peer video distribution over the Internet," Video Processing, TENCON 2003:359-363.


Wireless Communication Technologies in Mobile Telemedicine

Chen Min, College of Bioengineering of Chongqing University, Key Laboratory of Biomechanics & Tissue Engineering (Chongqing University), Ministry of Education
Lei Jianmei, College of Communication Engineering of Chongqing University, Chongqing 400044
Peng Chenglin, College of Bioengineering of Chongqing University, Key Laboratory of Biomechanics & Tissue Engineering (Chongqing University), Ministry of Education
Guo Xingming, College of Bioengineering of Chongqing University, Key Laboratory of Biomechanics & Tissue Engineering (Chongqing University), Ministry of Education

Abstract - The communication network constitutes the basis of a Telemedicine System (TS). Traditional TSs based on wired communication technologies are significantly limited in application by poor network conditions. However, the rapid recent development of wireless communication technologies has made it possible to build a wearable, movable Mobile Telemedicine System (MTS). In this article we propose the structure of an MTS and analyse the present state and trends of the available wireless communication technologies.

I. INTRODUCTION

A Telemedicine System (TS) provides health-care information services for doctors and patients at different locations through the use of remote communication technologies. Its primary purpose is to serve people in far-off areas with cheap yet advanced medical treatment. Most telemedicine applications today are based on wired communication technologies, yet in many places starved of medical services, telemedicine applications lack a foundation because of the poor condition of the wired communication network.
Today, the great progress of wireless communication technologies (including both middle/short-range and remote communication technologies), and especially of mobile communication technologies represented by digital cellular technology, has made it possible for the traditional wired-communication-based TS to transform into a wireless-communication-based Mobile Telemedicine System (MTS).

II. STRUCTURE OF MTS

A generalised TS is made up of a Medical Client, a Communication Network and a Medical Service Provider, as shown in Fig. 1.

Fig. 1. Structure of a Telemedicine System.

The Medical Client is a group of people in need of medical service, and the Medical Service Provider is the medical centre in the central city. The Communication Network builds a bridge between them.

An MTS differs from a traditional TS in that wireless rather than wired communication is used as the carrier of information transmission. In an MTS, short-range wireless communication technologies build the connection between the information collecting unit and the processing unit, while remote wireless communication technologies are used between Medical Service Clients and Medical Centres.

Fig. 2 shows the basic structure of an MTS client:

Fig. 2. Structure of a Mobile Telemedicine System.

The "Physiological Information Collecting" unit is made up of a physiological information sensor, an MCU and a short-range wireless communication unit.
It is worn on the patient's body to collect physiological information and builds a WPAN with the "Physiological Information Processing" unit so as to control information collection and transmission. Information transmission between the client and the medical centre is based on the mobile communication network, as shown in Fig. 3:

Fig. 3. The basic architecture of the link between the client side and the medical centre.

The MTS overcomes the most serious limitations that keep the traditional TS from popularisation: location and time. Through an MTS, people can get professional medical treatment whenever they need it, wherever they are. This is very important for remote home nursing, emergency medical aid in disasters and the improvement of the medical level in far-off areas.

III. WIRELESS COMMUNICATION TECHNOLOGIES

Table I shows some wireless communication technologies classified by their working distance:

Table I. Main wireless communication technologies

  Title   Technology Utilized   Working Range
  WPAN    ZigBee                10-75 m
          Bluetooth             10-100 m
          UWB                   10 m
  WLAN    802.11x               Indoor: 10-150 m; Outdoor: 300 m
  WWAN    2G, 2.5G, 3G          Across cities and areas

In Table I, WPAN and WLAN work at short range, while WWAN works at long range. The short-range and long-range communication technologies support and complement each other; together they construct a wireless communication system reachable both near and far, making it possible for users to transmit information at any time and any location.

A. Short-Range Wireless Communication Technologies [1]-[4]

Among the short-range wireless communication technologies, WPAN (Wireless Personal Area Network) has the shortest working range, normally within 10 m around the user's body, while WLAN (Wireless Local Area Network) typically has a working distance of several hundred metres.

Each of the three dominant WPAN technologies has its own application purpose. The standards for Bluetooth and ZigBee have been officially authorised; UWB is still under discussion and its standard is yet to be set. After several years of market competition, the IEEE 802.11x standards have come to dominate the global WLAN market.
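The two-tier arrangement in Table I — a short-range WPAN hop near the patient's body and a long-range WWAN hop to the medical centre, as described in Section II — can be sketched as follows (an illustrative sketch; the class and function names are ours, not the paper's):

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-hop MTS path:
# sensor --(WPAN, e.g. ZigBee)--> processing unit --(WWAN, e.g. GPRS)--> medical centre.

@dataclass
class Reading:
    kind: str        # e.g. "ECG", "temperature"
    value: float

class ProcessingUnit:
    """Carried unit: receives WPAN frames and forwards them over the WWAN."""
    def __init__(self, wwan_send):
        self.wwan_send = wwan_send          # long-range uplink callback
    def on_wpan_frame(self, reading: Reading):
        # In a real system this hop would arrive as a ZigBee frame;
        # here the short-range hop is reduced to a method call.
        self.wwan_send(f"{reading.kind}={reading.value}")

centre_log = []                             # stands in for the medical centre
unit = ProcessingUnit(centre_log.append)
unit.on_wpan_frame(Reading("ECG", 72.0))
print(centre_log)
```

The point of the split is that each hop can use the technology best suited to its range and power budget, which is exactly what the comparison below turns on.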
Table II compares these technologies:

Table II. Main technical characteristics of Bluetooth, UWB, ZigBee and 802.11x

  Title            Data Rate      Working Range                      Power Consumption
  Bluetooth        1 Mbps         10-100 m                           10-100 mW
  UWB              110-480 Mbps   10 m                               < 1 mW
  ZigBee           10-250 kbps    10-75 m                            < 1 mW
  802.11x (a,b,g)  5.5-54 Mbps    Indoor: 10-150 m; Outdoor: 300 m   100-350 mW

ZigBee is a low-power, low-cost wireless technology, typically suitable for industrial, home and medical circumstances that call for low-power, low-cost wireless communication. UWB is a technology with wide bandwidth, low power consumption and excellent interference resistance, aimed at high-speed audio, video and multimedia applications. The transmission rate of Bluetooth is only 1 Mbps, far lower than UWB; moreover, the cost and power consumption of Bluetooth are higher than those of UWB. WLAN is also power-hungry, which makes it unsuitable for the wearable user end of an MTS. However, WLAN is capable of multi-user communication through one AP, and its data rate is quite high. Therefore, WLAN is a very good choice for outdoor disaster emergency medical treatment and hospital sickroom monitoring.

The wearable user end of an MTS requires the short-range wireless communication technology to be low in power consumption, low in cost and easy to build into a network. At the same time, the ECG, temperature and blood-pressure information that the user end collects does not need a very high transmission data rate. It can be seen that among the short-range wireless technologies, ZigBee is the most suitable for a wearable user end. A wireless physiological information sensor network built with ZigBee needs no cables and can be worn comfortably on the patient's body at all times for the best monitoring effect.

B. Remote Wireless Communication Technologies [5]

Satellite, microwave and mobile communication technologies all fall into the category of remote wireless communication technologies. Both satellite and mobile communication can transmit information without restriction of time or location, but mobile communication is the first choice for an MTS because of its low cost; satellite communication can help to accomplish telemedicine under some special conditions.

Mobile communication is developing by leaps and bounds. At present, most cities in our country use 2G technologies, namely GSM or CDMA (IS-95), which provide only speech and low-rate data transmission services. 2.5G sits between 2G and 3G, providing services beyond speech, and the data rate of a 2.5G system is faster than that of a 2G system. In some countries, 3G trial systems are under test. These systems address the shortcomings of 2G and 2.5G: they can provide seamless global coverage and roaming, and support more data services. Table III compares their data rates.

Table III. Data transmission rates of mobile communication systems

  System  Data Rate
  2G      9.6 kbps (GSM)
  2.5G    171.2 kbps (GPRS); 316.8 kbps (CDMA2000 1x)
  3G      144 kbps (in a moving car); 384 kbps (at walking speed); 2 Mbps (indoor)

It will be some time before 3G systems are widely deployed. Therefore, only 2G and 2.5G wireless communication technologies are currently available for an MTS. The 2.5G systems include CDMA2000 1x and GPRS (General Packet Radio Service), both based on packet-switching technology. CDMA2000 1x performs better in data transmission, while GPRS has wider coverage and provides a more attractive service.
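The wearable-end selection argument above can be restated as a small, hedged sketch that encodes Table II's characteristics (the numeric values follow the table; the `low_cost` flags and the filtering rule are our illustrative simplifications, not part of the paper):

```python
# Table II characteristics, simplified to one upper bound per column.
WPAN_TECH = {
    "Bluetooth": {"rate_kbps": 1_000,   "power_mw": 100, "low_cost": False},
    "UWB":       {"rate_kbps": 480_000, "power_mw": 1,   "low_cost": False},
    "ZigBee":    {"rate_kbps": 250,     "power_mw": 1,   "low_cost": True},
}

def suitable_for_wearable(required_kbps, power_budget_mw):
    """Technologies meeting the data-rate need within the power budget at low cost."""
    return sorted(
        name for name, t in WPAN_TECH.items()
        if t["rate_kbps"] >= required_kbps
        and t["power_mw"] <= power_budget_mw
        and t["low_cost"]
    )

# ECG, temperature and blood-pressure streams need only a few kbps,
# and a body-worn node should stay in the ~1 mW class:
print(suitable_for_wearable(10, 1))   # ['ZigBee']
```

With the paper's criteria — modest data rate, ~1 mW power, low cost — the filter lands on ZigBee, matching the conclusion drawn in the text.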


Wide coverage, better support for data services (higher data rates) and support for simultaneous data and speech transmission all make the 2.5G mobile system the best communication carrier for an MTS.

IV. CONCLUSION

Wireless communication technologies constitute the basis of the MTS. In the last ten years, WPAN, WLAN and WWAN have all developed greatly, and at the same time the MTS has advanced from the laboratory to clinical experiments [6]-[9]. It can be seen that with the further development of wireless communication technologies, the MTS will help turn into reality the dream of being served by advanced medical treatment without limitation of time or location.

REFERENCES

[1] Karaoguz J. High-rate wireless personal area networks. IEEE Communications Magazine, 2001; 39(12):96.
[2] Callaway E, Gorday P, Hester L, et al. Home networking with IEEE 802.15.4: a developing standard for low-rate wireless personal area networks. IEEE Communications Magazine, 2002; 40(8):70.
[3] Johansson P, Kazantzidis M, Kapoor R, et al. Bluetooth: an enabler for personal area networking. IEEE Network, 2001; 15(5):28.
[4] Heegard C, Coffey JT, Gummadi S, et al. High performance wireless Ethernet. IEEE Communications Magazine, 2001; 39(11):64.
[5] Samjani AA. General Packet Radio Service (GPRS). IEEE Potentials, 2002; 21(2):12.
[6] Pavlopoulos S, Kyriacou E, Berler A, et al. A novel emergency telemedicine system based on wireless communication technology - AMBULANCE. IEEE Transactions on Information Technology in Biomedicine, 1998; 2(4):261.
[7] Istepanian RSH, Woodward B, Richards CI. Advances in telemedicine using mobile communications. Proceedings of the 23rd Annual EMBS International Conference of the IEEE, 2001; 4:355.
[8] Woodward B, Istepanian RSH, Richards CI. Design of a telemedicine system using a mobile telephone. IEEE Transactions on Information Technology in Biomedicine, 2001; 5(1):13.
[9] Voskarides SC, Pattichis CS, Istepanian RSH, et al. Practical evaluation of GPRS use in a telemedicine system in Cyprus. 4th International IEEE EMBS Special Topic Conference on Information Technology Applications in Biomedicine, 2003:39.


Smart Bundle Management Layer for Optimum Management of Co-existing Telemedicine Traffic Streams under Varying Channel Conditions in Heterogeneous Networks

F. Shaikh, A. Lasebae and G. Mapp
School of Computing Science, Middlesex University, White Hart Lane, London N11 1BA, United Kingdom
{f.shaikh, a.lasebae, g.mapp}@mdx.ac.uk

Abstract - Heterogeneous networks facilitate easy and cost-effective penetration of medical advice in both rural and urban areas. However, the disparate characteristics of different wireless networks lead to noticeable variations in network conditions when users roam among them, e.g. during vertical handovers. Telemedicine traffic consists of a variety of real-time and non-real-time traffic streams, each with a different set of Quality of Service requirements. This paper discusses the challenges and issues involved in the successful adoption of heterogeneous networks by wireless telemedicine applications. A proposal is presented for the development of a Smart Bundle Management (SBM) Layer for optimally managing co-existing traffic streams under varying channel conditions in heterogeneous networks. This layer acts as an interface between the applications and the underlying layers, maintaining a fair sharing of channel resources. The internal priority management algorithms are explained using Coloured Petri Nets. The paper lays the foundation for the development of strategies for efficient management of co-existing traffic streams across varying channel conditions.

I. INTRODUCTION

Telemedicine is the branch of medical science which deals with the provision of health-care over a distance with the help of communication technologies.
These technologies play a vital role in the successful deployment of telemedical applications by facilitating the transmission of specialised medical data among different locations. Rapid technological developments in the field of communication have resulted in an increase in the popularity and in the number of successful telemedical procedures [6]. However, this growth has been accompanied by an enormous rise in the demand for improved, high-speed communication standards capable of transmitting large amounts of complex medical data, e.g. detailed patient histories, streaming media and information for the reproduction of virtual environments. Speed and quality of information transfer play a significant role in the choice of network standards that provide satisfactory levels of Quality of Service at affordable prices.

Wireless technologies allow mobility and enable the penetration of health-care into rural and remote areas that are out of reach of wired infrastructure networks. These networking standards are particularly useful in enhancing pre-hospital care by providing timely access to expert medical advice [7, 8]. Studies show that in medical procedures such as percutaneous coronary intervention and fibrinolytic therapy in acute myocardial infarction, the survival benefits decline rapidly with increasing time delays [9]. The study conducted in [9] demonstrated a considerable reduction in evaluation delays when patient information was transmitted from the ambulance to the attending cardiologist's palmtop over a wireless channel.

Fig. 1. Heterogeneous 4G Network: a Wide Area Network encompassing a Metropolitan Area Network, a Company Area Network (Wireless LAN) and Personal Area Networks.

Wireless standards also assist healthcare professionals situated at remote locations to collaborate and confer with one another.
Thus, wireless networks possess a huge potential that could be harnessed to expand the radius of availability of health-care.

In the wireless field, considerable research is under way on the development of fourth-generation (4G) heterogeneous networks. The popular design of heterogeneous networks consists of a collection of different wired and wireless access technologies that converge down to a common all-IP-based core network [12]. These networks promise users ubiquitous and seamless networking anytime, anywhere, with access to multimedia and real-time applications on the move.

A vital requirement for telemedicine procedures is the reliable, uninterrupted delivery of information. Heterogeneous 4G networks will allow users to access a wide range of location-dependent services such as increased data rates and streaming media. Consider an ambulance equipped with wireless telemedicine devices and initially under the coverage area of an IEEE 802.11g Wireless LAN (WLAN) hotspot with data rates up to 54 Mbps [13]. Under the coverage area of the hotspot, the ambulance will transmit the telemedicine traffic streams at the available data rates. On the move, however, the device will hand off to the next best available network, e.g. GPRS, which offers data rates up to 13.4 kbps [13]; the connection can thus be maintained, albeit at lower data rates. Furthermore, if the ambulance travels into rural areas that do not fall under GPRS coverage, the device can hand off to the wide-area satellite network. Even though it may not be possible to transmit high-quality multimedia streams at all times, 4G networks offer more reliability by allowing healthcare professionals to roam freely between urban and rural areas and still remain connected to the main site through the best available network service.

The successful implementation of 4G involves resolving a number of issues. The convergence of networks with disparate characteristics results in many complexities at both the application and network level, particularly during conditions such as vertical handovers. Although the channel quality improves during a downward vertical handover (when the MH moves from a macro cell to a micro cell), it can degrade considerably during an upward vertical handover, which may result in connection loss.
To maintain an acceptable level of Quality of Service (QoS), it is vital to hide these complexities from applications while roaming among networks. Beyond this, maintaining a balanced flow of multi-class traffic across a wireless channel under varying network conditions, and the reconfigurability of terminal devices and network elements for dynamic selection of the best available service [14], are among the numerous issues for which researchers are striving to find optimum solutions in order to form a truly ubiquitous heterogeneous 4G network. Yet, despite the numerous challenges involved in its development, the fascinating idea of seamless connectivity anytime, anywhere makes the ubiquitous heterogeneous network an attractive field of research.

In this paper we discuss the challenges and issues involved in the successful adoption of heterogeneous networks in wireless telemedicine applications. We survey the achievements of some previous projects and explain the novelty of our work. We then propose the development of a Smart Bundle Management (SBM) Layer for the optimum management of multi-class streams over a heterogeneous link. The basic design of the internal priority mechanisms is presented using Coloured Petri Nets, and finally we conclude the paper with a discussion of future work.

II. TELEMEDICINE TRAFFIC CLASSIFICATION

Telemedicine traffic can be classified into different categories depending upon its QoS requirements:

A. Delay-intolerant traffic: This traffic consists of real-time audio and video streams that facilitate a high level of interactivity between healthcare professionals. It tolerates infrequent packet loss that does not distort the information quality beyond recognition. However, this traffic type imposes stringent delay constraints on the network, which are necessary to avoid jerky, non-continuous motion that impairs interactivity.
Tolerable one-way delays are up to 150 ms for 64 kbps real-time video and up to 400 ms for real-time audio [10]. The network must also manage in-order delivery of packets to the destination, as the re-ordering of packets in real-time applications is difficult due to limited receiver-side buffer space. Store-and-play media streams are less delay-sensitive than real-time streams, thanks to larger buffer space, but reduce interactivity among users. These QoS requirements lead to the choice of unreliable protocols such as UDP for the transfer of delay-intolerant traffic.

B. Loss-intolerant traffic: This traffic is tolerant of delay and jitter but intolerant of packet loss, e.g. emails, file transfers, and detailed, high-visual-quality images. Images such as X-rays and sonographic images require reliable packet delivery and reconstruction of the image at the receiver. Loss-intolerant traffic is transmitted using reliable protocols such as TCP that preserve end-to-end semantics. Vital signs contain information such as heart rate, blood pressure and ECG. To avoid distortions in the ECG readings, e.g. delays in cardiograms when transmitted directly (due to network congestion), we suggest capturing and transmitting these cardiograms in the form of images at short, regular intervals.

C. Loss- and delay-intolerant traffic: Although not widely required by applications today, this traffic imposes stringent constraints on delay, loss and throughput variation. With broadband wireless standards becoming more prevalent, extensive research is being carried out into projecting surgical expertise into hostile environments.
The US Air Force (USAF) Surgeon General and the USAF Directorate of Modernisation are exploring the potential of surgical robotics in military applications, mainly for deploying robotic devices in dangerous combat environments [8]. Tele-surgical data consists of specialised medical information pertaining to virtual reality and haptic feedback, which are very delay-sensitive; hence the enormous demand this type of traffic makes on network resources.

D. Delay- and loss-indifferent traffic: In this case, applications generate best-effort traffic and do not exert any demands on network resources. Instead they adjust their traffic patterns to match the prevailing network conditions. Best-effort service does not guarantee reliable or ordered delivery of packets. The packets are of the lowest priority and are not affected by jitter or throughput variations. Although best-effort service is not a suitable choice for many applications, sometimes it is the only option available for information transfer in networks exhibiting high error rates.

III. TRAFFIC MANAGEMENT CONCERNS IN HETEROGENEOUS NETWORKS

This section highlights the effects of vertical handovers on the quality of service of different traffic streams. The main concerns that arise are:

A. Disruption in traffic flows due to large variations in channel latencies during vertical handovers: Difficulties arise mainly when a mobile host (MH) roams between networks that exhibit large variations in performance parameters, e.g. delays, bandwidth and packet loss rate. With different network access technologies offering different data rates, an upward (high-bandwidth to low-bandwidth) vertical handover will result in a decrease in the data rate of traffic streams, which may degrade performance. An MH experiences delays when it moves into a new network and adjusts its behaviour to the new environment. Every network has different latency values, and variations in transmission delays and inter-packet arrival rates (jitter) can degrade the performance of delay-sensitive traffic.

In the case of TCP-based traffic, the disparate nature of different networks and packet-loss errors in wireless networks have led to the development of different TCP flavours, each aiming to deliver optimum performance in the network it was tailored for, e.g.
HighSpeed TCP for high-bandwidth links, STP for satellite links, and TCP NewReno and TCP Vegas for wireless networks [11]. Nevertheless, there still exists the need for a protocol that can effectively differentiate between congestion and packet loss in any wireless network.

Different networks exhibit different round-trip times (RTTs). Thus, after a vertical handover, the time required by a traffic stream to adjust to the new network is the sum of the network-layer latency (Tn) and the adaptation latency (ta), the delay incurred while the MH adapts the TCP connection to the new network [3]. D. Cottingham et al. [3] highlighted the fact that the TCP-connection adaptation latency can actually be longer than the total handover latency. The system experiences further degradation of performance if both sender and receiver are mobile. Moreover, although it may be possible to migrate a TCP connection onto a new interface during soft handovers, in reality the application performs a hard handover between the old and new TCP states [15]. Thus, due to the presence of variable RTTs and bit errors in a heterogeneous environment, it is very difficult for TCP to reach an optimal level of performance.

B. Management of co-existing traffic streams that compete for channel resources: A telemedicine procedure consists of co-existing TCP and non-TCP flows. As these flows compete for channel bandwidth, the available throughput decreases. Thus [2] shows

    µ = µ´ + φ

where µ is the total capacity rate of the wireless channel, µ´ the rate of the TCP flows and φ the rate of the non-TCP flows. Furthermore, the results of the analysis in [2] show that non-TCP flows seriously affect TCP flows when the TCP evolution reaches the congestion-avoidance sub-phase.
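The capacity relation µ = µ´ + φ can be made concrete with a short sketch (the numbers are illustrative, not taken from [2]): the residual TCP share is µ´ = µ − φ, and an upward vertical handover shrinks µ while φ persists, squeezing the per-flow TCP rate.

```python
def tcp_share(mu_kbps: float, phi_kbps: float, n_tcp_flows: int) -> float:
    """Idealised per-flow TCP rate: mu = mu' + phi, so mu' = mu - phi,
    split equally among the TCP flows (never negative)."""
    mu_prime = max(mu_kbps - phi_kbps, 0.0)
    return mu_prime / n_tcp_flows

# Before an upward handover: a WLAN-class channel of, say, 2000 kbps
# carrying 500 kbps of non-TCP (UDP video) traffic and 3 TCP flows:
print(tcp_share(2000, 500, 3))   # 500.0 kbps per TCP flow

# After an upward handover to a GPRS-class channel (illustrative 40 kbps),
# the same non-TCP load leaves nothing for the TCP flows:
print(tcp_share(40, 500, 3))     # 0.0
```

The equal-split assumption is of course optimistic; the fluid analysis in [2] shows the TCP flows fare even worse once congestion avoidance interacts with the non-TCP load.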
In a wireless environment, high packet error rates cause TCP to enter the slow-start phase frequently, preventing it from making maximum use of the available channel capacity. Upward vertical handovers cause a further decrease in µ, resulting in an even lower transfer rate per flow. Hence it is vital to manage these co-existing traffic streams efficiently to avoid disruption.

An MH in a heterogeneous network must be aware of all available network access technologies and be able to choose the right one based on application requirements. The MH must therefore have up-to-date knowledge of the quality of service of all available network access technologies.

IV. RELATED WORK

Earlier works have attempted to highlight the various challenges involved in the successful deployment of heterogeneous networks, and some have made valuable contributions by proposing solutions to overcome these challenges. In this section, we discuss the achievements of some relevant projects and highlight the issues that remain unsolved, especially the management of myriad traffic streams over heterogeneous links.

Guenkova-Luy et al. [5] proposed the development of an end-to-end negotiation protocol (E2ENP) for negotiating QoS parameters. This protocol defined specific criteria for the description and management of session-control data. However, although the model succeeded in reducing QoS renegotiation times in a LAN environment, the same results may be difficult to achieve on error-prone wireless links that exhibit large variations in round-trip times. Moreover, the E2ENP protocol did not address the technical challenges faced during the negotiation of QoS parameters at lower layers during vertical handovers.

Hsieh et al.
proposed a multi-state transport-layer solution called parallel TCP (pTCP), which aimed to provide an end-to-end approach for handling host mobility without any support from the underlying network infrastructure [4]. A connection was split across different interfaces in order to achieve higher data rates through aggregated bandwidth. However, the scope of this study was limited to the management of connection-oriented TCP traffic across heterogeneous networks and did not consider the performance of delay-intolerant multimedia traffic, which is based on UDP. As buffering would hamper interactivity, connection-splitting would be of no benefit in such scenarios. The paper also does not explain the fate of packets lost after being retransmitted on a new interface.

The study in [1] introduced Jitter-Based TCP (JTCP), a strategy that aimed to distinguish congestion from packet loss over wireless networks by studying inter-arrival jitter. The drawback of this approach was its complete dependence on RTTs: JTCP judges the current state of the wireless link based on the value of the previous RTT. Thus it could fail to distinguish congestion loss from wireless-link error loss during vertical handovers, when it comes into contact with wireless access technologies exhibiting large variations in RTTs.

F. Hu et al. [2] proposed the development of an analytical model for co-existing TCP and non-TCP traffic on wired-cum-wireless links. A valuable contribution of this study was a detailed analysis of the throughput performance of arbitrary-sized TCP connections based on an approximated fluid model. It provided some insight into the behavioural characteristics of TCP traffic in the presence of non-TCP traffic, and it forms the basis of some of our work. However, in order to avoid complexity in the analysis, the authors did not address the problems that arise in the wireless sub-network during handovers, especially those caused by varying RTTs.

The study in [16] proposed the development of a testbed platform for studying the behaviour of heterogeneous wireless networks.
It proposed several solutions, such as fast router advertisements, router advertisement caching and binding update bi-casting, to reduce latencies and packet loss during vertical handovers. A policy-based handover solution (PROTON) [20] aimed to provide a set of dynamically changing policies to help mobile devices adapt to network variations without incurring large delays. As these studies focused mainly on capturing network conditions to assist devices during handovers, the SBM Layer will rely on this set of mechanisms to inform it about prevailing channel conditions.

V. SMART BUNDLE MANAGEMENT LAYER

This section provides an overview of the design and functioning of the Smart Bundle Management (SBM) Layer. Residing above the Transport Layer in the network model, the SBM Layer adopts a fine-grained approach to the optimum management of co-existing traffic streams, to ensure minimum user disruption when a device roams among diverse access technologies. As the functionality of this layer is focused mainly on the behavioural patterns of traffic streams across diverse access links, its design is based on the following assumptions:

1. Vertical handovers are client-based soft handovers that take place at the mobile sender's site, while the receiver is stationary.
2. The MH is allowed to connect to the available networks.
3. The SBM Layer's decision-making mechanism is based on the feedback received from the lower layers about the available networks and their prevailing channel conditions, i.e. available bandwidth, packet loss rate and time before handover.
To achieve this, the SBM Layer would rely on existing models such as the IETF's PCIM [17], TRIGTRAN [18] and PROTON [20].

Technological constraints such as limited network capacity or coverage area may render a single wireless technology incapable of satisfying all the required performance criteria of traffic streams on its own. The layer aims to derive maximum benefit from the increase in available network resources that occurs when the MH's trajectory falls under the coverage area of several networks. The SBM Layer aims to achieve the following goals:

- Assignment of priorities to application streams based on their QoS requirements.
- Optimised management of traffic to prevent overloading of channel capacity.
- Mapping of streams on to different networks based on channel characteristics.
- Management of traffic flows to adjust them according to prevailing channel conditions.

[Fig. 2. The SBM Layer Graphical Model: applications APP1-APP5 pass Performance Parameter Blocks PPB1-PPB5 to the Stream Priority Mechanism (S1-S5); Transport Layer streams TS1-TS3 are mapped on to channels CH1-CH3, whose Channel Characteristics CC1-CC3 reside in the Network Pool.]

Fig. 2 gives an overview of the graphical design of the SBM Layer, which manages streams in a hierarchical manner. The layer receives periodic updates from the lower layers about available networks and their Channel Characteristics (CC), located in the Network Pool. An application wanting to initiate a transfer first sends its Performance Parameter Block (PPB) down to the SBM Layer. The layer then performs the Resource Availability Check (RAC), which is a comparison of the application quality of service parameters


in the PPB with the available networks' characteristics, to ensure the availability of channel resources for that particular transfer. If the application fails the RAC, it is added to a waiting queue and is activated only when the required resources become available.

An application that clears the RAC then undergoes the Multi-class Stream Priority Management (MSPM) mechanism (explained in Section VI), where it is assigned a priority based on the parameter values in its PPB. Application priorities are listed in Table I. The SBM Layer will decide the urgency of a particular stream accordingly.

The layer would make use of a per-flow control mechanism for the management of co-existing TCP and non-TCP traffic streams. Distribution of streams across different interfaces will prevent channel overloading and simplify their management. The decision to assign a stream to a channel would depend on the actual capacity available after existing streams consume their required network resources.
The SBM Layer will maintain a list of networks the application is compatible with, and in the event of the unavailability of the assigned channel (CH), the application will be assigned the next channel on the list.

TABLE I
TRAFFIC STREAM PRIORITIES

Priority | Type of Service
1 | Best-effort service, delay-tolerant with no heavy demands for bandwidth
2 | Best-effort service, delay-tolerant but some demand for bandwidth
3 | Loss-intolerant traffic, but less demand for bandwidth, in the form of small packets
4 | Loss-intolerant traffic that demands bandwidth
5 | Delay-intolerant traffic, but in the form of small packets
6 | Delay- and jitter-sensitive traffic, with demand for bandwidth
7 | Delay- and loss-insensitive traffic

VI. COLOURED PETRI NET REPRESENTATION

Coloured Petri Nets (CP-Nets) [19] are design tools that provide a framework for the design, specification, validation and verification of systems. As they can simultaneously represent both states and actions, we have employed CP-Nets to capture the behavioural properties of the various components of the SBM Layer and to verify their correctness. However, due to limited space, in this paper we restrict ourselves to an explanation of the Multi-class Stream Priority Management mechanism. The State Space Tool employed to analyse the functional correctness of the MSPM confirmed the absence of any infinite occurrences or deadlocks.

The execution of a CP-Net consists of the flow of tokens among places and transitions. Places, represented by ellipses, depict the different states in the procedure; transitions, shown by rectangles, capture the actions that take place when tokens (packets of different data values) reach certain places.
Every place in a CP-Net is associated with a data type (colour set) and contains tokens of the same data value (colour). Arcs connect transitions and places. A transition becomes enabled (ready for execution) when all places linked to it through incoming arcs have a token in them. During execution a transition removes the tokens on its incoming arcs and forwards tokens along its outgoing arcs to their respective places. In the presence of arc expressions, the transition can place the token on the outgoing arc whose expression conditions are satisfied by the token data.

Figure 3 shows the Multi-class Stream Priority Management mechanism for the SBM Layer. Place APPLICATION represents the state where the application has cleared the RAC procedure and is waiting for priority assignment. The inscription PPB (Performance Parameter Block) denotes the product colour set, which is a combination of several pre-declared colour sets. Each token on APPLICATION will have the colour set of type PPB with the following self-explanatory fields:

Colour PPB = product ApplicationID * MinBandwidth * PacketPerSec * InterPacketDelay * Protocol.

The Protocol field contains the name of the protocol to be used during transfer. How it will actually be implemented is decided by the SBM Layer: e.g. while Protocol may contain the value TCP, the SBM Layer will decide which flavour of TCP to adopt for a particular channel. A token on APPLICATION binds with the arc variable apprec of type PPB and enables the transition PRIORITY_DECISION. This transition is linked to seven outgoing arcs, one for each priority.
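The seven-way decision made at PRIORITY_DECISION can be sketched as a simple classifier over PPB-like fields. This is only an illustration: the boolean tolerance flags and the bandwidth threshold below are assumptions for the sketch, not fields of the paper's PPB colour set.

```python
from dataclasses import dataclass

@dataclass
class PPB:
    """Performance Parameter Block. The first five fields follow the paper's
    colour set; loss_tolerant/delay_tolerant are illustrative extras."""
    app_id: int
    min_bandwidth_kbps: float
    packets_per_sec: float
    inter_packet_delay_ms: float
    protocol: str            # e.g. "TCP" or "UDP"
    loss_tolerant: bool
    delay_tolerant: bool

def priority_decision(p: PPB, bw_threshold_kbps: float = 256.0) -> int:
    """Map a PPB token to one of the seven Table I classes
    (thresholds are assumed, not taken from the paper)."""
    heavy_bw = p.min_bandwidth_kbps >= bw_threshold_kbps
    if p.delay_tolerant and p.loss_tolerant:
        if p.min_bandwidth_kbps == 0:
            return 7                      # insensitive background traffic
        return 2 if heavy_bw else 1       # best-effort classes
    if not p.delay_tolerant and not p.loss_tolerant:
        return 6                          # delay- and jitter-sensitive, e.g. live video
    if not p.loss_tolerant:
        return 4 if heavy_bw else 3       # loss-sensitive, delay-tolerant
    return 5                              # delay-sensitive, loss-tolerant small packets
```

Each branch corresponds to one PRIORITY_N arc function; a real MSPM would refine the conditions per deployment policy.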
The arc expression PRIORITYN apprec denotes the function PRIORITYN (N = 1 to 7), which consists of predefined conditions that determine the movement of the token along one of the arcs, based on the values in the variable apprec. Depending upon which place the token is forwarded to, the corresponding transition addpriority becomes enabled, this time by the place


PRIORITYN and the place PRIOR, which attaches the priority value. The token that emerges from addpriority consists of the colour set NEWAPPREC, containing the following fields:

Colour NEWAPPREC = product ApplicationID * Priority * Protocol * PacketPerSec * InterPacketDelay.

Thus each application is assigned a priority based on its performance parameter values. By introducing the timed concept into the existing CP-Nets, it would be possible to avoid bandwidth starvation of low-priority streams by increasing their priority with the time they spend in the queue.

VII. FUTURE WORK

One of the main goals of the SBM Layer is to act as an interface between the applications and the underlying layers, maintaining a fair sharing of channel resources among them. However, this goal is associated with several challenges that must be overcome in order to achieve a satisfactory level of performance. Some of the main issues that we aim to address in the near future are:

1. Multiple handovers: When the MH is transmitting on several networks, the issue of multiple vertical handovers could arise, increasing the complexity of transfer. It is important to devise strategies to minimise the occurrence of such conditions, as they result in an increase in computational load, leading to increased power consumption.

2. Hybrid flow mechanism: A major drawback of adopting TCP in a wireless environment is its inability to maintain stable flows on erroneous wireless links. While the congestion window prevents over-flooding of channels with packets, frequent packet errors cause TCP to enter slow start more often, resulting in inefficient utilisation of bandwidth.
In order to derive maximum benefit from stable conditions in wireless channels, it is vital to transmit as many packets as possible while conditions are good, to compensate for the reduced throughput when losses occur. Unfortunately, by continuing in slow start even under improved network conditions, TCP does not allow the application to adapt to sudden changes in the network. To overcome this problem we propose a hybrid flow mechanism, a combination of rate-based and window-based flow schemes.

[Fig. 3. Multi-class Stream Priority Management Mechanism]

3. Management of multiple Care-of-Addresses (CoA): The process of simultaneous transmission on multiple interfaces may result in multiple CoA assignments per MH. Our study shows that there is a need for strategies to manage these addresses efficiently at both the correspondent host and the MH.

VIII. CONCLUSION

This paper explored the benefits of telemedicine over heterogeneous networks and discussed its traffic QoS requirements. The main focus was on the challenges associated with the co-existence of traffic streams during vertical handovers and the effect of network variations on transmission quality. While the issues mentioned in the paper remain unsolved, it will not be possible to exploit the benefits of heterogeneous networks to their full extent. The paper presented the concept of the SBM Layer for the management of co-existing traffic streams across multiple interfaces. It thus laid the foundation for the development of strategies for the efficient management of co-existing traffic streams across varying channel conditions.

IX. REFERENCES

[1] E. Wu and M. Chen, "JTCP: Jitter-Based TCP for Heterogeneous Wireless Networks," IEEE J. Select. Areas Commun., vol. 22, pp. 757-766, May 2004.
[2] F. Hu, N. K. Sharma and J.
Ziobro, "An Accurate Model for Analyzing Wireless TCP Performance with the Co-existence of Non-TCP Traffic," Computer Networks, vol. 42, pp. 419-439, 2003.


[3] D. Cottingham and P. Vidales, "Is Latency the Real Enemy in Next Generation Networks?" Initial draft.
[4] H. Hsieh, K. Kim and R. Sivakumar, "An End-to-End Approach for Transparent Mobility across Heterogeneous Wireless Networks," Mobile Networks and Applications, vol. 9, pp. 363-378, 2004.
[5] T. Guenkova-Luy and A. J. Kassler, "End-to-End Quality-of-Service Coordination for Mobile Multimedia Applications," IEEE J. Select. Areas Commun., vol. 22, pp. 889-903, June 2004.
[6] K. Shimizu, "Telemedicine by Mobile Communication," IEEE Engineering in Medicine and Biology, July/August 1999, pp. 32-44.
[7] V. Ananthraman and L. Han, "Hospital and Emergency Ambulance Link: Using IT to Enhance Emergency Pre-hospital Care," International Journal of Medical Informatics, vol. 61, 2001.
[8] C. M. Marohn and C. E. Hanly, "Twenty-first Century Surgery using Twenty-first Century Technology: Surgical Robotics," Current Surgery, vol. 61, no. 5, pp. 466-473, September/October 2004.
[9] P. Clemmensen et al., "Prehospital Diversion to Hospital With Acute PCI Set-up Using Wireless 12-Lead ECG Transmission," Journal of Electrocardiology, vol. 37, Supplement 2004.
[10] J. Kurose and K. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, 2nd ed., Addison Wesley, July 2002.
[11] M. Hassan and R. Jain, High Performance TCP/IP Networking: Concepts, Issues and Solutions, International ed., Pearson Prentice Hall, 2004.
[12] R. Koodli, "Fast Handovers for Mobile IPv6," Internet Draft, March 2003. [Online].
Available: http://www.ietf.org/internet-drafts/draft-ietf-mobileip-fast-mipv6-06.txt
[13] J. Schiller, Mobile Communications, 2nd ed., Pearson Education, 2003.
[14] H. Bing, C. He and L. Jiang, "Performance Analysis of Vertical Handover in a UMTS-WLAN Integrated Network," in Proc. IEEE PIMRC Symposium, 2003.
[15] A. Snoeren and H. Balakrishnan, "An End-to-End Approach to Host Mobility," in Proc. ACM MOBICOM, Boston, MA, 2000, pp. 155-166.
[16] P. Vidales et al., "Performance Issues with Vertical Handovers - Experiences from GPRS Cellular and WLAN Hot-spots Integration," in Proc. IEEE PerCom 2004.
[17] P. Karn, Ed., "Advice for Internet Subnetwork Developers," Performance Implications of Link Characteristics (PILC) Working Group, Internet-Draft draft-ietf-pilc-link-design-13.txt, February 2003.
[18] S. Dawkins, C. E. Williams and A. E. Yegin, "Framework and Requirements for TRIGTRAN," Internet-Draft draft-dawkins-trigtran-framework-00.txt, February 2003.
[19] K. Jensen, Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Use, Volume 1, 2nd ed., Springer, 1997.
[20] P. Vidales, R. Chakravorty and C. Policroniades, "PROTON: A Policy-based Mobile Client System for the Fourth Generation (4G) Mobile Environments," in Proc. IEEE POLICY 2004.


Visualization: Software and Hardware


Parametric 3D Visualization of Human Crystalline Lens Based on OpenGL

XU Xiuying (a), LIU Zhuo (b), WANG Boliang (a), CHEN Yu (a), YANG Hao (a)
(a) Xiamen University, Xiamen 361005, China
(b) National University of Defense Technology, Changsha 410073, China
e-mail: xxiuying_82@163.com

ABSTRACT: A parametric 3D geometry model of the human crystalline lens is established, and an interactive, realistic 3D lens display system using VC++ and OpenGL is developed in this paper. Firstly, a standard lens model is constructed based on published experimental data of the lens profile. The lens shape can then be adjusted to satisfy certain restricting conditions and geometric topology relations by inputting clinical characteristic parameters such as lens diameter, lens thickness, lens nucleus diameter, nucleus thickness and nucleus equatorial offset. Thus, the presented model can be used to characterize the lenses of different people. To make the model more realistic and to show the lens cortex as well as the nucleus, special effects such as lighting, transparency and blending are applied to the model using the popular graphics interface OpenGL; furthermore, rotation and zooming functions were added so that the user can view the lens from any angle and any distance. In conclusion, the parametric modelling method proposed in this paper realizes the 3D visualization of the lens of the Chinese virtual eye; it provides an available basis for further studies on lens accommodation and surgical simulation.

KEYWORDS: Human crystalline lens; Parametric modeling; OpenGL; Visualization; Interactive system

I.
INTRODUCTION

In the fields of medical surgery teaching and training, virtual reality (VR) technology is becoming more popular and irreplaceable, and has an encouraging future. By applying VR technology, medical staff can experience a realistic surgical environment and learn how to deal with cases. VR technology can greatly save time and money in training; furthermore, it can reduce surgical risk by improving operating skills in the virtual environment [1]. The application of VR technology in the medical field is called a surgical simulation system, which simulates the various phenomena that may occur in practical operations. It refers to the interaction and visualization of medical data as well as the simulation of the movement of objects and the sense organs [2].

To realize a surgical simulation system, the first step is the three-dimensional (3D) visualization of medical images. This paper proposes a parametric 3D geometry model of the human crystalline lens, which is the basis of further studies on lens accommodation and surgical simulation. There are two ways to realize the 3D modelling. One is to build a solid 3D model with 3D modelling software such as AutoCAD; the other is to develop the lens model directly with a graphics interface such as OpenGL. The first method is relatively more convenient when the model is fixed, but it is unfit for designing a parametric lens model: if different characteristic parameters (for example, a new lens diameter, lens thickness, nucleus diameter or nucleus thickness) are set, a dissimilar lens needs to be constructed; that is, the lens model is diverse. Therefore the approach of building a fixed model is unsuitable. The OpenGL (GL means Graphics Library) graphics system is a software interface to graphics hardware.
With OpenGL, you can control computer-graphics technology to produce realistic pictures, or objects that depart from reality in imaginative ways. It has become a standard development tool for powerful graphics processing and 3D display programming; for details please see references [3][4][5][6]. In this paper, we adopt the second way and construct the 3D lens model directly using OpenGL.

(This work was supported by the National Natural Science Foundation of China, Project No. 60371012.)

II. CONSTRUCTION AND DISPLAY OF THE LENS MODEL

A. Construction of the 3D Lens Model

In VR, the main methods of modelling objects include geometric modelling, image modelling, blend modelling and integrated modelling. Geometric modelling generates real or virtual object models by applying geometric operations, such as rotation, translation and zooming, and set operations on basic geometric elements such as points, lines, planes and solids. Since the lens has a regular shape, it is unnecessary to use a complex modelling method based on pictures, so geometric modelling is adopted in this study.

All current research on the lens shares a basic assumption: the lens is an axisymmetrical object. The current study adopts this assumption. The measurement and mathematical description of the lens profile are very important for modelling the lens, but they are beyond the scope of this paper. Available experimental data about the whole lens shape are rare; the widely used experimental data were measured by Fincham [7] and Brown [8]. This study adopted their data to plot the profile of the lens. Fig. 1 shows the profile of the lens with the main parameters annotated.
The positive part of the v axis corresponds to the front surface, and the negative part to the back surface. Two 5th-order polynomials are used to fit the surface curves, as follows:

    V = -0.00026524U^5 + 0.0044986U^4 - 0.016573U^3 - 0.065789U^2 + 2.42    (1)

    V = 0.0026648U^5 - 0.02667U^4 + 0.084679U^3 + 0.061728U^2 - 2.42        (2)
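As a minimal sketch (not part of the paper's VC++ system), the two profile curves (1) and (2) can be evaluated and sampled as follows; at U = 0 the two poles sit at V = +2.42 and V = -2.42, i.e. a default axial thickness of 4.84 mm.

```python
# Coefficients of the 5th-order profile polynomials (1) and (2),
# ordered U^5 .. U^0: front surface first, back surface second.
FRONT = (-0.00026524, 0.0044986, -0.016573, -0.065789, 0.0, 2.42)
BACK = (0.0026648, -0.02667, 0.084679, 0.061728, 0.0, -2.42)

def profile_v(u, coeffs):
    """Evaluate V(U) by Horner's rule."""
    v = 0.0
    for c in coeffs:
        v = v * u + c
    return v

def sample_profile(u_max, n):
    """Return n+1 (u, v) samples of the front and back curves for u in [0, u_max]."""
    us = [u_max * i / n for i in range(n + 1)]
    return ([(u, profile_v(u, FRONT)) for u in us],
            [(u, profile_v(u, BACK)) for u in us])
```

The sampled (u, v) pairs are the 2D discrete coordinate data that the paper later rotates into the 3D lens surface.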


An ellipse with a constant offset along the v axis is used to describe the nucleus profile; the major and minor axes of the ellipse are also constant for a given lens.

The 2D discrete coordinate data were calculated from (1) and (2). The 3D coordinates are obtained by rotating the 2D coordinates around the Y coordinate axis. The formula that calculates the 3D coordinates (x, y, z) from the 2D coordinates (u, v) is:

    x = u * cos(a)
    y = v                                   (3)
    z = u * sin(a)

where a in [0, 2*pi] is the rotation angle. A smaller interval of the rotation angle produces a finer 3D lens model as well as more data, so both data quantity and fineness should be taken into account when choosing the interval. The 3D coordinates calculated by (3) were used to construct the 3D model of the lens with the graphics functions of OpenGL. Every four adjacent points denoted by contiguous coordinates in the 3D coordinate set form a quadrangle, and a mass of small quadrangles makes up the whole lens surface. Using OpenGL functions to form the quadrangles takes the following form:

    /* set the lighting and material properties of the lens surface here */
    glBegin(GL_QUADS);
    for (i = 0; i + 3 < npoints; i += 4) {
        /* set the normal vector of each point here, then
           emit the four corners of one quadrangle */
        glVertex3fv(point[i]);
        glVertex3fv(point[i + 1]);
        glVertex3fv(point[i + 2]);
        glVertex3fv(point[i + 3]);
    }
    glEnd();

Fig. 2 shows the algorithm for the construction of the lens surface. Other axisymmetrical objects can be modelled in the same way.

[Fig. 2. Lens surface construction algorithm]

B. Making the 3D Model Realistic

Lighting is the key to generating a realistic 3D object. Without lighting, 2D and 3D objects can hardly be distinguished visually [9].
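The discretise-and-rotate step of formula (3) can be sketched in a few lines (a pure-math illustration; quad assembly and the OpenGL calls are omitted, and the function name is assumed):

```python
import math

def revolve(profile, n_theta):
    """Apply formula (3): rotate each 2D profile point (u, v) about the
    y axis at n_theta evenly spaced angles, producing rings of 3D points."""
    rings = []
    for k in range(n_theta):
        a = 2.0 * math.pi * k / n_theta
        ca, sa = math.cos(a), math.sin(a)
        rings.append([(u * ca, v, u * sa) for (u, v) in profile])
    return rings
```

Corresponding points on adjacent rings supply the "four adjacent points" that form each quadrangle of the surface.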
To make the 3D lens realistic, we should apply lighting to the scene and set proper material properties for the lens. The Bouknight lighting model is used in this paper. When lighting is used, the objects' normal vectors must also be set: the normal vector tells OpenGL the orientation of a surface, so that the front is illuminated while the rear is not. We can either set a normal vector for each vertex of a quadrangle, or set one normal vector per quadrangle; the latter method can cause discontinuities between the normals of adjacent quadrangles. To make the lens surface smooth, an algorithm was designed to calculate the normal vector for each vertex of a quadrangle. First, the 2D normal vector of each discrete point is calculated; it is then expanded to 3D coordinates. Suppose the 2D coordinates of point i are (u_i, v_i) and those of point i+1 are (u_{i+1}, v_{i+1}). Set du = u_{i+1} - u_i and dv = v_{i+1} - v_i; the angle between the vector (du, dv) and the u axis is a. Then the 2D normal vector of point i is n_i = (n_iu, n_iv) = (cos(a - pi/2), sin(a - pi/2)); see Fig. 3. When rotating, point i traces a line of latitude. Suppose the rotation angle is theta; then the 3D vertex normal vector is

    n_x = n_iu * cos(theta)
    n_y = n_iv                              (4)
    n_z = n_iu * sin(theta)

After obtaining the normal vectors, the OpenGL function glNormal3f(n_x, n_y, n_z) is used to set the vertex normal vector. Fig. 4 shows the 3D lens model without vertex normal vectors; it looks like a 2D picture. Fig. 5 shows the model with vertex normal vectors calculated by formula (4). Furthermore, to show the lens nucleus clearly, the OpenGL blending function glBlendFunc() is used to create a transparency effect; see Fig. 6.

[Fig. 1. Lens profile with main parameters annotated.]
[Fig. 3. Calculation of the normal vector.]
[Fig. 4. 3D lens without normal vectors.]
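The vertex-normal construction above can be sketched as two small functions (an illustrative sketch; function names are assumptions, and normal orientation follows the paper's rotation of the tangent by -pi/2):

```python
import math

def profile_normal_2d(p_i, p_next):
    """2D normal at profile point i, from the segment towards point i+1:
    n_i = (cos(a - pi/2), sin(a - pi/2)) with a the segment angle."""
    du, dv = p_next[0] - p_i[0], p_next[1] - p_i[1]
    a = math.atan2(dv, du)
    return (math.cos(a - math.pi / 2), math.sin(a - math.pi / 2))

def vertex_normal_3d(n2d, theta):
    """Lift a 2D profile normal to a 3D vertex normal per formula (4)."""
    niu, niv = n2d
    return (niu * math.cos(theta), niv, niu * math.sin(theta))
```

Because the normal is computed per vertex rather than per quadrangle, adjacent quadrangles share normals and the shaded surface appears smooth.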


[Fig. 5. Realistic 3D lens with normal vectors.]

    x_c = (ND_C / ND_0) * x_0
    y_c = (NT_C / NT_0) * y_0 + (NS_C - NS_0)        (6)
    z_c = (ND_C / ND_0) * z_0

where (ND_0, NT_0) and (ND_C, NT_C) are the default and input nucleus diameters and thicknesses respectively, NS_0 and NS_C are the default and input nucleus offsets, and (x_0, y_0, z_0) and (x_c, y_c, z_c) are the default and modified nucleus data coordinates respectively.

Table 1 shows two sets of parameters; Fig. 7 and Fig. 8 are the corresponding 3D visualizations of these two models.

Table 1. Two sets of parameters (unit: mm; data from Brown [8])

Set   | lens diameter | lens thickness | nucleus diameter | nucleus thickness | nucleus offset
Set 1 | 8.627         | 4.08           | 5.70             | 2.68              | 0.5119
Set 2 | 8.896         | 4.84           | 6.20             | 3.22              | 0.6292

[Fig. 6. Transparent lens with nucleus visible.]

III. PARAMETRIC MODELLING OF THE LENS

Different people have lenses of different geometric shape, so our model should be adjustable by individual geometric information. Unfortunately, this objective suffers from the lack of high-quality experimental data, owing to the constraints of current medical devices. However, some important data on the lens shape, which determine the refractive properties of the lens, are still obtainable with common medical devices. Therefore, by fusing in vivo and in vitro data, a practical model can be constructed to visualize the individual lens.

In this section a parametric model of the lens is constructed using five main lens parameters, namely lens diameter, lens thickness, nucleus diameter, nucleus thickness and nucleus equatorial offset.
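The two transfer formulas, (5) for the lens surface and (6) for the nucleus, reduce to a diagonal scaling plus, for the nucleus, an axial shift. A minimal sketch (argument names follow the symbols in the formulas; function names are assumptions):

```python
def scale_surface(p, D0, T0, Dc, Tc):
    """Formula (5): scale a lens-surface point by the diameter ratio
    (x and z components) and the thickness ratio (y component)."""
    x0, y0, z0 = p
    return (x0 * Dc / D0, y0 * Tc / T0, z0 * Dc / D0)

def scale_nucleus(p, ND0, NT0, NS0, NDc, NTc, NSc):
    """Formula (6): scale a nucleus point and shift it along the axis
    by the change in equatorial offset (NSc - NS0)."""
    x0, y0, z0 = p
    return (x0 * NDc / ND0, y0 * NTc / NT0 + (NSc - NS0), z0 * NDc / ND0)
```

For example, with Brown's Set 1 as the default and Set 2 as the input, the equatorial surface point at x = 8.627/2 maps to x = 8.896/2.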
The data in Section II are taken as the default lens shape. The modified model is generated from the five parameters above and the default state. The modification of the whole lens is divided into two parts: (a) modifying the shape of the lens surface; (b) modifying the shape of the nucleus. The input lens diameter is used to modify the x and z coordinate components of the lens surface data, and the lens thickness is used to modify the y component. The transfer formula for the lens surface data is:

    x_c = (D_C / D_0) * x_0
    y_c = (T_C / T_0) * y_0                          (5)
    z_c = (D_C / D_0) * z_0

where (D_0, T_0) are the default diameter and thickness of the lens from Section II, (D_C, T_C) are the input diameter and thickness, and (x_0, y_0, z_0) and (x_c, y_c, z_c) are the default and modified data coordinates respectively. Similarly, the transfer formula for the nucleus data is given by (6).

[Fig. 7. 3D model lens from Set 1.]
[Fig. 8. 3D model lens from Set 2.]

The current modification formula is a linear transform. This simplification is due to the fact that only five parameters are used to modify the default model. Integrating more lens shape parameters into the default model, to construct a more accurate model, is our future work.

IV. CONCLUSIONS

A parametric 3D geometry model of the human crystalline lens and an interactive, realistic 3D lens display system using VC++ and OpenGL are established in this paper. By inputting clinical characteristic parameters such as lens diameter, lens thickness, lens nucleus diameter, nucleus thickness and nucleus equatorial offset, a new model can be constructed. The system is interactive: the user can rotate, translate, or zoom in and out of the realistic lens. The model described in this paper presents a direct


3D virtual eye's lens structure. It is the basis of further lens surgical simulation. Our future research will focus on amending this model by adding characteristic parameters, such as the curvatures of the lens front and back surfaces, to improve the model's fidelity, and on setting the tissue's physical properties to establish a functional model and realize surgical simulation on the virtual eye.

REFERENCES

[1] Tan Ke, Guo GuangYou, Wang YongJun, Wu Peng, "The application of virtual reality technology on medical surgical simulation and training," Acad J PLA Postgrad Med Sch, 2002, 23(1): 77-79.
[2] Hong JingJing, "Geometry reconstruction & simplification and realistic rendering technique on surgery simulation system," Master's thesis, Zhejiang University, 2001.
[3] Mason Woo, Jackie Neider, OpenGL Programming Guide, China Electricity Press, 2000.
[4] Bai YanBin, Shi HuiKang, OpenGL 3D Graphics Library Programming Guide, Mechanism Industry Press, 1998.
[5] Gao JianLing, Shi HuaiJin, Di JiangTao, "A new open graphics library OpenGL and its applications," Engineering College of GuiZhou Transaction, 1995, 24(6): 81-84.
[6] Richard S. Wright, Jr., OpenGL SuperBible, 2nd ed., Waite Group Press, 2000.
[7] E. F. Fincham, "The mechanism of accommodation," British Journal of Ophthalmology, Suppl. 8, pp. 5-80, 1937.
[8] N. Brown, "The change in shape and internal form of the lens of the eye on accommodation," Experimental Eye Research, vol. 15, pp. 441-459, 1973.
[9] Wang YuHua, Yang KeJian, Wang Ling, "To draw realistic images with the lighting technology of OpenGL," Modern Computer, 2002, 9: 72-75.
[10] Shi Qiong, Shen ChunLin, Tan Hao, "Realization of 3D modeling methods based on OpenGL,"
Computer Engineering and Applications, 2004, 18: 122-124.
[11] Donald Hearn, M. Pauline Baker, Computer Graphics, Electronic Industries Press, 1998.


3D Navigation of CT Virtual Endoscopy in Non-invasive Diagnostic Detection

Xie Xiaomian (1,2), Tao Duchun (2), Chen Siping (2), Bi Yalei (2)
(1) Biomedical Engineering Department, Tsinghua University, Beijing 100084
(2) Anke Enterprise Post-doctoral Workstation, Shenzhen 518067

Abstract- CT virtual endoscopy (CTVE) is becoming very useful in clinical diagnosis, but the control and manipulation of CTVE always present a difficult problem when the viewpoint is flying in the targeted duct structure of a tissue. Most existing methods use Multi-Planar Reformatted (MPR) images, which are manipulated separately to pilot the viewpoint in flight. Other methods calculate a fixed flight path first and then fly along it. These methods are not convenient for users in practice, and some abnormal changes in the tissues may not be seen easily because of the inconvenience of viewing. In this paper, a completely free 3D navigation system was developed and used to control the flight direction of the viewpoint without the need to calculate a fixed path before the flight begins. This 3D navigation system gives users complete freedom in flight: the viewpoint can fly forward, fly backward and fly around freely in any direction. The 3D navigation system was integrated into the CTVE software used in non-invasive diagnostic detection, and the application results showed high performance and effectiveness.

Keywords- CT virtual endoscopy; 3D navigation; Non-invasive detection

I. INTRODUCTION

According to clinical reports, many abnormal changes in the duct structures of tissues can be leading causes of disease [1].
The duct structures of tissues include tracheobronchial structures, bile ducts, colorectal ducts, and so on. Clinical research has shown that the vast majority of diseases arise from abnormal changes in these anatomic ducts, such as colorectal polyps, calculus in the bile ducts, and stenosis in the tracheobronchial structures. Early detection and removal of pre-malignant abnormal changes is therefore the most important way to prevent malignant changes from occurring and to decrease both incidence and mortality [2,3]. CT virtual endoscopy (CTVE) is the most suitable and effective non-invasive detection approach for meeting these clinical diagnostic needs.

As one of the major medical imaging modalities, CT has long played a very important role in clinical diagnostic practice, and with the rapid progress of CT imaging techniques in recent years it has become even more powerful in detecting abnormal changes in human body tissues. Multi-slice spiral CT scanners provide much better spatio-temporal resolution (especially axial resolution with thin-slice scanning), and the clinical images carry far more significant and valuable diagnostic information, which markedly increases clinical diagnostic sensitivity and specificity. With such advances in CT imaging techniques, CTVE is becoming very useful in clinical diagnosis, and its diagnostic reliability has greatly increased in practice.

The control and manipulation of CTVE have always presented a difficult problem when the viewpoint is flying in the targeted duct structures of tissues [5, 6, 7]. Most existing methods use MPR images, manipulated separately, to pilot the viewpoint in flight.
Other methods calculate a fixed flight path before the flight starts. Neither approach is convenient in practice, and some abnormal changes in the tissues may be missed because of the restricted viewing. In this paper, a fully free 3D navigation system was developed to control the flying direction of the viewpoint without calculating a fixed flight path. The system gives the user complete freedom of movement: the viewpoint can fly forward, fly backward, and turn in any direction. This 3D navigation system was integrated into the CTVE software and used in non-invasive diagnostic detection practice, showing high performance and effectiveness.


Ⅱ. MATERIALS AND METHODS

A. Image data acquisition

In CT image acquisition, a beam collimation of 3 mm was used and the table speed was set at 6 mm/s in the spiral scanning process of the 8-slice CT scanner. Such a scanning protocol allows reliable detection of very small abnormal changes in the tissues. 50% overlap reconstruction was used at the image reconstruction stage to improve the volume scanning effect. The scanning parameters were: 2 mm slice thickness, 120 kV, 250 mA, standard reconstruction mode, and 32 cm FOV; scanning started at the pharynx of the patient and ended at the midriff.

B. Data field pre-processing

Data field preprocessing involves classification and segmentation, realized with a pre-specified transfer function. This transfer function must be adjustable so that the tissue structures of interest, which are to be observed and visualized in CTVE, can be defined. The transfer function used in our classification approach is defined in terms of:

x_i : voxel location
r : region thickness
f_v : region threshold

C. 3D Reconstruction

There are basically two kinds of 3D reconstruction methods: surface rendering, used in SSD functions, and volume rendering, used in the VRT function. For our CTVE method, the authors developed a novel self-adaptive volume rendering method to construct an optimized CTVE 3D scene. This approach yields much more vivid 3D images and at the same time greatly facilitates the 3D navigation functions that are central to the interactive manipulation of the CTVE system.
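The exact transfer function of Section II-B is not reproduced above. As an illustration only, a simple threshold-based classifier built from the three quantities x_i, r, and f_v might look like the following sketch; the window half-width r/2 and the comparison rule are assumptions, not the authors' actual function:

```python
import numpy as np

def classify_voxels(volume, f_v, r):
    """Hypothetical threshold-based transfer function: a voxel belongs to
    the tissue region of interest when its value lies within a window of
    thickness r centred on the region threshold f_v."""
    lo, hi = f_v - r / 2.0, f_v + r / 2.0
    return (volume >= lo) & (volume <= hi)  # boolean mask over voxel locations x_i

# Toy CT-number field: an air-filled duct (-1000 HU) inside soft tissue (~40 HU)
volume = np.full((8, 8, 8), 40.0)
volume[2:6, 2:6, 2:6] = -1000.0
mask = classify_voxels(volume, f_v=-1000.0, r=200.0)
print(mask.sum())  # → 64 voxels classified as duct lumen
```

In a real CTVE pipeline the mask would drive the opacity assignment of the volume renderer rather than being used directly.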
This self-adaptive volume rendering method is illustrated in Fig. 1.

Fig. 1: Self-adaptive volume rendering lines cast through the voxel field.

D. 3D navigation methods

The principal purpose of navigation is to pilot the viewpoint through the 3D scene as freely and conveniently as possible, so that users can go wherever they want inside the duct structures of tissues, see whatever they want to see, and thus identify the features of interest as quickly and conveniently as possible. In this 3D navigation approach, the authors regard the navigation environment as five principal elements:

ⅰ) The current 3D location of the viewpoint.
ⅱ) The current 3D direction of the viewpoint.
ⅲ) The expected direction of turning.
ⅳ) The expected extent of turning.
ⅴ) The expected extent of flying forward or backward.

The navigation system is illustrated in Fig. 2. When the CTVE viewpoint starts to fly at the entrance point, the navigation controller supplies the expected direction and extent of flying forward or backward, and the expected extent of turning. This interactive manipulation information feeds the navigation system and adjusts the 3D rendering system in real time.

Ⅲ. RESULTS

After acquisition, the images were sent to the CTVE system. The flight entrance point was selected at the position of interest on a specified image, and the viewpoint then flew into the tracheobronchial tissue structures. With the navigation controller, the viewpoint flew forward, backward, and around freely in real time, as shown in Fig. 3.
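The five navigation elements above amount to a small camera state that is updated from user input each frame. A minimal sketch of such an update follows; the class layout, the Rodrigues rotation, and all names are our own illustration, not the paper's implementation:

```python
import numpy as np

def rotate(v, axis, angle):
    """Rodrigues rotation of vector v about a unit axis by angle (radians)."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

class Viewpoint:
    """Free-flight camera holding elements (i)-(v) of the navigation environment."""
    def __init__(self, location, direction):
        self.location = np.asarray(location, dtype=float)   # (i) current 3D location
        d = np.asarray(direction, dtype=float)
        self.direction = d / np.linalg.norm(d)              # (ii) current 3D direction

    def turn(self, axis, angle):
        # (iii) expected direction and (iv) expected extent of turning
        self.direction = rotate(self.direction, np.asarray(axis, float), angle)

    def fly(self, distance):
        # (v) expected extent of flying forward (positive) or backward (negative)
        self.location = self.location + distance * self.direction

vp = Viewpoint([0.0, 0.0, 0.0], [1.0, 0.0, 0.0])
vp.turn([0.0, 0.0, 1.0], np.pi / 2)  # turn 90 degrees about the z axis
vp.fly(5.0)
print(np.round(vp.location, 3))  # → [0. 5. 0.]
```

Because no fixed path is precomputed, every frame simply re-applies `turn` and `fly` with the controller's latest input.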


Fig. 2: 3D navigation structure. The navigation controller system takes the previous location and direction together with the expected changes of location and direction, updates the current location and direction, and feeds the 3D reconstruction and rendering system.

Ⅳ. DISCUSSION

3D navigation plays a very important role in CTVE [7] and has several significant advantages over 2D navigation methods, which use MPR images for navigation control. Constructing a fixed flight path is also a 3D navigation method and is widely used in some applications [4], but the fixed path impairs the freedom and convenience of CTVE flight and observation. When the viewpoint flies along a fixed path, abnormal changes that lie to the side of the viewpoint or within tissue rugae may go unseen, and significant diagnostic information may be lost. This paper has presented a novel 3D navigation method that offers much more freedom and convenience for CTVE and can be a more favorable solution to CTVE navigation problems. With this method, it is not necessary to construct a fixed flight path before the flight starts, and the viewpoint can fly in any direction, freely and conveniently, forward, backward, and around in real time.

Fig. 3: 3D navigation CTVE results from a single flight; the panels show views while flying forward, forward-right, right, right-back, back, and backward.

To achieve this goal, the 3D reconstruction and rendering algorithm must be compatible with the 3D navigation mechanism.
In this CTVE system, the authors developed a novel self-adaptive volume rendering method that is fully compatible with the 3D navigation system.

Research in imaging applications indicates that CTVE has a bright future in clinical diagnostic practice, because it is a non-invasive approach with the following advantages over conventional invasive approaches:
1) It offers much more safety and convenience in clinical practice.
2) It can be performed repeatedly on the computer.
3) It can reach far more of the tissue structure of interest, whereas a conventional invasive approach can reach only a limited extent.

With the improvement in the spatial and temporal


resolution of multi-slice CT scanners, CTVE will undoubtedly play a more significant role in non-invasive diagnostic detection in clinical applications.

REFERENCES
[1] Cancer Facts and Figures 1998. American Cancer Society, Inc., Atlanta, USA.
[2] SEER Cancer Statistics Review, 1973-1992. US Department of Health and Human Services, Public Health Service, NIH, 1995.
[3] Muto T, Bussey HJR, Morson BC. The Evolution of Cancer of the Colon and Rectum. Cancer 1975; 36: 2251-2270.
[4] Rui C. H. Chiou, Arie E. Kaufman, Zhengrong Liang, Lichan Hong, and Miranda Achniotou. "Interactive Path Planning for Virtual Endoscopy." IEEE Nuclear Science Symposium 1998, Vol. 3, pp. 2069-2072, 1998.
[5] S. J. Phee, W. S. Ng, I. M. Chen, F. Seow-Choen, and B. L. Davies. "Automation of Colonoscopy Part II: Visual-Control Aspects." IEEE Engineering in Medicine and Biology Magazine, Vol. 17, Issue 3, pp. 81-88, May-June 1998.
[6] D. Bartz, D. Mayer, J. Fischer, S. Ley, A. del Río, S. Thust, C. Heussel, H. Kauczor, and W. Straßer. Hybrid Segmentation and Exploration of the Human Lungs. In Proc. of IEEE Visualization, 2003.
[7] D. Bartz. Virtual Endoscopy in Research and Clinical Practice. In Eurographics State-of-the-Art Report S4, 2003.


Automatic Correction of MinIP in CT Diagnostic Applications

Tao Duchun 2, Xie Xiaomian 1,2, Bi Yalei 2, Chen Siping 2
(1 Department of Biomedical Engineering, Tsinghua University, Beijing 100084; 2 Anke Enterprise Post-doctor Workstation, Shenzhen 518067)

Abstract- Minimum Intensity Projection (MinIP) is a useful imaging function in CT clinical applications and is routinely used to visualize abnormal changes in low-density tissues. MinIP imaging is realized with volume rendering methods. This paper explores the 3D reconstruction methods behind MinIP, using a set of CT images acquired in clinical routine, and then focuses on improving and optimizing the MinIP imaging functions: an automatic correction method is used to improve and optimize the MinIP 3D reconstruction results. The experimental results show that the presented correction method is effective in improving the quality of MinIP imaging.

Keywords- Minimum Intensity Projection; Clinical application; Automatic correction

I. INTRODUCTION

CT imaging has undergone rapid technical advances in recent years and has achieved great improvements in spatial and temporal resolution. With the introduction of multi-slice CT scanners, thin-slice scanning techniques are widely used in CT imaging practice. In clinical practice, 3D visualization plays a very significant role in post-processing and diagnostic applications [1], including surface shaded display (SSD), volume rendering (VR), maximum intensity projection (MIP), minimum intensity projection (MinIP), virtual endoscopy (VE), and so on. These post-processing functions are very useful tools for doctors in their clinical diagnostic practice [2, 4].

Minimum Intensity Projection (MinIP) is a useful imaging function in CT clinical applications and is routinely used to visualize abnormal changes in low-density tissues. In clinical practice, accidental traumas and some other abnormal changes can lead to the accumulation or retention of air in the soft tissues (especially the lung tissues) [3]; under such circumstances, MinIP can be the most suitable method to visualize this accumulation or retention of air.

MinIP imaging functions can be realized using volume rendering methods. This paper explores the 3D reconstruction methods behind MinIP, using a set of CT images acquired in clinical routine, and then focuses on improving and optimizing the MinIP imaging functions: an automatic correction method is used to improve and optimize the MinIP 3D reconstruction results. The experimental results show that the correction method is effective in improving the quality of MinIP imaging.

Ⅱ. MATERIALS AND METHODS

A. Image Data Acquisition

In CT scanning, a beam collimation of 3 mm was used, and the table speed was set at 6 mm/s in the spiral scanning process of the 8-slice CT scanner. Such a scanning protocol allows reliable detection of very small abnormal changes in the tissues. 50% overlap reconstruction was used at the image reconstruction stage to improve the volume scanning effect.
The scanning parameters were: 10 mm slice thickness, 120 kV, 250 mA, standard reconstruction mode, and 38 cm FOV; scanning started at the pharynx of the patient and ended at the midriff. The obtained images were sent to our post-processing software system for MinIP exploration.

B. MinIP 3D Reconstruction

For the acquired serial images, the minimum intensity projection method was used to obtain the corresponding 3D results [5,6]. The acquired images were transformed into 3D volume data fields, and from any


point of view, with arbitrary rotation and zooming, searching lines pass through the whole volume data field; each searching line yields a unique point according to the minimum intensity algorithm. The MinIP algorithm is illustrated in Fig. 1.

Fig. 1: MinIP searching algorithm.

C. Valid Voxel Searching Problem

In clinical practice, the images acquired from the imaging modalities do not always meet the ideal requirements of the MinIP algorithm, and practical problems arise for the MinIP 3D reconstruction method. One of the major problems is the valid voxel searching problem, shown in Fig. 2. Fig. 2(1) shows the original image acquired from the CT imaging modality; under the conventional MinIP reconstruction method, a searching line passes through the whole volume data field to find the corresponding minimum intensity voxel in the voxel field. As shown in Fig. 2(2), the conventional MinIP reconstruction method returns the voxel P1, but P1 is a false voxel, not a valid voxel for MinIP reconstruction; the correct voxel is P2. The conventional MinIP method therefore needs to be corrected to obtain correct, improved MinIP reconstruction results.

In CT imaging, this problem for MinIP reconstruction is caused by the FOV parameters of the CT image acquisition during scanning.

D. Automatic Correction

To solve the valid voxel searching problem, we first establish a transfer function for the 3D transformed volume data field [7]. This transfer function is realized through the experience function Ф(d, x), where d is the CT number of the corresponding voxel and x is its 3D spatial location in the volume data field. For a specified voxel V(d, x), the following rule judges whether it is valid: if Ф(d, x) = 1, it is a valid voxel for the searching process; if not, it is invalid. Ф(d, x) is an experience function: it must be established through extensive experience and testing, and it must be reliable enough to meet practical application needs.

Under the corrected MinIP reconstruction method, the searching line passes through the whole volume field; whenever it meets an invalid voxel, that voxel is ignored and the search continues until a valid minimum intensity voxel is found.

Ⅲ. RESULTS

Fig. 2: The valid voxel searching problem in MinIP.

Fig. 3 shows the MinIP reconstruction results before and after automatic correction. In the rendering process, the MinIP reconstruction results can be rotated and zoomed arbitrarily. Panels 1 and 3 are the MinIP reconstruction results using the conventional MinIP


methods; the results show that when the volume is rotated in certain directions, the valid voxel searching problem occurs and the resulting 3D image does not appear correctly. Panels 2 and 4 are the MinIP reconstruction results using our automatic correction method: proper 3D MinIP reconstruction results are obtained no matter which direction the volume is rotated in, and at the same time the quality of the MinIP reconstruction results is improved.

Fig. 3: MinIP reconstruction results before (panels 1, 3) and after (panels 2, 4) automatic correction.

Ⅳ. DISCUSSION

As an engineering method for clinical applications, minimum intensity projection is very helpful in clinical practice for visualizing the abnormal accumulation or retention of air in the soft tissues (especially the lung tissues) caused by accidental traumas and other abnormal pathological changes. This paper first explored the MinIP 3D reconstruction methods and their applications, and then analyzed the valid voxel searching problem which exists in the conventional MinIP reconstruction process.
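The corrected search of Section II-D can be sketched as follows. This is a minimal illustration, not the authors' implementation: the searching lines run along one volume axis instead of arbitrary view rays, and the experience function Ф(d, x) is mocked as a simple CT-number cutoff.

```python
import numpy as np

def phi(volume):
    """Stand-in for the experience function Ф(d, x): marks each voxel as
    valid (True) or invalid (False). As a mock rule, voxels below -1000 HU
    (e.g. padding outside the reconstructed FOV) are treated as invalid."""
    return volume >= -1000.0

def corrected_minip(volume, axis=0):
    """Minimum intensity projection that ignores invalid voxels: each
    searching line keeps the minimum over valid voxels only."""
    masked = np.where(phi(volume), volume, np.inf)  # invalid voxels are skipped
    return masked.min(axis=axis)

# Toy volume: soft tissue (40 HU) with genuine air voxels (-950 HU) and
# out-of-FOV padding (-2000 HU) that would fool a naive MinIP.
vol = np.full((4, 5, 5), 40.0)
vol[1:3, 2, 2] = -950.0      # genuine low-density (air) voxels
vol[0, :, 0] = -2000.0       # invalid padding along one edge

naive = vol.min(axis=0)
corrected = corrected_minip(vol, axis=0)
print(naive[2, 0], corrected[2, 0])  # the padding wins naively, is skipped after correction
print(corrected[2, 2])               # the true air voxel survives the correction
```

The same masking idea applies unchanged when the searching lines follow the actual viewing direction of the rotated volume.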
For the conventional MinIP, when the 3D reconstruction results are rotated arbitrarily, proper results sometimes cannot be obtained because of the valid voxel searching problem in the volume data field.

In this paper, an automatic correction method was used to solve the valid voxel searching problem. The practical and experimental results showed that, with the automatic correction method, proper MinIP reconstruction results can be obtained no matter how they are rotated [8], and the quality of the MinIP reconstruction results is improved at the same time. As for the transfer function Ф(d, x), it is an experience


function: it was established through extensive experience and testing to make it reliable enough to meet the practical needs of medical applications.

In future work, with the advances of CT imaging techniques, the MinIP reconstruction functions should become more reliable and better qualified, so as to increase diagnostic sensitivity and specificity as well as clinical reliability in medical applications [2].

REFERENCES
[1] Jae Jeong Choi and Yeong Gil Shin. "Efficient Multidimensional Volume Rendering." Medical Imaging 99, San Diego, USA, February 1999.
[2] S. J. Phee, W. S. Ng, I. M. Chen, F. Seow-Choen, and B. L. Davies. "Automation of Colonoscopy Part II: Visual-Control Aspects." IEEE Engineering in Medicine and Biology Magazine, Vol. 17, Issue 3, pp. 81-88, May-June 1998.
[3] D. Bartz, D. Mayer, J. Fischer, S. Ley, A. del Río, S. Thust, C. Heussel, H. Kauczor, and W. Straßer. Hybrid Segmentation and Exploration of the Human Lungs. In Proc. of IEEE Visualization, 2003.
[4] J. Zhou, M. Hinz, and K. D. Tönnies. "Hybrid Focal Region-based Volume Rendering of Medical Data." Bildverarbeitung für die Medizin, Leipzig, March 2002.
[5] Andreas König and Eduard Gröller. Mastering transfer function specification by using VolumePro technology. In Spring Conference on Computer Graphics (SCCG 2001), volume 17, pages 279-286, April 2001.
[6] Gordon Kindlmann, Ross Whitaker, Tolga Tasdizen, and Torsten Möller. Curvature-based transfer functions for direct volume rendering: Methods and applications. In Proceedings of IEEE Visualization '03. IEEE Computer Society, 2003.
[7] Gordon Kindlmann, Ross Whitaker, Tolga Tasdizen, and Torsten Möller. Curvature-based transfer functions for direct volume rendering: Methods and applications. In Proceedings of IEEE Visualization '03. IEEE Computer Society, 2003.
[8] Hanspeter Pfister, Jan Hardenbergh, Jim Knittel, Hugh Lauer, and Larry Seiler. The VolumePro real-time ray-casting system. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pages 251-260. ACM Press/Addison-Wesley Publishing Co., 1999.


Evaluation of Quantitative Capability of the Iterative and Analytic Reconstructions for a Cone-Beam Micro Computed Tomography System

Ho-Shiang Chueh, Wen-Kai Tasi, Hsiao-Mei Fu, *Jyh-Cheng Chen
Institute of Radiological Sciences, National Yang-Ming University
*155, Li-Nong St., Sec. 2, Peitou, Taipei, Taiwan, R.O.C. *886-2-2826-7282, *jcchen@ym.edu.tw

Abstract- Quantitative computed tomography (QCT) provides the most accurate bone mineral density (BMD) measurement and is greatly useful in several applications of micro computed tomography (microCT). The objective of this study is to evaluate the quantitative capability of a home-made microCT. Because the X-ray source of the microCT is poly-energetic and the linear attenuation coefficient (μ) varies with X-ray photon energy, a specific correction presented in this study is used to resolve the poly-energetic effect. A 3D distribution of μ is then reconstructed by the maximum likelihood (ML) and T-FDK algorithms, both developed for cone-beam microCT. The images reconstructed by the two algorithms with and without correction are shown in the paper. A region of interest (ROI) analysis was used to evaluate the different results of the analytic and iterative algorithms, and the pros and cons of the two algorithms are discussed. Quantitative capability is better in the image reconstructed by the iterative method together with beam hardening correction.

Keywords- microCT; quantitative computed tomography; iterative reconstruction

Ⅰ. INTRODUCTION

Among osteological diseases, osteoporosis is a common disease in climacteric women [1] and osteoarthritis is a common age-related joint disease [2].
An important measure in osteopathic studies is Bone Mineral Density (BMD), the amount of calcium in regions of the bones. Many modalities can perform a BMD test, such as quantitative ultrasound [3], Dual Energy X-ray Absorptiometry (DEXA) [4;5], Single Energy X-ray Absorptiometry (SXA), Peripheral Dual Energy X-ray Absorptiometry (PDXA), Radiographic Absorptiometry (RA), Dual Photon Absorptiometry (DPA), Single Photon Absorptiometry (SPA), and Quantitative Computed Tomography (QCT) [4;6-10]. Among these modalities, QCT is the most accurate BMD test. However, QCT is not widely available because it delivers a higher radiation dose to the patient than ultrasound and DEXA. Even so, QCT has high potential for animal studies, in which dose is of less concern.

Among small-sample imaging modalities, micro computed tomography, often called microCT, is used to obtain high-resolution tomographic images. The microCT used in this study is a cone-beam computed tomography (CBCT) system, which provides volume information from multiple two-dimensional projections. Compared to traditional computed tomography, CBCT has a major advantage in three-dimensional imaging because it can produce volume information directly without stacking multiple two-dimensional cross sections. CBCT is therefore able to rapidly acquire high-resolution three-dimensional images and is superior for small animal imaging.

The geometry employed in this study for the microCT is a circular orbit. Unfortunately, the conditions for exact reconstruction are hard to satisfy in this arrangement; however, several approximate reconstruction methods have been proposed. The most popular is the "practical cone-beam algorithm" developed by Feldkamp L.A. et al. in 1984 [11]. Cone-beam reconstruction algorithms can be roughly categorized into two classes, analytic and iterative methods; the Feldkamp algorithm is an analytic reconstruction.
It is a successful and efficient reconstruction algorithm. Nevertheless, the Feldkamp algorithm is limited to circular scanning and spherical specimen reconstruction and suffers from longitudinal image blurring. For these reasons, modified Feldkamp approaches are used in practice, such as the general cone-beam reconstruction algorithm [12] and the T-FDK algorithm [13]. Compared to the analytic methods, iterative reconstruction, such as the maximum likelihood (ML) [14;15] and expectation maximization (EM) [16] algorithms, has the potential to produce higher-quality images, but demands more computing resources. In this study, the different results obtained by the T-FDK [17] and ML algorithms are shown and evaluated.

Ⅱ. THEORY

A. The T-FDK algorithm

The set of cone-beam projection data $R(\alpha, c, r)$ measured along a circular source-detector trajectory is parameterized by the source angle on the circular trajectory $\alpha \in [0, 2\pi)$, the column coordinate on the detector $c \in [-c_{max}, +c_{max}]$, and the row coordinate on the detector $r \in [-r_{max}, +r_{max}]$. An illustration of the acquisition geometry of the original cone-beam projection data is shown in Fig 2.

In the T-FDK algorithm, the original cone-beam projection data are rebinned into a set of virtual rectangular-plane data $p(\phi, d_{xy}, d_z)$, which are isotropic and equidistant projection data. $\phi$ is the extended source angle of the parallel-beam projection data on the circular trajectory and is calculated as

$\phi = \alpha + \beta$   (1)

where $\beta$ is the fan angle, obtained from $\beta = \tan^{-1}(c/L)$, with $L$ the source-to-detector distance.
$d_{xy}$ and $d_z$ are the column and row coordinates of the virtual rectangular plane, respectively, and are calculated as

$d_{xy} = S \sin(\beta)$ and $d_z = \dfrac{S\, r \cos(\beta)}{L}$   (2)

where $S$ is the distance from the extended source to the virtual rectangular plane. A weighted filtered backprojection is then used to obtain the resulting image, and can be expressed as

$\mu(\vec{x}) = \dfrac{1}{2\pi} \int_0^{2\pi} \left[ p(\phi, d_{xy}, d_z) \cdot w(d_{xy}, d_z) * h(d_{xy}) \right] d\phi$   (3)


The functions $w(d_{xy}, d_z)$ and $h(d_{xy})$ are a two-dimensional weighting function and a one-dimensional band-limited ramp filter [18], respectively. More detail on the T-FDK algorithm is given in Grass 2000.

B. The ML algorithm

Three-dimensional cone-beam reconstruction is computationally expensive because of the huge amount of data, and the problem is even worse for iterative algorithms. To overcome it, before performing the ML reconstruction, the cone-beam projection data are rebinned into parallel-beam projections using the same approach as in the T-FDK method. The rows of the rebinned data are independent of each other; consequently, each row is reconstructed separately and the computing resource requirement is reduced. Although a fully three-dimensional iterative algorithm is achievable in theory, it would likely be inefficient [19].

In the ML algorithm, the set of X-ray linear attenuation coefficients (LACs) is denoted by the vector $\mu = (\mu_i \mid i \in N)^T$. It is an unknown constant vector that must be estimated. The set of photon counts acquired by the detector is denoted by the vector $X = (X_j \mid j \in N)^T$, which corresponds to the rebinned data $p(\phi, d_{xy}, d_z)$. The expectation of $X_j$ is denoted by $\bar{X}_j$ and calculated as

$\bar{X}_j = B_j \exp\left( -\sum_i g_{ij}\mu_i \right)$   (4)

where $i$ and $j$ are the pixel and beam subscripts, respectively. The notation $B_j$ is the injected photon number of beam $j$, and $g_{ij}$ is the system response factor between pixel $i$ and beam $j$.
Assuming the detected photon counts follow a Poisson distribution, the probability of $X_j$ is

$f(X_j, \mu) = \dfrac{\bar{X}_j^{X_j}}{X_j!} \exp(-\bar{X}_j)$   (5)

Owing to statistical independence, the log-likelihood function $L(\mu)$ is defined as

$L(\mu) = \ln f(X, \mu) = \sum_{j \in N} \left[ -X_j \sum_{i \in N} g_{ij}\mu_i - B_j \exp\left( -\sum_{i \in N} g_{ij}\mu_i \right) \right] + C$   (6)

where $C$ is a constant that does not depend on the parameter $\mu$. For a gradient-based iterative algorithm maximizing this ML function, the general form of the iteration is

$\hat{\mu}_{i'}^{(n+1)} = \hat{\mu}_{i'}^{(n)} + D_{i'}^{(n)} \dfrac{\partial L(\hat{\mu}^{(n)})}{\partial \mu_{i'}}$   (7)

The partial derivative $\partial L(\hat{\mu}^{(n)}) / \partial \mu_{i'}$ is the $i'$th element of the gradient vector:

$\dfrac{\partial L(\hat{\mu}^{(n)})}{\partial \mu_{i'}} = \sum_{j \in N} g_{i'j} \left[ -X_j + B_j \exp\left( -\sum_{i \in N} g_{ij}\hat{\mu}_i^{(n)} \right) \right]$   (8)

The term $D_{i'}^{(n)}$ is a weighting factor governing the change of $\hat{\mu}_{i'}^{(n+1)}$:

$D_{i'}^{(n)} = \dfrac{\hat{\mu}_{i'}^{(n)}}{\sum_{j \in N} g_{i'j} X_j}$   (9)

To accelerate this iterative algorithm, the original formula is reformatted into the ordered-subsets ML algorithm (ML-OS), defined as

$\hat{\mu}_{i'}^{(n)}(s+1) = \hat{\mu}_{i'}^{(n)}(s) + \dfrac{\hat{\mu}_{i'}^{(n)}(s)}{\sum_{j \in J_s} g_{i'j} X_j} \sum_{j \in J_s} g_{i'j} \left[ B_j \exp\left( -\sum_{i \in N} g_{ij}\hat{\mu}_i^{(n)}(s) \right) - X_j \right]$   (10)

The ML-OS algorithm updates all estimated LACs $\hat{\mu}_{i'}^{(n)}(s+1)$, where $(n)$ is the iteration number and $(s)$ is the subset number; $J_s$ contains the projections in subset $s$. The acceleration factor achieved by using ordered subsets is approximately of the order of the number of subsets [20;21].

Fig 1. Schematic view of the microCT system. The object is rotated by a rotating stage and illuminated over 360 degrees. The transmitted X-rays are detected by a CMOS detector.

Fig 2. Schematic view of the reconstruction volume of the cone-beam scanning system.
\( R(\alpha, c, r) \) is the cone-beam projection data and \( \mu(\vec{x}) \) is the three-dimensional LAC distribution; r and c are the row and column coordinates of the detector.
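To make the update concrete, Eqs. (4)-(10) can be sketched for one rebinned row in a few lines of NumPy. This is our illustrative implementation, not the authors' code; variable names are ours, and a non-negativity clamp (not part of Eq. 10) is added for safety:

```python
import numpy as np

def ml_os_update(mu, G, X, B, subsets):
    """One full pass of the ordered-subsets ML update, Eq. (10), for a
    single rebinned row.  mu: current LAC estimate, shape (n_pixels,);
    G: system response matrix g_ij, shape (n_beams, n_pixels);
    X: detected counts; B: injected counts; subsets: list of index
    arrays J_s partitioning the beams."""
    for J in subsets:
        Gs = G[J]                                  # rows g_{i'j}, j in J_s
        xbar = B[J] * np.exp(-Gs @ mu)             # expected counts, Eq. (4)
        grad = Gs.T @ (xbar - X[J])                # subset gradient, Eq. (8)
        D = mu / np.maximum(Gs.T @ X[J], 1e-12)    # step weights, Eq. (9)
        mu = np.maximum(mu + D * grad, 0.0)        # keep LACs non-negative
    return mu
```

Because the step weight is proportional to the current estimate, the update is multiplicative in form, so a strictly positive initial estimate is required.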


Fig 3. The X-ray beam is poly-energetic: the mean energy increases with the thickness of aluminum (axes: mean energy, roughly 13-21 keV, versus aluminum thickness, 0-0.35 mm).

Fig 4. The HA phantom consists of five known-density materials. The density in the background region is 1.131 g/cm³, the same as the base material. The densities in regions 1 to 4 are 1.196, 1.261, 1.394, and 1.659 g/cm³, respectively.

III. MATERIALS AND METHODS

In our microCT system, the object is placed on a rotating stage and illuminated by a source-detector pair through 360 degrees to acquire 400 cone-beam projections. A schematic view is given in Fig 1. The detector size is 5×5 cm² and the matrix size is 1024×1024 pixels.

The X-ray source is poly-energetic. The mean energy of the beam arriving at each detector location increases with the total attenuation along its path, an effect known as beam hardening. Moreover, the LAC depends on the photon energy, so even within the same material the LAC in a thick part is underestimated relative to a thin part. Unfortunately, most reconstruction assumes monochromatic X-rays, and the non-uniform mean energy therefore causes the LAC to be misestimated. To resolve this problem, we first estimate the energy of each beam. A set of aluminum foils was employed to establish the relation between mean energy and foil thickness (shown in Fig 3). From this relation, the aluminum equivalent of each beam can be calculated. Then, the mean energy can be estimated.
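As an illustration of this calibration step, the mapping from a beam's measured attenuation to an aluminum-equivalent thickness and then to a mean energy can be done by table interpolation. All numbers below are made-up placeholders, not the measured calibration of Fig 3:

```python
import numpy as np

# Hypothetical calibration: for each aluminum foil thickness (mm), the
# measured log-attenuation -ln(I/I0) and the estimated mean energy (keV).
foil_mm     = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35])
foil_logatt = np.array([0.00, 0.21, 0.40, 0.57, 0.73, 0.88, 1.02, 1.15])
mean_kev    = np.array([13.5, 15.0, 16.2, 17.2, 18.0, 18.7, 19.4, 20.0])

def mean_energy_of_beam(log_attenuation):
    """Map a beam's measured -ln(I/I0) to an aluminum-equivalent
    thickness, then to a mean energy, by piecewise-linear interpolation."""
    t_al = np.interp(log_attenuation, foil_logatt, foil_mm)
    return np.interp(t_al, foil_mm, mean_kev)
```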
Theoretically, if the material is known, the ratio of attenuation coefficients between thick and thin parts can be looked up in a table of mass attenuation coefficients for photon interaction [22] and compensated. In short, this method is a kind of non-linear normalization that transfers the poly-energetic data into monochromatic data [23].

Tomographic images are reconstructed by the T-FDK or ML-OS algorithm from the data after beam-hardening correction. An HA phantom with five regions of known density was utilized to establish an intensity-to-density curve, from which the density of an object can be estimated. Another, uniform phantom was utilized to test the accuracy of the estimated density; its true density is 1.38 g/cm³. Besides, the HA phantom was also employed to evaluate the contrast-to-noise ratio (CNR), defined as [24]:

\[ \mathrm{CNR} = \frac{\mu_{\mathrm{ROI}} - \mu_{\mathrm{BG}}}{\sqrt{(\sigma_{\mathrm{ROI}}^2 + \sigma_{\mathrm{BG}}^2)/2}} \qquad (11) \]

where \( \mu_{\mathrm{ROI}} \) and \( \mu_{\mathrm{BG}} \) are the means of the ROI and the background, and \( \sigma_{\mathrm{ROI}} \) and \( \sigma_{\mathrm{BG}} \) are their standard deviations. A high CNR usually indicates better lesion detectability; it is therefore another figure of merit (FOM) for evaluating image quality.

Fig. 5. (a) T-FDK with correction, (b) T-FDK without correction, (c) ML-OS with correction, (d) ML-OS without correction.

IV. RESULTS

The HA phantom data with and without correction were reconstructed by the T-FDK and ML-OS algorithms, respectively. The four different images reconstructed from the same data are shown in Fig 5, all at the same window level and width. The contrast of the images with beam-hardening correction is obviously higher than that of the images without correction. The intensity-to-density curves of the four methods are shown in Fig 6.
The intensity of the images without correction is lower and does not remain linear in the high-density region. The CNRs in the regions of interest (ROIs) are shown in Fig 7. The CNRs of the images with correction are higher than those without correction. Furthermore, the CNRs of the images reconstructed by the ML-OS algorithm are also higher than those of the T-FDK algorithm. The comparison of accuracy is shown in Table 1: the corrected data have higher accuracy than the uncorrected data, with no obvious difference between the T-FDK and ML-OS algorithms.
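The CNR figures above follow directly from Eq. (11); a minimal sketch of the computation, assuming the pooled-variance reading of the denominator (array and mask handling are our own):

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio as in Eq. (11), with the denominator read
    as the pooled standard deviation sqrt((var_ROI + var_BG) / 2)."""
    roi, bg = image[roi_mask], image[bg_mask]
    return (roi.mean() - bg.mean()) / np.sqrt((roi.var() + bg.var()) / 2.0)
```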


Fig 6. The intensity-to-density curves established from the HA phantom results (intensity 0-1.4 versus density 1.1-1.8 g/cm³, for T-FDK w/c, ML w/c, T-FDK wo/c, and ML wo/c). The notation 'w/c' means data with correction and 'wo/c' data without correction. The linearity is better in the images with correction.

V. DISCUSSION AND CONCLUSIONS

The intensity in the high-density region was restored after beam-hardening correction, because the high-density projections were normalized to the same mean energy level as the low-density projections. For this reason, the contrast of the images with correction is higher than that of the images without. However, the CNRs of the images reconstructed by the T-FDK algorithm improved less than those of the ML algorithm, because some high-density noise is also enhanced during the correction, and the iterative algorithm is more tolerant of random noise.

In the accuracy evaluation, the estimated density is a mean over an ROI and is therefore not affected by random noise, so there is no obvious difference between the two algorithms. Because the images without correction lack linearity, their estimated densities deviate more from the true value.

Although the ML-OS algorithm produces better image quality, it needs five times the reconstruction time of the T-FDK algorithm. However, much of that time is spent loading the huge \( g_{ij} \) from the hard disk; the actual computation time of the ML-OS algorithm is 130% longer than that of T-FDK.

According to these results, we conclude that iterative reconstruction has the advantages of tolerance to random noise and higher CNR.
However, there is only a small difference in terms of the accuracy of density estimation; a precise beam-hardening correction method is the critical factor affecting that accuracy. Consequently, iterative reconstruction combined with the correction provides the best quantitative capability for microCT images.

ACKNOWLEDGMENT

The research is supported by grant 93N6202 from the National Science Council, Taiwan.

Fig 7. The comparison of CNRs in regions 1-4. The CNRs are better in the images with beam-hardening correction, especially for the ML-OS algorithm.

TABLE I
The comparison of accuracy. The accuracy is better in the images with correction. The true value is 1.38 g/cm³.

Method | Estimated Density (g/cm³) | Error
T-FDK w/c | 1.43 | 3.6%
ML w/c | 1.42 | 2.9%
T-FDK wo/c | 1.26 | 8.9%
ML wo/c | 1.24 | 9.1%

REFERENCES

[1] M. Munoz-Torres, G. Alonso, and M. P. Raya, "Calcitonin therapy in osteoporosis," Treat. Endocrinol., vol. 3, no. 2, pp. 117-132, 2004.
[2] J. D. Pomonis, J. M. Boulet, S. L. Gottshall, S. Phillips, R. Sellers, T. Bunton, and K. Walker, "Development and pharmacological characterization of a rat model of osteoarthritis pain," Pain, vol. 114, no. 3, pp. 339-346, Apr. 2005.
[3] J. Huopio, H. Kroger, R. Honkanen, J. Jurvelin, S. Saarikoski, and E. Alhava, "Calcaneal ultrasound predicts early postmenopausal fractures as well as axial BMD. A prospective study of 422 women," Osteoporos. Int., vol. 15, no. 3, pp. 190-195, Mar. 2004.
[4] M. Mylona, M. Leotsinides, T. Alexandrides, N. Zoumbos, and P. A. Dimopoulos, "Comparison of DXA, QCT and trabecular structure in beta-thalassaemia," Eur. J. Haematol., vol. 74, no. 5, pp. 430-437, May 2005.
[5] M. Qin, S. Lin, Z. Song, J. Tian, F. Chen, H. Yan, and Q. Ge, "Comparison of bone mass in forearm, lumbar vertebra and hip by single and/or dual energy X-ray absorptiometry," Chin Med.
Sci. J., vol. 14, no. 2, pp. 117-120, June 1999.
[6] C. Galesanu, M. R. Galesanu, and L. Moisii, "[Normal bone mineral content levels determined by quantitative


computer tomography (QCT) in the population of Moldavia]," Rev. Med. Chir. Soc. Med. Nat. Iasi, vol. 108, no. 2, pp. 314-318, Apr. 2004.
[7] L. Lenchik, F. C. Hsu, T. C. Register, K. K. Lohman, B. I. Freedman, C. D. Langefeld, D. W. Bowden, and J. J. Carr, "Heritability of spinal trabecular volumetric bone mineral density measured by QCT in the Diabetes Heart Study," Calcif. Tissue Int., vol. 75, no. 4, pp. 305-312, Oct. 2004.
[8] K. C. Lian, T. F. Lang, J. H. Keyak, G. W. Modin, Q. Rehman, L. Do, and N. E. Lane, "Differences in hip quantitative computed tomography (QCT) measurements of bone mineral density and bone strength between glucocorticoid-treated and glucocorticoid-naive postmenopausal women," Osteoporos. Int., Sept. 2004.
[9] S. Masala, B. Annibale, R. Fiori, G. Capurso, A. Marinetti, and G. Simonetti, "DXA vs QCT for subclinical celiac disease patients," Acta Diabetol., vol. 40, suppl. 1, pp. S174-S176, Oct. 2003.
[10] S. Masala, U. Tarantino, A. Marinetti, N. Aiello, R. Fiori, R. P. Sorge, and G. Simonetti, "DXA vs QCT: in vitro and in vivo studies," Acta Diabetol., vol. 40, suppl. 1, pp. S86-S88, Oct. 2003.
[11] L. A. Feldkamp, L. C. Davis, and J. W. Kress, "Practical cone-beam algorithm," J. Opt. Soc. Am. A, vol. 1, pp. 612-619, 1984.
[12] G. Wang, T. H. Lin, P. C. Cheng, and D. M. Shinozaki, "A general cone-beam reconstruction algorithm," IEEE Trans. Med. Imaging, vol. 12, no. 3, pp. 486-496, Sept. 1993.
[13] M. Grass, T. Kohler, and R. Proksa, "3D cone-beam CT reconstruction for circular trajectories," Phys. Med. Biol., vol. 45, no. 2, pp. 329-347, Feb. 2000.
[14] J. A. Browne, J. M.
Boone, and T. J. Holmes, "Maximum-likelihood x-ray computed-tomography finite-beamwidth considerations," Appl. Opt., vol. 34, no. 23, pp. 5199-5209, Aug. 1995.
[15] J. A. Browne and T. J. Holmes, "Developments with maximum likelihood X-ray computed tomography," IEEE Trans. Med. Imaging, vol. 11, no. 1, pp. 40-52, Mar. 1992.
[16] K. Lange, "Convergence of EM image reconstruction algorithms with Gibbs smoothing," IEEE Trans. Med. Imaging, vol. 9, no. 4, pp. 439-446, Dec. 1990.
[17] H. S. Chueh, I. T. Hsiao, Y. R. Shieh, and J. C. Chen, "Three dimensional cone-beam reconstruction of micro computed tomography in the breast specimen," J. Med. Biol., vol. 24, no. 2, pp. 79-84, Mar. 2004.
[18] G. L. Zeng, "Nonuniform noise propagation by using the ramp filter in fan-beam computed tomography," IEEE Trans. Med. Imaging, vol. 23, no. 6, pp. 690-695, June 2004.
[19] J. S. Kole and F. J. Beekman, "Parallel statistical image reconstruction for cone-beam x-ray CT on a shared memory computation platform," Phys. Med. Biol., vol. 50, no. 6, pp. 1265-1272, Mar. 2005.
[20] F. J. Beekman and C. Kamphuis, "Ordered subset reconstruction for x-ray CT," Phys. Med. Biol., vol. 46, no. 7, pp. 1835-1844, July 2001.
[21] J. S. Kole and F. J. Beekman, "Evaluation of the ordered subset convex algorithm for cone-beam CT," Phys. Med. Biol., vol. 50, no. 4, pp. 613-623, Feb. 2005.
[22] F. H. Attix, Introduction to Radiological Physics and Radiation Dosimetry, Wiley-Interscience, Appendix D.3, pp. 556-561, 1986.
[23] E. Van de Casteele, D. Van Dyck, J. Sijbers, and E. Raman, "A model-based correction method for beam hardening artefacts in X-ray microtomography," J. of X-Ray Science and Tech., vol. 12, pp. 43-57, 2004.
[24] X. Song, B. W. Pogue, S. Jiang, M. M. Doyley, H. Dehghani, T. D. Tosteson, and K. D.
Paulsen, "Automated region detection based on the contrast-to-noise ratio in near-infrared tomography," Appl. Opt., vol. 43, no. 5, pp. 1053-1062, Feb. 2004.


An Approach to True-Color Volume Data Preparation for Virtual Eye

LIN Peng, CHEN Yu, WANG Bo-liang
Department of Computer Science, Xiamen University, Xiamen, 361005, China

Abstract—Volume rendering using true-color volume data, as against the traditional way of using artificial-color volume data, gives people a more realistic view. In this paper, a new approach to true-color volume data preparation is established as part of the Chinese Virtual Eye research. It covers the whole process of volume data generation from a true-color slice sequence, which consists of three parts: slice-sequence registration and smoothing, slice color calibration, and global color quantization of the slices. Finally, a process pipeline system implementing the proposed approach is constructed.

I. INTRODUCTION

Volume rendering has become the most popular technique for Virtual Human visualization today. It is the creation of images directly from three-dimensional arrays of data, without converting them to an intermediate, polygon-based format. However, the source of the volume data, sequences of images taken from cadaver or organ-sample slices, must still be carefully prepared to meet the requirements of volume rendering.

The traditional way of volume rendering, using artificial-color volume data, is easy to process but far from realistic. In contrast, volume rendering using true-color volume data gives a more realistic view, as it keeps as much of the original detail as possible.

In this paper, a new approach to true-color volume data preparation is established as part of the Chinese Virtual Eye research.
This preparation covers the whole process of volume data generation from a true-color slice sequence, which consists of three parts:
1) true-color slice-sequence registration
2) slice color calibration
3) global color quantization of the slices
This paper focuses on processing slices in true-color space and provides a solution for each step of the process. Finally, a process pipeline system implementing the proposed approach is constructed. The system can display the resulting volume data via 3D visualization and compare the differences between the process steps, which provides the foundation for further studies in virtual eye research.

II. SLICE IMAGE SEQUENCE REGISTRATION

The source of the volume data is a sequence of images taken from body or organ slices by camera or scanner. The first step of the whole process is registration of this sequence. Registration is the task of aligning, or developing correspondences between, data. In volume data preparation, slice image registration is the process of determining the spatial transform that maps points on one slice image to homologous points on an object in a standard slice chosen by the operator.

Matching and transformation are the core of registration. The common method of registration is to calculate the transformation matrix (for a polynomial transformation) by point-based matching.

In our research for the Chinese Virtual Eye project, there are two sequences of true-color slice images:
• head slice images from Chinese Virtual Human #1: 400 JPEGs, 3024×2016×24 bit (Fig. 1)
• scanned slice images of an eye sample from Xiamen University: 646 JPEGs, 1280×1552×24 bit (Fig. 2)

Fig. 1. Chinese Virtual Human #1  Fig. 2. Eye Sample from XMU

The slice images of the first sequence, each of which is marked by four white anchor points, can easily be handled by extrinsic matching.
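Such extrinsic matching reduces to estimating a similarity transform from the four marked anchors. A hypothetical NumPy sketch of this center-transform registration (anchor detection is assumed already done; names are ours, not the authors' code):

```python
import numpy as np

def center_transform(std_pts, cur_pts):
    """Similarity (center-transform) registration from four anchor points.
    Points are (x, y) rows; the result is a 3x3 homogeneous matrix acting
    on row vectors [x, y, 1], mapping the current slice onto the standard
    slice."""
    c0, cc = std_pts.mean(axis=0), cur_pts.mean(axis=0)   # geometric centers
    d0, dc = std_pts - c0, cur_pts - cc
    # Average scale factor and average rotation in polar coordinates.
    s = np.mean(np.linalg.norm(d0, axis=1) / np.linalg.norm(dc, axis=1))
    r = np.mean(np.arctan2(d0[:, 1], d0[:, 0]) - np.arctan2(dc[:, 1], dc[:, 0]))
    Tr1 = np.array([[1.0, 0, 0], [0, 1, 0], [-cc[0], -cc[1], 1]])  # to origin
    S = np.diag([s, s, 1.0])                                       # scale
    R = np.array([[np.cos(r), np.sin(r), 0.0],
                  [-np.sin(r), np.cos(r), 0.0],
                  [0.0, 0.0, 1.0]])                                # rotation
    Tr2 = np.array([[1.0, 0, 0], [0, 1, 0], [c0[0], c0[1], 1]])    # to center
    return Tr1 @ S @ R @ Tr2
```

Note that the naive averaging of angle differences assumes no wrap-around at ±π; a robust implementation would average the angles on the unit circle.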
The whole task is to find the four anchor coordinates and calculate the center-transform matrix. The algorithm is as follows:

1) Calculate the geometrical center \( (X_0, Y_0) \) of the four points on the standard slice and the distances between the center and the four points, \( R_i^0 \ (i = 0, 1, 2, 3) \); this gives the last translation matrix
\[ T_{r2} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ X_0 & Y_0 & 1 \end{pmatrix} \]

2) Calculate the geometrical center \( (X_c, Y_c) \) on each new slice and the four distances \( R_i^c \); this gives the first translation matrix
\[ T_{r1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -X_c & -Y_c & 1 \end{pmatrix} \]

3) Calculate the average scale factor \( s_c = \frac{1}{4}\sum_{i=0}^{3} R_i^0 / R_i^c \) and the scale matrix
\[ S = \begin{pmatrix} s_c & 0 & 0 \\ 0 & s_c & 0 \\ 0 & 0 & 1 \end{pmatrix} \]

4) Calculate the average rotation factor in polar coordinates, \( r_c = \frac{1}{4}\sum_{i=0}^{3} (\theta_i^0 - \theta_i^c) \), and the rotation matrix
\[ R = \begin{pmatrix} \cos r_c & \sin r_c & 0 \\ -\sin r_c & \cos r_c & 0 \\ 0 & 0 & 1 \end{pmatrix} \]

5) Use the whole transform matrix \( T = T_{r1} \times S \times R \times T_{r2} \) to transform the current slice image.

However, the second sequence, which has no anchor points, must be treated by intrinsic matching based on medical knowledge. Considering the structure of the human eye, the anterior chamber is approximately symmetrical, and the midpoint of the two corners of the anterior chamber is nearly fixed between slices. We therefore mark a cross between the two corners of the anterior chamber for registration matching (Fig. 3); the algorithm above is then applied with the scale transform omitted entirely.

Fig. 3. Cross Matching of Eye Sample

As there is no good digital evaluation measure for sequence registration in our case, human perception is the best available choice. A side elevation is created after volume data generation, from which registration errors are easily spotted by a human observer. In future work, volume filters will be introduced to improve registration.

III. COLOR CALIBRATION

One of the most important problems in volume rendering is the color excursion of individual slices caused by the limits of the lighting conditions and the digital camera. Some slices of a sequence can be more illuminated than others, and different kinds of lighting can intermix (for example, light from a window and artificial light).

Modern digital cameras have their own color-balance processing, which handles each picture without global consideration of the whole sequence. To make things worse, it is impractical to calibrate colors manually slice by slice, given the great number of slices and the variation in lighting conditions, sample-to-camera distance, and slice colors, even from one hour to the next (for example, when daylight is present). Correct calibration of the sequence is therefore an important task before the next step. With the right calibration, the color lookup table can be computed with small error, and the side elevation of the whole volume data is smoother.

There are several approaches to color calibration. A computer vision module for Sony robots uses machine learning methods to locate field landmarks; that approach recognizes regions delimited by edges and classifies them according to features such as average hue, saturation, and intensity. Some authors have tried decision trees in order to segment colors independently of the light. Another approach to automatic calibration is to compute global histograms of images under different lighting conditions; lookup tables for color segmentation are then initialized so that the new histogram matches the one found under controlled conditions. Much effort has also been spent in optics to come up with reliable calibration methods for digital cameras. The main problem in color calibration is the great variance of colors under different lighting conditions.

Fig. 4. Slice 01  Fig. 5. Slice 02
Fig. 6. Histogram of Slice 01  Fig. 7. Histogram of Slice 02


Our solution is based on the special features of the currently available data set. We convert the slice images from RGB to HSL (hue/saturation/lightness) space and use histogram analysis to generate statistics. According to the results of this analysis, we apply the following methods:

1) The hue histograms of the slices have similar average curves but obviously different mean squared errors. These errors are reduced to a small level by applying a simple 3×3 blur filter to the hue-channel images.

2) In the saturation histograms, the curves appear to shift left or right from slice to slice. This is calibrated by enhancing or lowering the images' saturation. We apply a linear transform to the saturation-channel images,
\[ S_{(i,j)} = \min\{255, \max\{0, S_{(i,j)} \times A + B\}\} \]
choosing A and B to minimize the uncentered correlation distance r between the current slice M and the standard slice N:
\[ r = 1 - \frac{1}{256}\sum_{i=0}^{255}\left(\frac{M_i}{\sigma_M^{(0)}}\right)\left(\frac{N_i}{\sigma_N^{(0)}}\right), \qquad \sigma_M^{(0)} = \sqrt{\frac{1}{256}\sum_{i=0}^{255} M_i^2}, \quad \sigma_N^{(0)} = \sqrt{\frac{1}{256}\sum_{i=0}^{255} N_i^2} \]
We keep the algorithm for finding A and B as simple as possible, since only an approximate result is needed: it simply enumerates A from 0.5 to 2.0 in steps of 0.01, and B from -10 to +10 in steps of 1. This simple search works well most of the time.

3) The lightness histograms show the same problem as the saturation histograms, so the same method is applied.

IV.
GLOBAL COLOR QUANTIZATION

Global color quantization is the final part of the whole preparation. Because full-color slice images have three channels (24 bits per pixel), volume data generated directly from them is several times larger than data from one-channel (e.g., gray-scale) images. However, modern graphics processors support only small volume data, because of hardware limits such as the capacity of volume textures and shared memory. To make things worse, rendering speed on graphics processors falls rapidly as the size of the volume data increases.

As a result, we must make a compromise between realistic appearance and volume data size: color quantization. The objective of color quantization is to render a full-color slice image with a restricted set of colors (256 or 64) without a significant (almost imperceptible to the viewer) loss of color impression, approximating the original as closely as possible.

Quantization of color images is necessary when the display on which an image is presented supports fewer colors than the original image. When a full-color image is displayed on a device with a color lookup table, the number of colors in the image has to be reduced. During this color reduction, the colors that appear most often in the image should be identified and selected, so that the substituted colors produce no or only small errors. Quantization thus approximates each true intensity value with a displayable one.

However, slice-sequence quantization differs from single-image quantization: the object of image quantization is an individual image, whereas the object of slice-sequence quantization is the whole sequence.
The color lookup table generated by quantization must fit the whole sequence, meaning the colors of every slice must map to the single shared lookup table. The whole quantization can be viewed as a stepwise process:

1) Use histogram analysis to generate statistics on the colors used in every slice image of the sequence to be quantized.
2) Based on this analysis, fill the color lookup table with values.
3) Map the true-color values to the entries of the color table, each to its nearest entry.
4) (Optional) Apply error diffusion techniques.

In this research, five different quantization methods were investigated, combined with Floyd-Steinberg error diffusion (Table I). To assess quantization techniques, the following quality criteria are usually considered: PSNR (peak signal-to-noise ratio), human perception, speed, and memory requirements. As the whole task is data preparation, speed and memory requirements are not critical here, so we focus on the first two factors, which estimate the quality of the quantization.

We use the empirical color distance measure
\[ \Delta C = 0.306\,(r_1 - r_2)^2 + 0.601\,(g_1 - g_2)^2 + 0.117\,(b_1 - b_2)^2 \]
to calculate the mean squared error (MSE) of the quantized slice sequence and the PSNR measure:
\[ \mathrm{MSE} = \frac{\sum \Delta C(i, j)}{N^2 \times \mathit{SlicesCount}}, \qquad \mathrm{PSNR} = 20 \log_{10}\left(\frac{255}{\sqrt{\mathrm{MSE}}}\right) \]

Finally, we chose the octree quantization algorithm, which gives the best results despite being very memory- and time-consuming. Furthermore, the quality of the quantization is greatly improved by applying Floyd-Steinberg error diffusion (Fig. 10).
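The ΔC-based MSE and PSNR above can be computed over the whole stack in a few lines. A sketch under our own array-layout assumption of (slices, N, N, 3) with 8-bit channels:

```python
import numpy as np

def color_psnr(quantized, original):
    """PSNR of a quantized slice stack using the weighted color distance
    dC = 0.306*dr^2 + 0.601*dg^2 + 0.117*db^2 on 8-bit channels.
    Arrays have shape (slices, N, N, 3)."""
    w = np.array([0.306, 0.601, 0.117])
    diff = quantized.astype(float) - original.astype(float)
    mse = ((diff ** 2) * w).sum(axis=-1).mean()  # mean dC over all pixels
    return 20.0 * np.log10(255.0 / np.sqrt(mse))
```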


TABLE I
COMPARISON OF FIVE QUANTIZATION METHODS

Algorithm | PSNR | Human Perception
Static Color Table | 34.36 | discontinuities
Uniform Subdivision | 44.81 | poor, contrast increased
Popularity Method | 38.92 | poor, loss of image detail
Median Cut Algorithm | 28.62 | good, slight discontinuities
Octree Algorithm | 22.47 | best results

Fig. 8. Original  Fig. 9. Octree  Fig. 10. Octree + Floyd-Steinberg

V. SYSTEM ARCHITECTURE AND IMPLEMENTATION

A process pipeline system implementing the approach proposed in this paper was constructed in our research. The system consists of several subsystems; a brief activity diagram showing their relationships is given in Fig. 11. The input of the system is the sequence of slice images (JPEG files). The data flow through the pipeline passes the three kernel processes, along with additional steps that crop the slice images to the area of interest and scale the huge images down to a hardware-acceptable size. Finally, the true-color volume data is generated at the output.

Fig. 11. System Architecture

VI. CONCLUSION

In our research we explored the whole process of true-color volume data preparation. We analyzed the difficulties in the three steps of the work, compared several methods in each area, and provided our solution. Along the same lines, we showed not only the theoretical methods for solving the problems but also that the system we implemented overcomes many of the grand challenges of today's true-color volume data preparation. One potentially minimal shortcoming of the system is that it still requires a little human interaction during the process. In fact, the main contribution of our work is that our process constructs the volume data almost automatically and saves as much time as possible. We verified that the system can successfully construct true-color volume data within four or five hours of the source slice images becoming available, which provides the foundation for further studies in virtual eye research.

ACKNOWLEDGMENT

This project is supported by the National Natural Science Foundation of China (Grant No. 60371012) and the Special Technology Project of Fujian (Grant No. 2002Y021).

REFERENCES

[1] R. P. Woods, S. T. Grafton, C. J. Holmes, S. R. Cherry, and J. C. Mazziotta, "Automated image registration: I. General methods and intrasubject, intramodality validation," Journal of Computer Assisted Tomography, 1998.
[2] R. P. Woods, S. T. Grafton, J. D. G. Watson, N. L. Sicotte, and J. C. Mazziotta, "Automated image registration: II. Intersubject validation of linear and nonlinear models," Journal of Computer Assisted Tomography, 1998.
[3] T. Yoo, Insight into Images - Principles and Practice for Segmentation, Registration, and Image Analysis, ISBN 1-56881-217-5, 2004.
[4] M. J. L. de Hoon, S. Imoto, J. Nolan, and S. Miyano, "Open Source Clustering Software," Bioinformatics, 2004.
[5] T. Kulessa and M. Hoch, "Efficient Color Segmentation under Varying Illumination Conditions," Proceedings of the 10th IEEE Image and Multidimensional Digital Signal Processing Workshop, July 12-16, 1998.
[6] T. Zrimec and A. Wyatt, "Learning to Recognize Objects - Toward Automatic Calibration of Color Vision for Sony Robots," Workshop of the Nineteenth International Conference on Machine Learning (ICML-2002).
[7] A. Gloye, A. Egorova, M. Simon, F. Wiesel, and R. Rojas, "Plug & Play: Fast Automatic Geometry and Color Calibration for Cameras Tracking Mobile Robots," Freie Universität Berlin, Takustraße 9, 14195 Berlin, Germany, 2003.
[8] J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes, Computer Graphics - Principles and Practice, Second Edition, Addison-Wesley, ISBN 0-201-12110-7, 1990.
[9] M. Gervautz and W. Purgathofer, "A Simple Method for Color Quantization: Octree Quantization," in New Trends in Computer Graphics, N. Magnenat-Thalmann and D. Thalmann, eds., Springer-Verlag, Berlin, 1988.
[10] P. Heckbert, "Color Image Quantization for Frame Buffer Display," Computer Graphics (Proc. SIGGRAPH), vol. 16, no. 3, July 1982.
[11] Z. Xiang and G. Joy, "Color Image Quantization by Agglomerative Clustering," IEEE Computer Graphics, May 1994.


Imaging Biological Micro-tissues and Organs Using the Phase-contrast X-ray Imaging Technique

Hongxia Yin 1, Bo Liu 2, Xin Gao 1, Hang Shu 3, Xiulai Gao 2, Peiping Zhu 3, Shuqian Luo 1*
1 College of Biomedical Engineering, Capital University of Medical Sciences, Beijing, China
2 Department of Anatomy, Capital University of Medical Sciences, Beijing, China
3 Institute of High Energy Physics, the Chinese Academy of Sciences, Beijing, China

Keywords—Phase-contrast x-ray imaging, diffraction enhanced imaging, cochlea

Abstract—Phase-contrast X-ray imaging is a phase-based, high-contrast imaging technique by which the achievable spatial resolution can be increased 1000 times over conventional X-ray radiography. To date, several approaches have been developed to detect the phase variation introduced by the object, including diffraction enhanced imaging (DEI), which employs an analyzer crystal as a diffractive optic. In our study, DEI is used to image minute biomedical tissues both in planar mode and in CT mode.

Cochleae of guinea pigs were used as the subjects because of their typical cochlear structure. Before imaging, the subjects were decalcified in order to improve the imaging quality. Both X-ray film and an X-ray fast digital imager (FDI) camera system were used to record images, and filtered back-projection (FBP) was used for CT reconstruction.

The acquired DEI images and the reconstructed tomography images all display the spiral structure and inner details of the cochlea clearly.
Especially the cellular-level structures, such as the static cilia of the hair cells and the limbus of the Hensen cells, can be seen in the planar DEI images; these cannot be seen in conventional radiography images. The results show that DEI is an effective phase-contrast imaging technique for observing the cochlear microstructures of guinea pigs and other biological micro-tissues or organs, even including those at the cellular level. Thus we anticipate that, with improvements in digital detectors and CT reconstruction algorithms, a cellular-level DEI-CT technique will become possible in the future.

The asterisk (∗) indicates the corresponding author.

Ⅰ. INTRODUCTION

Conventional x-ray imaging techniques rely on x-ray attenuation as the sole source of contrast and ignore other, potentially more useful effects of the x-rays, such as refraction and scattering [1]. A possible solution to this problem is the phase-contrast x-ray imaging technique, which is much more sensitive to light elements and to changes in the refractive index of a light-element medium. This is because the x-ray phase-shift cross-section is almost a thousand times larger than the x-ray absorption cross-section for light elements such as hydrogen, carbon, nitrogen, and oxygen [1 and 4]. The technique provides significant improvements in image contrast for weakly absorbing soft tissues [1-3].
To date, several techniques have been studied as phase-contrast x-ray imaging methods [1-3 and 5-7]; moreover, based on these techniques, phase-contrast x-ray computed tomography has been developed for observing the inner structures of biological soft tissues [4 and 8-11]. Among these techniques, Diffraction Enhanced Imaging (DEI) is a promising synchrotron-based imaging technique.
It derives contrast from the x-ray attenuation of the object, refraction, and the rejection of small-angle scattering (extinction) [8, 12 and 13]. DEI significantly improves the spatial resolution of the images and is suitable for observing the inner details of biomedical tissues. Recently, much theoretical and applied research on the DEI technique has been carried out [12-17]. In the medical area, DEI is being studied for medical research and for clinical diagnosis, such as its usefulness in breast cancer diagnosis [17]. In our study, we introduce DEI


to image the cochleae of guinea pigs.
The cochlea is a very complex micro-organ whose unique function of generating hearing is attributed to its peculiar structural arrangement. Cochlear morphologic research is the basis of physiological and pathological study of the inner ear. Conventionally, cochlear study depends on histological sections and stretch preparations observed under the optical or electron microscope [18 and 19]. The procedure is rather complex, and only a part of the cochlear inner structures can be observed. The novel imaging technique, DEI, overcomes these difficulties. In our experiments, the acquired DEI images successfully display the holistic spiral structure and inner details of guinea pig cochleae both in planar mode and in CT mode.

Ⅱ. THE PRINCIPLE OF DEI

DEI is a synchrotron-based radiographic imaging method that introduces a matched monochromator-analyzer crystal system [5]. The monochromator crystal is used to generate a nearly monochromatic x-ray beam, and the analyzer crystal is used as a scatter-rejection and diffraction optic. According to Bragg's diffraction law, only x-rays satisfying the Bragg condition of the analyzer crystal will be diffracted onto the detector [5, 8 and 13]. The angular acceptance is described by the analyzer's rocking curve, which is depicted in Fig.1 [5 and 8]. Due to the angular sensitivity of the analyzer crystal, DEI can measure the x-ray attenuation, refraction, and small-angle scattering (scattering angles less than milliradians) [12 and 13].
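This angular sensitivity is what makes DEI contrast possible: a refraction deviation well under a microradian moves a ray along the steep shoulder of the rocking curve and produces a large intensity change. A minimal numerical sketch, assuming an idealized Gaussian rocking curve (real analyzer curves are Darwin-shaped); the 4 µrad FWHM and 0.5 µrad refraction angle are illustrative values, not measurements from this experiment:

```python
import numpy as np

# Idealized Gaussian rocking curve R(theta); real analyzer rocking
# curves are Darwin-shaped, but a Gaussian suffices for illustration.
fwhm = 4e-6                    # rad, a few microradians (illustrative)
sigma = fwhm / 2.355           # convert FWHM to a standard deviation

def R(theta):
    return np.exp(-theta**2 / (2.0 * sigma**2))

# Detune the analyzer to the low-angle half-slope point theta_L = -FWHM/2,
# where the reflectivity is 0.5 and the slope is steepest.
theta_L = -fwhm / 2.0

# A sub-microradian refraction deviation from the sample shifts the ray
# along the rocking curve and changes the diffracted intensity strongly.
dtheta_z = 0.5e-6              # rad, illustrative refraction angle
I0 = R(theta_L)                # intensity without refraction
I1 = R(theta_L + dtheta_z)     # intensity with refraction
print(f"relative intensity change: {(I1 - I0) / I0:.1%}")
```

On the half-intensity shoulder this 0.5 µrad deviation changes the recorded intensity by roughly a third, which is why the shoulder images are so sensitive to refraction while the peak image mainly records absorption.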
Since the range of acceptance angles of the analyzer crystal is a few microradians, DEI automatically provides a high degree of scatter rejection, which results in extinction contrast [13].
As pointed out in [5 and 13], DEI yields two images: an apparent absorption image and a refraction image. When the analyzer crystal is set at a relative angle $\pm\Delta\theta_D/2$ (where $\Delta\theta_D$ is the full width at half maximum of the rocking curve) from the Bragg angle $\theta_B$, two images at the low- and high-angle positions of the analyzer crystal are acquired. Each of these images holds apparent absorption and refraction information about the object. If we denote the low- and high-angle settings by $\theta_L$ and $\theta_H$ and the corresponding images by $I_L$ and $I_H$ respectively, then the following fundamental equations can be applied to the images $I_L$ and $I_H$ on a pixel-by-pixel basis [5 and 13]:

$$I_M = \frac{I_L\,\frac{dR}{d\theta}(\theta_H) - I_H\,\frac{dR}{d\theta}(\theta_L)}{R(\theta_L)\,\frac{dR}{d\theta}(\theta_H) - R(\theta_H)\,\frac{dR}{d\theta}(\theta_L)} \qquad (1)$$

$$\Delta\theta_z = \frac{I_H\,R(\theta_L) - I_L\,R(\theta_H)}{I_L\,\frac{dR}{d\theta}(\theta_H) - I_H\,\frac{dR}{d\theta}(\theta_L)} \qquad (2)$$

where $R(\theta)$ is the reflectivity of the analyzer crystal, $\Delta\theta_z$ is the angular deviation in the vertical direction with respect to the crystal planes, and $I_M$ is the modified intensity through the object due to refraction, absorption and extinction within the sample. Apparent absorption and refraction images can simply be obtained by adding and subtracting the images $I_L$ and $I_H$ according to equations (1) and (2) [13]. The apparent absorption image derives contrast from absorption and scatter rejection (extinction), while the refraction image derives contrast only from x-ray refraction, on the assumption that the procedure is free from scattering.

Fig.1. Analyzer's rocking curve

Ⅲ. EXPERIMENTAL METHOD

A.
The setups

The experiments were performed at beamline 4W1A of the Beijing Synchrotron Radiation Facility (BSRF). Fig.2 depicts the two-crystal DEI setup constructed at the topography station of BSRF [20]. A nearly monochromatic x-ray beam is generated from the white incident synchrotron beam by a silicon (111) crystal monochromator. This beam then traverses the sample and is diffracted and refracted by an analyzer crystal which is similar to the type used in the


monochromator. The sample scanning stage is placed between the two crystals and is controlled by stepper motors; it can automatically rotate through 360°.
Fig.3 shows the conventional radiography experimental setup based on the synchrotron source. The two differences between this system and the DEI system are the absence of the analyzer crystal and the position of the detector close to the subject. Conventional absorption-based images taken with this system are used for comparison with the DEI images.

B. The x-ray source

The synchrotron incident beam from beamline 4W1A is nearly parallel at the entrance to the experimental hutch, which is 43 m from the beam source. The beam has a maximum horizontal divergence of 1 mrad and a maximum vertical divergence of 0.36 mrad. The tunable energy range of the synchrotron beam in this system is 3-22 keV. The aperture of the beam used for the experiments was 20 mm high and 8 mm wide. The beam current ranged from 50 to 110 mA. In our experiments, monochromatic beam energies of 8 keV, 9.4 keV, 10 keV and higher were tried, and all measurements shown in this paper were acquired at an energy of 9.4 keV.

C. Detectors

Two types of detectors were used in our experiments: one is an x-ray fast digital imager (FDI) camera system with a spatial resolution of 11 µm, and the other is Fuji IX80 x-ray film with a resolution of 0.3-0.8 µm. Images acquired by the FDI camera were stored directly in the computer and displayed synchronously on the screen, while those recorded on x-ray film were observed under a light microscope and then digitized.

D. Subjects

Decalcified guinea pig cochleae were selected as the subjects.
The subjects were prepared in accordance with the following procedure. Firstly, both cochleae were carefully taken out after perfusion of the motley guinea pigs with 0.9% sodium chloride solution and 4% polyformaldehyde solution. Secondly, we removed the footplates of the stapes, punctured the round-window membranes, and immersed the cochleae in 30% saccharose formaldehyde solution for 24 hours' post-fixation. Thirdly, 3 to 5 days' decalcification with Plank's solution was carried out, which is the most important step. Finally, all treated cochleae were preserved in 4% polyformaldehyde solution.

Fig.2. Diagram of the DEI experimental setup

Fig.3. Diagram of the conventional radiography setup.

E. Measurements

DEI images of the cochleae were acquired with the analyzer tuned to the peak point θ = 0 and the symmetrical points θ = ±Δθ_D/2 of the rocking curve. The measurements were recorded on x-ray film or by the FDI camera. Equations (1) and (2) were used to compute the apparent absorption and refraction images.
Then DEI-CT scans were performed at the corresponding angular positions of the analyzer crystal by rotating the subjects through 180°, while three sets of projections were acquired from the subjects. The FBP algorithm was used to produce CT reconstructions from the projections, and the Hamming window function was selected after comparing several different filter functions, including the Ram-Lak, Shepp-Logan, and Hamming windows.
Finally, conventional absorption-based images of the cochlea were taken with the conventional radiography setup for comparison with the DEI images.

Ⅳ. RESULTS

A. The acquired images

Three directly measured DEI images and one conventional radiography image are given in Fig.4, in which (a)-(c) show the measurements with the analyzer crystal


tuned to the peak point (θ = 0) and the two symmetrical angle points (θ = ±Δθ_D/2) of the rocking curve respectively, and (d) shows the conventional radiography image. All images in Fig.4 were recorded by the FDI camera.
From Fig.4, three effects are clearly revealed.
Firstly, all DEI images clearly display the macroscopic spiral structure of the cochlea as well as the microcosmic inner details, and all of them show enhanced edges of the objects, due to the differential principle of the method. This demonstrates that DEI is an effective way to observe the inner structures of minute organs and micro-tissues.
Secondly, all DEI images provide obviously higher contrast and better resolution than the conventional radiography image. The main reason is the combined contrast from x-ray absorption, refraction and extinction in the DEI images [13].
Thirdly, the low- and high-angle images have opposite shadows, which is due to the opposite directions of diffraction enhancement. The peak image shows obviously different information from the low- and high-angle images, because the shoulder images are sensitive to variations in x-ray refraction while the peak image mainly records absorption information.
Fig.5 (a) shows one peak image that was taken at the peak point of the analyzer's rocking curve and recorded on x-ray film. Fig.5 (b) shows a magnified image of the selected region.
The magnified image displays more details.
After observing the images in Fig.5 and comparing them with the images in Fig.4, two observations can be made. The first is that the DEI images recorded on film have higher spatial resolution and therefore provide much more detailed information than those recorded by the FDI camera. The second is that DEI can be used to image cellular-level structures in planar mode on x-ray film. In Fig.5 (a), besides the macroscopic structures of the cochlear axis, spiral lamina, cochlear duct, vestibular scala, and tympanic scala, more microcosmic, even cellular-level, structures can be seen clearly, such as the basilar membrane, basilar membranous crest, vestibular membrane, spiral prominence, spiral limbus, static cilia of the hair cells, and limbus of the Hensen cells; these cannot be recognized distinctly in the DEI images recorded by the FDI camera because of its limited spatial resolution. The cellular structures are clearer in the magnified image of Fig.5 (b). Thus we anticipate that, with improvements in the sensitivity and resolution of digital detectors, it will become possible to image biological tissues at the cellular level with the DEI-CT technique.

Fig.5. (a) The peak image recorded on x-ray film.

Fig.4. Result images recorded by the FDI camera:
(a) the peak image, (b) the low-angle side image, (c) the high-angle side image, (d) the conventional synchrotron radiography image.
Where: 1 basilar membrane, 2 basilar membranous crest, 3 cochlear duct, 4 spiral lamina, 5 vestibular membrane, 6 limbus of Hensen cell, 7 static cilia of hair cell, 8 cochlear axis, 9 spiral prominence, 10 tympanic scala, 11 vestibular scala, 12 spiral limbus.
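The pixel-by-pixel application of equations (1) and (2) is straightforward to express in code. A minimal numpy sketch: the rocking-curve reflectivities and slopes below are made-up illustrative numbers (not values measured at BSRF), and the synthetic "measurement" is generated from the same linear model that the equations invert:

```python
import numpy as np

def dei_decompose(I_L, I_H, R_L, R_H, dR_L, dR_H):
    """Pixel-wise DEI decomposition following equations (1) and (2).

    I_L, I_H  : low-/high-angle images (scalars or arrays)
    R_L, R_H  : rocking-curve reflectivities at theta_L, theta_H
    dR_L, dR_H: rocking-curve slopes dR/dtheta at theta_L, theta_H
    Returns the apparent-absorption intensity I_M and the
    refraction-angle map delta_theta_z.
    """
    num = I_L * dR_H - I_H * dR_L
    I_M = num / (R_L * dR_H - R_H * dR_L)          # equation (1)
    delta_theta_z = (I_H * R_L - I_L * R_H) / num  # equation (2)
    return I_M, delta_theta_z

# Illustrative check: forward-model a single pixel, then invert it.
R_L = R_H = 0.5               # half-slope reflectivities (illustrative)
dR_L, dR_H = 2e5, -2e5        # slopes in 1/rad: rising low side, falling high side
true_I, true_angle = 0.8, 1e-6
I_L = true_I * (R_L + dR_L * true_angle)   # 0.56
I_H = true_I * (R_H + dR_H * true_angle)   # 0.24
I_M, angle = dei_decompose(I_L, I_H, R_L, R_H, dR_L, dR_H)
print(I_M, angle)   # recovers approximately 0.8 and 1e-6
```

Because numpy operations broadcast elementwise, the same function applies unchanged to whole image arrays, which is exactly the pixel-by-pixel sense in which equations (1) and (2) are used.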


Fig.5 (b) The magnified image of the selected region from Fig.5 (a).
Where: thick arrow: basilar membranous crest; thin arrow: static cilia of hair cell; thick arrowhead: limbus of Hensen cell; asterisk: spiral limbus.

B. Apparent absorption and refraction images

Fig.6 (a) and (b) show the apparent absorption and refraction images respectively. They are obtained by adding and subtracting the images in Fig.4 (b) and Fig.4 (c) according to equations (1) and (2). Apparent absorption is a combined contrast of extinction and absorption, and the apparent absorption image is similar to the peak image. The refraction image contains contrast only from x-ray refraction and looks as if it were three-dimensional: it emphasizes the direction of refraction through the black shadows at the left edges and the white shadows at the right edges.
On the other hand, the apparent absorption and refraction images have lower contrast than the directly acquired DEI images, due to the noise introduced by the computing procedure.

Fig.6. (a) The apparent absorption image, (b) the refraction image.

C. CT reconstruction

Partial slices of the CT reconstructions are given in Fig.7 and Fig.8, in which Fig.7 (a)-(c) are from the peak, low-angle and high-angle reconstructions respectively, and Fig.8 (a) and (b) are from the apparent absorption and refraction reconstructions. The slice thickness of all CT reconstructions is 11 µm and the size is 1024×1024.

Fig.7.
CT reconstructions: (a) from the peak projection, (b) from the low-angle side projection and (c) from the high-angle side projection.

Fig.8. CT reconstructions: (a) from the apparent absorption data sets and (b) from the refraction data sets.

The CT images display the internal structures of the cochlea without superimposition, and the cochlear spiral structure is clearly recognized when browsing through all the CT slices. Comparing all reconstructions, those from different data sets contain different information and show different characteristic features. At the same time, it is noted that the contrast of the CT reconstructions is lower than that of the two-dimensional images. According to our analysis, there are two main reasons for this: subject motion during CT scanning and the limitations of the reconstruction algorithm.

Ⅴ. CONCLUSION

Firstly, some cellular-level cochlear microstructures are clearly displayed in the planar DEI images recorded on x-ray film. These high-resolution DEI images of guinea pig cochleae show that the DEI technique is an effective way to


observe biomedical microstructures, even at the cellular level.
Secondly, the CT reconstructions display the fine spiral structure of the cochlea and ample details. This demonstrates that DEI-CT is a novel and feasible tomography technique for observing biomedical inner structures.
Thirdly, it is clear that the acquisition of high-quality DEI images depends dramatically on the resolution of the detectors. It is an important and challenging task to develop digital detectors with resolutions of a few microns. We anticipate that it will be possible to image micro-organs and minute tissues with DEI-CT at the cellular level as the sensitivity and resolution of detectors improve.
In summary, DEI is a developing and useful technique, not only for medical applications but also in other areas. More studies with minute organs of the human body and pathologic tissues should be done in order to bring this technique to clinical applications.

ACKNOWLEDGEMENT

We thank Dr. Qingxi Yuan, Dr. Wanxia Huang, Dr. Junyue Wang, Chen Wang, Xin Shi, and Xin Zheng for assistance in our experiments. This research was supported by the Beijing Natural Science Foundation (No. 7031001).

REFERENCES

[1] A. Momose and J. Fukuda, "Phase-contrast radiographs of nonstained rat cerebellar specimen," Med.
Phys., vol.22, pp.375-380, 1995.
[2] A. Momose, T. Takeda and Y. Itai, "Biological imaging based on x-ray phase measurement-Toward applications in cancer diagnosis," Hitachi Review, vol.48, pp.110-115, 1999.
[3] T. J. Davis and D. Gao, "Phase-contrast imaging of weakly absorbing materials using hard x-rays," Nature, vol.373, pp.595-597, 1995.
[4] A. Momose, T. Takeda, K. Hirano, "Phase-contrast x-ray computed tomography for observing biological soft tissues," Nature Medicine, vol.2, pp.473-475, 1996.
[5] D. Chapman, W. Thomlinson, R. E. Johnston, D. Washburn, E. Pisano, N. Gmur, et al., "Diffraction enhanced x-ray imaging," Phys. Med. Biol., vol.42, pp.2015-2025, 1997.
[6] A. W. Stevenson, T. E. Gureyev, D. Paganin, S. W. Wilkins, T. Weitkamp, A. Snigirev, et al., "Phase-contrast x-ray imaging with synchrotron radiation for materials science applications," Nucl. Instrum. Meth. Phys. Res. B, vol.199, pp.423-435, 2003.
[7] S. W. Wilkins and T. E. Gureyev, "Phase-contrast imaging using polychromatic hard X-rays," Nature, vol.384, pp.335-338, 1996.
[8] F. A. Dilmanian, Z. Zhong, B. Ren, X. Y. Wu, L. D. Chapman, I. Orion, et al., "Computed tomography of x-ray index of refraction using the diffraction enhanced imaging method," Phys. Med. Biol., vol.45, pp.933-946, 2000.
[9] I. Koyama, Y. Hamaishi, A. Momose, T. Warwick, "Phase tomography using diffraction-enhanced imaging," AIP Conference Proceedings, vol.705, pp.1283-1286, 2004.
[10] A. Momose, "Demonstration of phase-contrast X-ray computed tomography using an x-ray interferometer," Nucl. Instrum. Meth. Phys. Res., vol.352, pp.622-628, 1995.
[11] A. Momose, "Phase-sensitive imaging and phase tomography using x-ray interferometers," Optics Express, vol.11, pp.2303-2314, 2003.
[12] O. Oltulu, Z. Zhong, M. Hasnah, M. N. Wernick, and D. Chapman, "Extraction of extinction, refraction and absorption properties in diffraction enhanced imaging," J. Phys. D: Appl.
Phys., vol.36, pp.2152-2156, 2003.
[13] O. Oltulu, "A unified approach to x-ray absorption-refraction-extinction contrast with diffraction enhanced imaging," 2003.
[14] C. David, T. Weitkamp, A. Diaz, E. Deckardt, B. Nohammer, F. Pfeiffer, et al., "Hard x-ray differential phase contrast imaging," PSI scientific reports, 2003.
[15] D. Paganin, T. E. Gureyev, K. M. Pavlov, R. A. Lewis, M. Kitchen, "Phase retrieval using coherent imaging systems with linear transfer functions," Optics Communications, vol.234, pp.87-105, 2004.
[16] E. Pagot, S. Fiedler, P. Cloetens, A. Bravin, P. Coan, K. Fezzaa, et al., "Quantitative comparison between two phase contrast techniques: diffraction enhanced imaging and phase propagation imaging," Phys. Med. Biol., vol.50 (4), pp.709-724, 2005.
[17] J. Keyriläinen, M. Fernández, S. Fiedler, A. Bravin, M. L. Karjalainen-Lindsberg, P. Virkkunen, et al., "Visualisation of calcifications and thin collagen strands in human breast tumour specimens by the diffraction-enhanced imaging technique: a comparison with conventional mammography and histology," Eur. J. Radiol., vol.53, pp.226-237, 2005.
[18] A. H. Voie, "Imaging the intact guinea pig tympanic bulla by orthogonal-plane fluorescence optical sectioning microscopy," Hearing Research, 171, pp.119-128, 2003.
[19] M. Sugawara, Y. Ishida, and H. Wada, "Mechanical properties of sensory and supporting cells in the organ of Corti of the guinea pig cochlea-study by atomic force microscopy," Hearing Research, 192, pp.57-64, 2004.
[20] G. Li, N. Wang, and Z. Y. Wu, "Hard x-ray diffraction enhanced imaging only using two crystals," Chinese Science Bulletin, vol.49, no.20, pp.2120-2125, 2004.


A novel micro-tomography algorithm for X-ray diffraction enhanced imaging

Xin Gao 1,5, Shuqian Luo* 1,5, Bo Liu 2, Maolin Xu 4, Hongxia Yin 1, Hang Shu 3, Xiulai Gao 2, Peiping Zhu 3
1 College of Biomedical Engineering, Capital University of Medical Science, Beijing, China
2 Department of Anatomy, Capital University of Medical Science, Beijing, China
3 Institute of High Energy Physics, Chinese Academy of Science, Beijing, China
4 Beijing Information Technology Institute, Beijing, China
5 National Laboratory of Pattern Recognition, Beijing, China

Abstract-Phase contrast X-ray micro-tomography is a promising technique for observing low-contrast details inside weakly absorbing objects. Existing phase contrast tomography methods based on DEI use phase contrast images, which contain not only phase information but also absorption information, as the projections. In this paper, we present a novel algorithm which greatly increases the proportion of refraction information in the phase contrast images by computing the difference between the two sets of images acquired on different angle sides of the rocking curve, and which adopts a complete set of projections. The images reconstructed from DEI images of guinea pig cochleae show the helical structure of the cochlea very clearly, including the detailed structure and spatial location of the cochlear axis. Compared with conventional phase contrast tomography methods, the new method gives much higher spatial resolution and is more suitable for reconstructing the micro-structures of small human organs.

Index Terms-Phase contrast micro-tomography, diffraction enhanced imaging, image reconstruction

I.
INTRODUCTION

X-ray phase contrast imaging has been actively studied in X-ray radiography because it can non-destructively investigate the internal structure of light-element-based materials without staining, a task for which conventional X-ray imaging is inadequate. The imaging theory is based on the fact that the phase shift is three orders of magnitude larger than the absorption when X-rays pass through weakly absorbing objects. The method extends observation to non- or weakly absorbing objects, especially biological soft tissues.
The diffraction enhanced imaging (DEI) method is one of the phase contrast imaging methods and was developed by Chapman et al. [1]. Synchrotron-based DEI uses monochromatic beams and an analyzer crystal positioned between the subject and the detector. The analyzer allows only those X-rays that travel through the specimen and fall into the acceptance angle of the monochromator/analyzer system to be detected. The resulting images clearly reveal the structure inside weakly absorbing objects at micro-scale resolution without serious radiation exposure. The DEI method greatly improves the spatial resolution of X-ray radiography.
Tomography processing techniques can be applied readily to DEI images. Because the phase shift is equivalent to the projection of the refractive indices, a phase contrast tomogram revealing the spatial distribution of the variation of refractive indices inside the object can be reconstructed from DEI images using a normal tomography algorithm. How effectively the phase shift is acquired from DEI images, and what kind of reconstruction algorithm is adopted, greatly influence how well the boundaries between materials with different refractive indices are displayed. On one hand, the images taken by the existing DEI methods contain not only phase shift information but also a little absorption information.
On the other h<strong>and</strong>, theimage projections by DEI needed for micro-tomography areincomplete sets. Thereby, cross section reconstructed fromthus acquired image projections would loss a portion ofThe asterisk (∗) indicates the corresponding author., e-mail: sqluo@ieee.<strong>org</strong>166


the boundary information between materials with different refractive indices and reduce the quality of the sectional phase contrast image. In this paper, we present a novel method for phase contrast tomography based on DEI. The difference images between the two sets of DEI images taken on the low- and high-angle sides of the rocking curve respectively are used as a new set of DEI images at different angles of view. In addition, we adopt image projections within 2π instead of π for reconstruction, which form a complete set. The image results show that the new method is capable of clearly displaying the micro-structures in the cochlea of the guinea pig.

II. MICRO-TOMOGRAPHY OF REFRACTIVE INDICES USING DEI

A. Principle of DEI

In DEI, a beam of monochromatic X-rays generated by the monochromator is used as the incident beam. The radiation is absorbed, refracted and scattered as it strikes the specimen. A portion of the beam is deviated from its original direction (the so-called angular deviation), caused either by crossing boundaries between materials with different refractive indices or by small-angle scattering. The DEI modality separates the beam affected by refraction and attenuated by absorption from the scattering [1][2]. The separation is achieved by an analyzer, since only X-rays aligned within the angular acceptance of the analyzer will be diffracted onto the detector. The angular acceptance is called the rocking curve of the crystal [3].
Two images containing refraction and absorption information are acquired on the low- and high-angle sides of the rocking curve of the analyzer in DEI.
They can be expressed as [4]:

$$I_L = I_R\left[R(\theta_L) + \frac{dR}{d\theta}(\theta_L)\,\Delta\theta_Z\right] \qquad (1)$$

$$I_H = I_R\left[R(\theta_H) + \frac{dR}{d\theta}(\theta_H)\,\Delta\theta_Z\right] \qquad (2)$$

where $I_L$, $I_H$ denote the intensities of the images on the low-angle side ($\theta_L$) and the high-angle side ($\theta_H$) of the rocking curve, $I_R$ is the intensity through the specimen, $R(\theta)$ is the height of the rocking curve at the position $\theta$, and $\Delta\theta_Z$ is the vertical deviation generated by a boundary between materials with different refractive indices for a horizontal incident beam. The following results, independent of each other, can be derived from formulas (1) and (2):

$$I_R = \frac{I_L\,\frac{dR}{d\theta}(\theta_H) - I_H\,\frac{dR}{d\theta}(\theta_L)}{R(\theta_L)\,\frac{dR}{d\theta}(\theta_H) - R(\theta_H)\,\frac{dR}{d\theta}(\theta_L)} \qquad (3)$$

$$\Delta\theta_Z = \frac{I_H\,R(\theta_L) - I_L\,R(\theta_H)}{I_L\,\frac{dR}{d\theta}(\theta_H) - I_H\,\frac{dR}{d\theta}(\theta_L)} \qquad (4)$$

The phase shift can be obtained from the above equations. The algorithm is applied on a pixel-by-pixel basis to the diffracted images from the low- and high-angle sides of the rocking curve.

B. Theory of DEI Micro-tomography

When an X-ray beam passes through a specimen with non-homogeneous refractive indices, the propagation of the beam follows [3]:

$$\frac{d}{dl}(n\,\mathbf{i}) = \nabla n \qquad (5)$$

where $n$ is the refractive index, $\mathbf{i}$ is the unit vector tangent to the ray's path, and $l$ is the length along the path in the specimen. Then

$$n\,\frac{d\mathbf{i}}{dl} + \mathbf{i}\,\frac{dn}{dl} = \nabla n \qquad (6)$$

The angular deviation of the beam in the vertical direction is proportional to $\partial n/\partial z$ for a horizontal ray path.


Consequently, the line integral of these deviations along the path of the beam is [4]:

$$\Delta\theta_z = \int \frac{\partial n}{\partial z}(l)\,dl \qquad (7)$$

The above equation is similar in form to the usual line integrals used in normal CT. A set of integrals corresponding to one particular angle of view is said to be a set of projections of the specimen. So, given a number of such projections at different angles of view, conventional Convolution Back Projection (CBP) or Filtered Back Projection (FBP) algorithms can be used to reconstruct the sectional distribution of refractive indices inside the specimen from DEI images, which is the motivation of DEI micro-tomography.

III. NEW REFRACTION-DOUBLED, FULL-SCAN MICRO-TOMOGRAPHY APPROACH BY DEI

To reconstruct the sectional distribution of refractive indices using DEI images, the general method is to acquire DEI images at one position on the shoulder of the rocking curve (e.g. on the low- or high-angle side), at equiangular intervals over $[0, \pi]$, and to use them to reconstruct a sectional estimate of the refractive-index distribution within the specimen by a normal reconstruction algorithm [4][5][6][7].
There are two issues to be considered. How can the proportion of refraction information to absorption information in a DEI image be increased? And are image projections acquired by DEI over the interval $[0, \pi]$ enough for tomography?
Although we can acquire the vertical deviation (refraction information) generated by the boundary of materials with different refractive indices through equation (4), the process is complicated. We already know that DEI images taken on the low- or high-angle side of the rocking curve contain refraction information and a little absorption information.
Let $I_{Ref}(\theta_L)$ and $I_{Ref}(\theta_H)$ be the refraction intensities on the low- and high-angle sides of the rocking curve, respectively, and let $I_{Abs}(\theta_L)$ and $I_{Abs}(\theta_H)$ be the corresponding absorption intensities. For refracted X-rays there is a variation in direction due to the slope of the rocking curve. That is to say, if the intensity of a DEI image obtained on the low-angle side of the rocking curve is composed of $I_{Abs}(\theta_L) + I_{Ref}(\theta_L)$, the intensity of a DEI image obtained on the high-angle side is composed of $I_{Abs}(\theta_H) - I_{Ref}(\theta_H)$. By subtracting the high-angle DEI image from the low-angle one, the refraction information is enhanced while the absorption information is suppressed; the refraction information is thereby doubled. The process can be expressed as

$I = \left( I_{Abs}(\theta_L) + I_{Ref}(\theta_L) \right) - \left( I_{Abs}(\theta_H) - I_{Ref}(\theta_H) \right)$   (8)

A cross-section reconstructed from a set of DEI images processed in this way reveals the distribution of refractive indices better.

Usually the interval $[0, \pi]$ is used as the integration limits in the X-ray tomography formula; that is, the projections for reconstruction are sampled over $[0, \pi]$, since the attenuation of a beam transmitted through the object at angle $\theta$ equals that at $\theta + \pi$:

$p_\theta(t) = p_{\theta+\pi}(-t)$   (9)

where $t$ is the normal distance from the origin to the beam. The projections in the interval $(\pi, 2\pi]$ are therefore redundant for absorption-contrast X-ray tomography. This does not hold for the refraction deviation, however: the refraction deviation of a beam transmitted through the object at $\theta$ does not equal that at $\theta + \pi$,

$r_\theta(t) \neq r_{\theta+\pi}(-t)$   (10)

so the reconstruction formula [8] needs to be modified.
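Under the stated composition of the two images, the subtraction in equation (8) cancels the absorption terms and doubles the refraction term. A small numpy illustration on synthetic arrays (not experimental data; it assumes equal absorption on both sides of the rocking curve):

```python
import numpy as np

rng = np.random.default_rng(0)
absorption = rng.uniform(0.5, 1.0, size=(4, 4))   # I_Abs, identical on both sides
refraction = rng.normal(0.0, 0.05, size=(4, 4))   # I_Ref, sign flips with side

img_low = absorption + refraction    # low-angle side:  I_Abs + I_Ref
img_high = absorption - refraction   # high-angle side: I_Abs - I_Ref

doubled = img_low - img_high         # eq. (8): absorption cancels exactly
assert np.allclose(doubled, 2.0 * refraction)
```

In real data the two absorption terms are only approximately equal, so the subtraction suppresses rather than perfectly cancels the absorption contrast.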


Writing $t = x\cos\theta + y\sin\theta$, the full-scan reconstruction can be split into two half-scans:

$f(x,y) = \int_0^{2\pi}\!\!\int_0^{\infty} w\,F(w,\theta)\,e^{j2\pi w t}\,dw\,d\theta = \int_0^{\pi}\!\!\int_0^{\infty} w\,F(w,\theta)\,e^{j2\pi w t}\,dw\,d\theta + \int_{\pi}^{2\pi}\!\!\int_0^{\infty} w\,F(w,\theta)\,e^{j2\pi w t}\,dw\,d\theta$   (11)

We analyze the latter part of equation (11) first. Substituting $\theta \to \theta + \pi$, so that $t \to -t$,

$f'(x,y) = \int_0^{\pi}\!\!\int_0^{\infty} w\,F(w,\theta+\pi)\,e^{-j2\pi w t}\,dw\,d\theta$   (12)

Let $w' = -w$:

$f'(x,y) = \int_0^{\pi}\!\!\int_{-\infty}^{0} (-w')\,F(-w',\theta+\pi)\,e^{j2\pi w' t}\,dw'\,d\theta$   (13)

Rewriting the above equation, and noting that $F(w,\theta)$ and $F(-w,\theta+\pi)$ are symmetric about the origin,

$f'(x,y) = \int_0^{\pi}\!\!\int_{-\infty}^{0} |w|\,F(w,\theta)\,e^{j2\pi w t}\,dw\,d\theta$   (14)

Analyzing the former part of equation (11) in the same way and adding the two parts gives

$f(x,y) = \int_0^{\pi}\!\!\int_{-\infty}^{\infty} |w|\,F(w,\theta)\,e^{j2\pi w t}\,dw\,d\theta$   (15)

which is the reconstruction formula of general tomography; since the two parts contribute equally,

$f(x,y) = 2\,f'(x,y)$   (16)

Thus the projections taken with frequency limits $(0,\infty)$ over the full interval $[0, 2\pi]$ reconstruct twice the image reconstructed from the rebinned half-scan data, and the projections with limits $(0,\infty)$ over $[0, 2\pi]$ can be expressed by rebinning the projections with limits $(-\infty,\infty)$ over $[0, \pi]$. Based on equation (15), the conventional transform methods (CBP, FBP) can be adopted to solve $f'(x,y)$.

A summary of the algorithm is given below:

1) Preprocess the image projections of DEI, e.g.
subtract the background, to suppress noise.

2) Compute the difference images from the two sets of DEI projections taken on the low- and high-angle sides of the rocking curve, yielding a new set of DEI projections in which the refraction information is magnified.

3) Separate each DEI image into two parts, $(-\infty, 0)$ and $[0, +\infty)$. Rebin the separated images with the same integration limits as in equation (14), and reconstruct each of the two rebinned sets of projections with a conventional reconstruction algorithm (CBP, FBP, etc.). In this way, two sets of sectional distributions of the variation of the


refractive indices inside the specimen can be obtained.

4) Add the two sets of reconstructed images corresponding to the same slice on a pixel-by-pixel basis.

IV. RESULTS

Fig. 1 shows the DEI images of the cochlea of a guinea pig taken on the low- and high-angle sides of the rocking curve, respectively, with a 9.2 keV synchrotron radiation source and a CCD detector with 11 µm resolution. The DEI images preprocessed by background subtraction are shown in Fig. 2.

Fig. 1. Images of the guinea pig cochlea by the DEI method: (a) imaging on the low-angle side of the rocking curve, (b) imaging on the high-angle side of the rocking curve

Fig. 2. Preprocessed DEI images: (a) imaging on the low-angle side of the rocking curve, (b) imaging on the high-angle side of the rocking curve

For comparison, the results reconstructed from the DEI images of the cochlea by three methods are shown in Fig. 3: (a) the general method, using the DEI projections obtained on the low-angle side of the rocking curve within $[0, \pi]$; (b) the refraction-doubled method, using the DEI projections obtained on the low- and high-angle sides within $[0, \pi]$; (c) the refraction-doubled with full-scan method, based on (b) with a full scan within $[0, 2\pi]$.

Fig. 3. Images reconstructed by (a) the general method, (b) the refraction-doubled method, (c) the refraction-doubled with full-scan method

The 3D reconstruction of the cochlea, based on the 2D reconstructed images and produced with the software package Analyze 6.1, is shown in Fig. 4. The helical structure of the cochlea can be clearly observed.

Fig. 4. The 3D reconstruction result of the cochlea of the guinea pig
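Step 3 of the algorithm above hands the rebinned projections to a conventional reconstruction method. As a minimal illustration of that FBP stage (generic parallel-beam numpy code, not the authors' implementation; the signed-frequency rebinning itself is omitted), the following reconstructs a centred disk from its analytic projections:

```python
import numpy as np

def ramp_filter(projections):
    """Apply the |w| ramp of eq. (15) to each projection row in Fourier space."""
    n = projections.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                       # |w| in cycles/sample
    return np.real(np.fft.ifft(np.fft.fft(projections, axis=1) * ramp, axis=1))

def fbp(projections, thetas, size):
    """Filtered back-projection of a (n_angles, n_detectors) sinogram
    onto a size x size grid; thetas sampled over [0, pi)."""
    n_det = projections.shape[1]
    t_axis = np.arange(n_det) - n_det / 2.0 + 0.5          # detector coordinates
    xs = np.arange(size) - size / 2.0 + 0.5
    X, Y = np.meshgrid(xs, xs)
    filtered = ramp_filter(projections)
    recon = np.zeros((size, size))
    for p, th in zip(filtered, thetas):
        t = X * np.cos(th) + Y * np.sin(th)                # back-projection coordinate
        recon += np.interp(t, t_axis, p, left=0.0, right=0.0)
    return recon * np.pi / len(thetas)                     # d(theta) weight

# usage: a centred disk of radius 20 has chord-length projections 2*sqrt(r^2 - t^2)
n_det, n_ang, radius = 128, 180, 20.0
t = np.arange(n_det) - n_det / 2.0 + 0.5
proj_1d = 2.0 * np.sqrt(np.maximum(radius**2 - t**2, 0.0))
thetas = np.linspace(0.0, np.pi, n_ang, endpoint=False)
sino = np.tile(proj_1d, (n_ang, 1))                        # disk: angle-independent
slice_img = fbp(sino, thetas, n_det)                       # ~1 inside, ~0 outside
```

In the paper's pipeline, the sinogram fed to this stage is each rebinned half of the refraction-doubled projections rather than absorption data, but the filtering and back-projection steps are identical.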


The above results demonstrate the advantage of the refraction-doubled with full-scan method over the other DEI micro-tomography methods in terms of the high resolution at boundaries of refractive-index variation inside the guinea pig cochlea. The discontinuity of the cochlear boundary in Fig. 4 was due to distortion of the specimen caused by dehydration while the X-ray irradiated the specimen during the experiment.

V. CONCLUSION AND DISCUSSION

A novel micro-tomography algorithm for X-ray diffraction enhanced imaging, named the refraction-doubled with full-scan method, was developed. The method doubles the refraction information in DEI images by subtraction between DEI images taken on the low- and high-angle sides of the rocking curve, and acquires the DEI images in full-scan mode ($[0, 2\pi]$). The performance of the method was demonstrated using two sets of DEI images of the cochlea of a guinea pig. The results showed that the algorithm differentiates the crossing boundaries of different soft tissues more clearly than the general method, which may enhance the application of the technology in biomedicine.

In future experiments, care should be taken to prevent distortion of the specimen, which greatly degrades the quality of the tomography.

ACKNOWLEDGMENTS

REFERENCES

[1] D. Chapman, W. Thomlinson, R. E. Johnston, D. Washburn, E. Pisano, N. Gmur, Z. Zhong, R. Menk, F. Arfelli and D. Sayers, "Diffraction enhanced x-ray imaging," Phys. Med. Biol., Vol. 42, pp. 2015-2025, 1997.
[2] E. Pagot, S. Fiedler, P. Cloetens, A. Bravin, P. Coan, K. Fezzaa, J. Baruchel and J.
Hartwig, "Quantitative comparison between two phase contrast techniques: diffraction enhanced imaging and phase propagation imaging," Phys. Med. Biol., Vol. 50, pp. 709-724, 2005.
[3] B. W. Batterman and H. Cole, "Dynamical diffraction of X rays by perfect crystals," Rev. Mod. Phys., Vol. 36, pp. 681-717, 1964.
[4] F. A. Dilmanian, Z. Zhong, B. Ren, X. Y. Wu, L. D. Chapman, I. Orion and W. C. Thomlinson, "Computed tomography of x-ray index of refraction using the diffraction enhanced imaging method," Phys. Med. Biol., Vol. 45, pp. 933-946, 2000.
[5] F. Dubus, U. Bonse, T. Biermann, M. Baron, F. Beckmann and M. Zawisky, "Tomography using monochromatic thermal neutrons with attenuation and phase contrast," Developments in X-Ray Tomography III, Proc. of SPIE, Vol. 4503, pp. 359-370, 2002.
[6] D. X. Shi, M. A. Anastasio and X. Ch. Pan, "Phase contrast X-ray tomography using synchrotron radiation," Developments in X-Ray Tomography IV, Proc. of SPIE, Vol. 5535, pp. 310-317, 2004.
[7] E. Pagot, S. Fiedler, P. Cloetens, A. Bravin, P. Coan, K. Fezzaa, J. Baruchel and J. Hartwig, "Quantitative comparison between two phase contrast techniques: diffraction enhanced imaging and phase propagation imaging," Phys. Med. Biol., Vol. 50, pp. 709-724, 2005.
[8] A. G. Ramm and A. I. Katsevich, The Radon Transform and Local Tomography, CRC Press, 1996.
[9] G. T. Herman, Image Reconstruction from Projections: The Fundamentals of Computerized Tomography, Academic Press, 1979.

This research was supported by Beijing Natural Science Foundation No. 7031001.


Intervertebral disc biomechanical analysis using finite element modeling based on medical images

Zheng Wang, Haiyun Li*
College of Biomedical Engineering, Capital University of Medical Sciences, Beijing, China, 100054

Abstract—In this paper, a three-dimensional geometric model of the intervertebral disc and lumbar spine is presented, integrating the anatomical structure derived from spine CT and MRI data. Based on the geometric model, a 3D finite element model of the L1-L2 segment was created. Loads simulating the pressure from above were applied to the FEM, while a boundary condition describing the relative L1-L2 displacement was imposed to account for three-dimensional physiological states. The simulation illustrates the stress and strain distribution and the deformation of the spine. The method has two characteristics compared with previous studies: first, the finite element model of the lumbar spine is based on data directly derived from medical images such as CT and MRI; second, the results of the analysis are more accurate than those obtained from geometric-parameter data. The FEM provides a promising tool for clinical diagnosis and for optimizing individual therapy for intervertebral disc herniation.

Keywords—Reconstruction, Intervertebral disc herniation, FEM, Strain, Stress

I.
INTRODUCTION

Lumbar disc herniation is an important cause of low back pain. Relevant research indicates that disc herniation is generally induced by degenerative deformation of the disc caused by excessive labor or spinal abnormality. To analyze the 3D characteristics of these deformations, which can be useful for the design, evaluation and improvement of orthopedic or surgical operations, a FEM of the lumbar spine was created for studying the biomechanical functioning of the spine.

The lumbar intervertebral disc, a viscoelastic organ located between two vertebral bodies, is a flexible structure. The lumbar discs make up a complex-shaped structure that is the hinge and basis of spinal activity. They transfer the loads of human labor, balance the body, stabilize the spine and absorb vibration. All these functions depend on an intact disc. In pathological states, such as disc herniation caused by excessive loads on the spine, serial changes in the spinal biomechanical properties occur. Clinical diagnosis and therapy for lumbar disc herniation require knowledge of the states of stress and strain throughout the lumbar region. Because the loads on the lumbar spine are greater than on any other part of the spine, disc herniation is most common in the lumbar region. It is therefore a trend to apply biomechanical analysis to lumbar disc herniation, and to the changes in the disc stress-strain distribution after surgery. The following process creates the model of the disc and then meshes the model for finite element analysis.

*Corresponding author. E-mail address: haiyunli@cpums.edu.cn. The project is sponsored by the SRF for ROCS, SEM.

II. METHODOLOGY

We developed a geometric model of the spine based on the CT and MRI data-based anatomical structure of the spine by using the reconstruction software VTK.
Initially, casts of an L1-L2 motion segment taken from a young man were scanned, yielding 30 slice images from the CT scan. The CT slices had a slice distance and thickness of 0.8 mm and a pixel size of 0.33 mm. The data points derived from the CT scan were stored on the computer to reconstruct a geometric model of the spine with VTK. The geometric model was then input into a commercially available finite element package using a bottom-up approach, creating a solid model of the L1-L2 segment. By applying a finite element mesh to the geometric model, the L1-L2 segment was divided into a grid of elements that forms the finite element model.

A. Modeling of the finite element

An isotropic, three-dimensional, nonlinear finite element model of an intact human L1-L2 motion segment was generated, as shown in Fig. 1. Details of the model development have


been given elsewhere and are briefly summarized here: the shape of the lumbar segment was reconstructed from data obtained from CT scans of a human L1-L2 segment.

Each vertebra was modeled with 20-node isoparametric solid elements (SOLID95) using homogeneous and isotropic material properties [5]. The intervertebral disc was modeled using solid elements to simulate incompressible behavior, with a low Young's modulus and a Poisson's ratio close to 0.4499 [5]. To appropriately model the changing contact areas of the facet articulating surfaces under load, the facet articulations were modeled using contact elements. The material properties used in the study were derived from the literature [2, 5, 13]; the material behavior in the model best reflected published experimental lumbar responses. Table I lists the type, number and material properties of the elements used to model the various components of the L1-L2 motion segment; the complete model consisted of 37,449 elements.

TABLE I
ELEMENT TYPE AND MATERIAL PROPERTIES

Material              Element type               Number of elements   Young's modulus (MPa) / Poisson's ratio
Cortical bone of L1   20-node brick (SOLID95)    16596                200 / 0.3
Cortical bone of L2   20-node brick (SOLID95)    18663                200 / 0.3
Disc                  20-node brick (SOLID95)    2082                 4 / 0.4449
Facet joint           Contact (CONTA174)         42                   —

B. Boundary conditions

To ensure the validity and accuracy of the model analysis, boundary conditions were applied to the FE model, using pressures and restraints assigned to surface areas of the model. The inferior surface of the L3 vertebral body and its spinous process were fixed in all directions.
The inferior surface of the L1 vertebral body was coupled to the upper surface of the intervertebral disc, and the bottom of the disc to the upper surface of the L2 vertebral body, in all directions of translation. The facet articulation was modeled as a three-dimensional contact unit using interface elements [5].

Fig. 1. The finite element model of the L1-L2 segment

C. Load cases

In this paper we analyze the stress and strain distribution of the spine. The evaluation was performed with the following load cases: 1) Load-displacement behavior: the displacement of the vertebrae under different loads is observed, giving the strain distribution of the L1-L2 segment. 2) A 1600 N axial compression applied to the superior surface of the model as a uniformly distributed load over all L1 superior surface nodes: the stress distribution of the L1-L2 segment is observed, indicating the high-stress concentration regions as the most likely areas of fracture. 3) Disc bulge: disc herniation is an important part of our study of the L1-L2 segment. It is clinically significant to analyze the degree of disc bulge in different directions under 400 N axial compression, and this analysis can help guide the treatment of disc herniation. From these load cases, the finite element model can be used to predict the biomechanical changes of the human lumbar spine in compression.

III. RESULTS

The finite element model under axial loads shows the


stress and strain distribution of the vertebrae. From the analysis of the L1-L2 segment modeled as solid cortical bone, several results follow:

A. Load displacement

Loads of 500 N, 1000 N, 1500 N, 2000 N and 2500 N were applied to the L1 superior surface. The axial load-displacement behavior is shown in Table II. The table shows that the displacement of the FE model increases with increasing load, and the tendency is approximately linear, which also illustrates that cortical bone has flexible biomechanical characteristics.

TABLE II
MAXIMUM DISPLACEMENT OF THE FE MODEL UNDER DIFFERENT LOADS
(chart of displacement, 0-2.5 mm, versus load, 500 N to 2500 N; the individual values are not recoverable from the extracted text)

B. Stress distribution of the model

Fig. 2 shows the stress distribution of the spine under a 1500 N load. The high stress concentrations are around the vertebral body and pedicle region, because the applied load acts mainly on the upper body of the vertebra. These areas show Von Mises stress ranging gradually from 0.2E-01 N/mm², shown in blue, to the maximum Von Mises stress, indicated in red. The region is a common place for injuries due to loading.

Fig. 2. The Von Mises stress distribution of the L1-L2 segment
The stress concentration may be higher in the pedicle region if the pedicle area carries a bigger proportion of the load. Under the L1 vertebral displacement, the vertebral body and superior articular processes are compressed downward. Therefore, the movement of the vertebrae under the applied load and the restraints placed on the model induce the areas of high stress in the pedicle.

C. Disc bulge

Four nodal points on the intervertebral disc representing the directions of bulge (left lateral, right lateral, left posterior and right posterior) were marked to determine the bulge displacement of the disc under 400 N axial compression [2]. The degree of bulge was observed in the four directions. The results indicate that the tendency of the disc to bulge in the posterior direction is more obvious than in the left-right direction, so posterior herniation is induced more easily than herniation in the other directions. Table III displays the disc bulge in the different directions under the 400 N load.

TABLE III
DISC BULGE DISPLACEMENT AT AXIAL LOAD OF 400 N
(chart of disc bulge, 0-35, for the left lateral, right lateral, left posterior and right posterior directions; the individual values are not recoverable from the extracted text)
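The Von Mises measure used in Fig. 2 collapses the six independent stress-tensor components into a single scalar. A small sketch of that textbook formula (illustrative only; the paper's stress values come from the ANSYS solution):

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Equivalent (Von Mises) stress from the six stress-tensor components.

    sx, sy, sz    : normal stresses
    txy, tyz, tzx : shear stresses (same units as the normal stresses)
    """
    return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                     + 3.0 * (txy**2 + tyz**2 + tzx**2))

# uniaxial tension reduces to the applied stress itself
print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # → 100.0
```

For pure shear the formula gives $\sqrt{3}\,\tau$, which is why shear-dominated regions such as the pedicle can reach high equivalent stresses even under moderate normal loads.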


IV. CONCLUSIONS

A three-dimensional nonlinear FE model of a lumbar motion segment was established to simulate the loading state of the spine. The study indicates the following biomechanical characteristics:

1) The strain of the L1-L2 segment under axial compressive load increases with the applied load.

2) Large stress concentrations are found in the pedicle region, a common place for injuries. Under axial compressive load, the vertebral body and articular processes are compressed downward. The magnitude of stress in the pedicle region depends on the proportion of the load applied to the superior articular processes: the bigger the proportion, the higher the stress on the pedicle.

3) Greater loads cause greater disc bulge; the degree of bulge depends on the magnitude of the applied load. Under the same load, it is easier for the disc to bulge laterally than posteriorly.

This study adds to the understanding of the biomechanical characteristics under loading and can help surgeons make better treatment decisions for low back pain. The model presented here is an initial model of the vertebrae including solid cortical bone, nucleus pulposus and facet joints. In further studies the models will include cortical bone, cancellous bone, annulus fibrosus, nucleus pulposus and ligaments, which will also improve the accuracy of the results and the validity of the evaluation. The next step is to study torsion and shear further and to carry out simulation research on a model that includes surgery. Our aim is to simulate operations and perform surgical navigation by developing and analyzing the finite element model.
Research on FEMs based on CT or MRI images represents a promising tool in decision making and can help optimize individual therapy in the future.

V. ACKNOWLEDGEMENT

The author would like to thank Dr. Guangwei Du, Dr. Huizhi Cao, Mr. Jian Feng, Dr. Haobo Yu, Dr. Wei Xiong and Dr. Weiwei Yu for their help with this paper.

VI. REFERENCES

[1] K. K. Lee and E. C. Teo, "Effects of laminectomy and facetectomy on the stability of the lumbar motion segment," Medical Engineering & Physics, 26 (2004) 183-192.
[2] C. Wong, P. M. Gehrchen and T. Darvann, "Nonlinear finite-element analysis and biomechanical evaluation of the lumbar spine."
[3] T. Pitzen and F. Geisler, Control Engineering Practice, 10 (2002) 83-90.
[4] F. Nabhani and M. Wake, "Computer modelling and stress analysis of the lumbar spine," Journal of Materials Processing Technology, 127 (2002) 40-47.
[5] F. Ezquerro and A. Simón, "Combination of finite element modeling and optimization for the study of lumbar spine biomechanics considering the 3D thorax-pelvis orientation," Medical Engineering & Physics, 26 (2004) 11-22.
[6] Y.-X. Qin, Proceedings of the First Joint BMES/EMBS Conference Serving Humanity, Advancing Technology, Atlanta, GA, USA, Oct. 13-16, 1999.
[7] A. Joshi, "The effect of nucleus implant modulus on the mechanical behavior of lumbar functional spinal units," IEEE, 2003.


Computer applications on medical information


Analytic Modeling and Simulating of the Human Crystalline Lens with Finite Element Method

Zhuo Liu a,*, Bo-Liang Wang b, Xiu-Ying Xu b, Cheng Wang a
a College of Electronic Science and Engineering, National University of Defense Technology, 410073, Changsha, PRC
b Department of Computer Science, Xiamen University, 361005, Xiamen, PRC

Abstract: This paper constructs an axisymmetric, linear finite element model of the human crystalline lens and zonules based on experimental data from published sources. Displacement and pressure are applied to the model to study the deformation of the lens during accommodation. The simulation results show that, under the pull of the zonules, the thickness of the lens decreases linearly and the lens diameter increases linearly. The optical power of the lens increases as the zonule displacement increases. The pressure has a great influence on the shape of the lens and on the optical power: the lens becomes thinner and flatter as the pressure increases, and the optical power decreases by 2.6 D when the pressure increases by 1 kPa. The outcome of this paper is consistent with Schachar's hypothesis on accommodation to some extent. The analytical model presented here can be used in theoretical studies of the accommodation mechanism of the human lens.

Keywords: Human crystalline lens; Finite element; Optical power; Accommodation; Eye pressure

1. INTRODUCTION

The deformation of the human crystalline lens is considered the physiological basis of visual accommodation, which is believed to be driven by the contracting or relaxing of the ciliary muscles and the zonules.
However, the mechanism of lens accommodation is still under discussion. One popular viewpoint was proposed by Helmholtz [1], who believed that the optical power decreases as the ciliary muscles contract and increases as they relax. Although Helmholtz's hypothesis has been widely accepted, many studies have cast doubt on it. In 1992, Schachar [2] propounded the contrary viewpoint that the optical power increases as the ciliary muscles contract.

The most direct and accurate way to study lens accommodation is to measure the accommodating lens in vivo, but the lens is normally partially obscured by the iris, and direct measurement of changes in the ciliary body and lens during accommodation is difficult. In vitro studies provide the opportunity for more detailed measurements and much richer data; however, they are subject to the important uncertainty that the conditions of the lens and surrounding tissues may not be equivalent to in vivo conditions.

Recently, theoretical analysis based on mathematical models of the lens has been used to study the mechanism of lens accommodation; sophisticated mechanical analysis becomes possible with computer-aided design and finite element analysis. Schachar, Huang & Huang [3] used a mathematical method to study the accommodating lens in order to prove Schachar's hypothesis of accommodation. Burd, Judge & Cross [4] constructed a finite element model of the lens and zonules to study the mechanism of accommodation. Victor Wai Shung [5] examined the deformation of the lens when a few periodic radial point pulls were applied at the lens equator using his finite element model.

* This work was supported by the National Natural Science Foundation of China (Project No. 60371012). Corresponding author. Tel: +86-592-2187651. E-mail addresses: liuzhuo999@yahoo.com, newliuzhuo@163.com
Theoretical analysis offers possibilities that are not available in experimental studies, which makes it a useful supplement to experimental studies of the accommodation mechanism.

The purpose of this paper is to construct an analytic model of the lens and zonules, differing from existing models in the detailed modeling procedure and parameters, to study the deformation of the accommodating lens. In Section II a finite element model of the human crystalline lens is constructed on the basis of experimental data. Section III uses the model to simulate the deformation of the lens and zonules during accommodation and analyzes the simulation results in detail. Section IV gives the conclusions of the modeling and simulation.

2. METHODS

2.1 Geometric Model of the Human Crystalline Lens

In this study, as in all previous ones, the lens is assumed to be axisymmetric. Under this assumption, only profile data are needed to construct a lens. The measurement and mathematical description of the lens profile are very important for modeling the lens, but they are beyond the scope of this paper; here, published data measured by Fincham [6] and Brown [7] are used to describe the lens shape. Fig. 1 shows the profile of the lens and zonules, with the main parameters annotated.

The capsule thickness is known to vary with radial position rather than being uniform over the outer surface. We use the data measured by Fisher and Pettet [8]. The thickness curve of the lens capsule is shown in Fig. 2.


Fig. 1. Lens profile and parameters

Fig. 2. Thickness curve of the lens capsule

2.2 Material Properties of the Lens and Zonules

The lens model consists of three distinct materials: the lens capsule, the cortex and the nucleus. For the purposes of this model, each material is assumed to be linearly elastic and isotropic. Although these materials may behave non-linearly, as discussed by Krag & Andreassen [9], linearity is a reasonable approximation when the strain is less than 10%, so isotropic linear elasticity is adopted. The mechanical properties of the different materials are shown in Table 1.

Table 1. Material parameters of the lens and zonules

Material   Young's modulus (MPa)   Poisson's ratio   Element type
Capsule    1.45 *                  0.47 **           SHELL208
Cortex     0.000398 ***            0.49 **           PLANE2
Nucleus    0.0001 ***              0.49 **           PLANE2

The lens is anchored to the ciliary body by three sets of zonular fibers: anterior zonules, equatorial zonules and posterior zonules. Zonular fibers are thin, smooth and stretchable. The diameter of the anterior and posterior zonules is about 25-60 µm, and that of the equatorial zonules about 10-15 µm [12]. Few data are available on the mechanical properties of the zonules, so alternative approaches have been used to determine the mechanical parameters. Burd, Judge & Cross [4] modeled the zonules as sheets with zero circumferential stiffness, although genuine zonules have no such structural continuity. Shung [5] applied the pull force directly on the lens capsule so as to avoid modeling the zonules.
In this paper we use three sets of springs to model the zonules, assumed to attach to the ciliary body at the same point. In the original state there is no stretch in these springs; when the attachment point moves away from the lens to simulate the contraction of the ciliary muscles, the springs stretch and deform the lens. Referring to the studies of Fisher [13] and Rao and Wang [14], we set the spring stiffnesses K1, K2 and K3 to 0.6 N/mm, 0.2 N/mm and 0.6 N/mm respectively.

Table 1 (continued). Zonule springs: K1 = 0.6 N/mm, K2 = 0.2 N/mm, K3 = 0.6 N/mm; element type COMBIN14.
* Krag & Andreassen [9]   ** Fisher [10]   *** Fisher [11]

2.3 Finite Element Model

Fig. 3 shows the mesh model of the lens and zonules created with the general-purpose finite element software ANSYS. Element types are listed in Table 1. The capsule was modeled using 500 two-node axisymmetric plane shell elements; the cortex and nucleus were modeled using 1070 six-node axisymmetric plane elements; the three sets of zonules were modeled using spring-damper combination elements with the corresponding stiffnesses.

The simulation process of accommodation in this paper is as follows: the ciliary body, represented by the zonule attachment point in our model, moves away from the lens symmetry axis; this displacement causes the zonules (springs) to stretch, and the zonules then pull the lens. The lens deforms, causing a variation in its optical power.

Fig. 3. Finite element mesh model of the lens and zonules

2.4 Radius of the Surfaces and Optical Power

The optical power is calculated using the conventional thick-lens formula

$\text{optical power} = \frac{n_l - n_a}{r_a} + \frac{n_l - n_a}{r_p} - \frac{t\,(n_l - n_a)^2}{r_a\,r_p\,n_l}$   (1)

where $n_l$, the refractive index of the lens, is assumed to be 1.42 and $n_a$, the refractive index of the aqueous and vitreous, to be 1.336; $r_a$ and $r_p$ are the radii of the anterior and posterior surfaces respectively, and $t$ is the thickness of the lens. The
The parameters r_a, r_p and t are calculated from the deformed lens figure data. The geometry of the portion of the anterior and posterior surfaces within a circular aperture of radius 1.5 mm was used to determine the optical power of the lens. A sphere fit was made through this 3 mm circular zone of each surface, the zone most important for vision, to calculate the radius of the central or optical zone.
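As a cross-check, equation (1) is straightforward to evaluate directly. The sketch below is ours (the function name and unit handling are assumptions): radii and thickness are taken in millimetres, so the result is multiplied by 1000 to convert mm^-1 to diopters.

```python
def optical_power(r_a, r_p, t, n_l=1.42, n_a=1.336):
    """Thick-lens optical power of the crystalline lens, equation (1).

    r_a, r_p : radii of the anterior and posterior surfaces (mm)
    t        : lens thickness (mm)
    Returns the power in diopters (1 D = 1 m^-1, hence the factor 1000).
    """
    dn = n_l - n_a
    power_per_mm = dn / r_a + dn / r_p - t * dn ** 2 / (n_l * r_a * r_p)
    return 1000.0 * power_per_mm

# Undeformed lens (Table 2, D = 0): r_a = 7.35 mm, r_p = 6.85 mm, t = 4.840 mm
print(round(optical_power(7.35, 6.85, 4.840), 2))  # 23.21 D, matching Table 2
```

With the undeformed geometry from Table 2 this reproduces the tabulated 23.21 D, confirming that the formula and unit conventions are consistent.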


3. RESULTS AND DISCUSSION

3.1 Deformation of the lens under the pull of the zonules

Based on the current finite element model, numerical simulation was carried out to study the accommodation mechanism. To study the relationship between the deformation of the lens and the displacement of the ciliary body, the following parameters were calculated: lens thickness (t), lens radius (R), the shift of the lens equator plane (Shift), the curvature radii of the anterior and posterior surfaces (r_a and r_p), the optical power (OP) and the force applied by the ciliary body to cause the deformation (CF). Calculations were conducted by applying a displacement (D) to the ciliary body point (point C in Fig. 3) corresponding to the expected amplitude of movement. According to Strenk, Semmlow, Strenk & Munoz [15], the displacement was set in the range 0-0.25 mm. Simulation results are shown in Table 2 for displacements of 0, 0.01, 0.02, 0.05, 0.1, 0.14, 0.20 and 0.25 mm.

Table 2 Deformed lens parameters with different displacements (unit: mm)

D      t      R      Shift (x10^-3)  r_a    r_p    OP (D)  CF (N)
0      4.840  4.448  0               7.35   6.85   23.21   0
0.01   4.824  4.453  0.7             7.35   6.80   23.30   0.005
0.02   4.805  4.458  1.4             7.35   6.75   23.39   0.011
0.05   4.760  4.473  3.4             7.36   6.60   23.66   0.028
0.1    4.679  4.498  6.7             7.37   6.48   24.07   0.055
0.14   4.615  4.518  9.5             7.38   6.19   24.50   0.077
0.2    4.519  4.548  14              7.38   5.94   25.01   0.11
0.25   4.438  4.573  17              7.38   5.76   25.45   0.14

Analysis of the data suggests that, when the ciliary body moved away from the lens, the zonules stretched and pulled the lens: the anterior surface of the lens moved backward and the posterior surface moved forward.
The lens became thinner and the radius of the lens increased. The equator plane shifted slightly toward the anterior pole. Fig.4 shows the deformed lens profile for a displacement of 0.2 mm; it illustrates the typical deformation of the lens under the pull of the zonules. Fig.5 shows the near-linear variations of thickness and radius with displacement.

As the ciliary body moved, the curvature radius of the anterior surface increased slightly, while the curvature radius of the posterior surface decreased markedly. This difference is shown in Fig.6. The optical power was then calculated by (1). As shown in Fig.7, it increased as the ciliary body moved away from the lens. This result is consistent with Shung [5] and Zhang [16], which supports Schachar's hypothesis to some extent.

Fig.4 Lens deformation with pull displacement = 0.2 mm
Fig.5 Lens thickness and radius variation with displacement
Fig.6 The geometry of the portion of the anterior and posterior surfaces within a circular aperture of radius 0.6 mm for the original lens and the deformed lens (displacement = 0.2 mm)
Fig.7 Variation of optical power with displacement

To analyze the functions of the anterior and posterior zonules, the stiffness of the equatorial spring was set to zero while the other two springs were unchanged. A 0.2 mm displacement was then applied to the ciliary body. The results of the numerical calculation were as follows: deformed lens thickness t = 4.542 mm, radius R = 4.515 mm, equator plane shift = 0.015 mm, optical power OP = 25.51 D. This result suggests that the anterior and posterior zonules cooperate to keep the lens stable during accommodation. Furthermore, these two sets of zonules contribute substantially to the variation of the optical power: they brought an increment of 2.3 D to the lens optical power.
So it seems that the anterior and posterior zonules may be more important to lens accommodation than the equatorial zonules. This may explain why the anterior and posterior zonules are thicker and tighter than the


equatorial zonules.

The maximum strain of the lens capsule is 3.4%, which is less than 10%. As discussed in Section 2.2, isotropic linear elasticity is acceptable when the strain is in this range.

3.2 Deformation of the Lens under Eye Pressure

In its natural state in vivo, the lens is immersed in the aqueous humor, which exerts the eye pressure on the lens surface. The normal pressure is between 1.33 kPa and 2.79 kPa. The eye pressure can deform various tissues of the human eye considerably, including the lens, so it is important to study its influence on the lens. An investigation was conducted in which surface pressures varying from 1 kPa to 3 kPa were applied to the outer surface of the lens. The ciliary body was assumed to be fixed.

Table 3 Parameters of the deformed lens with different pressures (unit: mm)

P (kPa)  t      R      Shift (x10^-3)  r_a    r_p    OP (D)
0        4.840  4.448  0               7.40   6.90   23.20
1.0      4.687  4.480  0.022           7.80   8.20   20.60
1.3      4.642  4.490  0.029           8.00   8.60   19.90
1.6      4.596  4.500  0.035           8.20   9.20   19.10
2.0      4.535  4.513  0.044           8.40   10.00  18.00
2.5      4.459  4.530  0.055           8.80   11.50  16.70
3.0      4.382  4.545  0.066           9.10   13.30  15.30

Table 3 lists the calculation results. As the pressure increased, the lens shifted slightly toward the anterior pole. This indicates that, for the surface shape of the current model, the force produced by the pressure on the posterior surface was stronger than that on the anterior surface, so the lens moved forward. At the same time the three sets of zonules stretched more and more to hold back the lens as it was pushed forward; their pull forces increased until they balanced the push force, and the lens then settled in an equilibrium state.

When the pressure increased, the anterior surface moved backward and the posterior surface moved forward. The lens thickness decreased and the equator of the lens extended toward the ciliary body. The radii of curvature of the anterior and posterior surfaces increased. The optical power decreased linearly with increasing pressure at a rate of about 2.6 D·kPa^-1, as shown in Fig.8; that is, for every 1 kPa increase in pressure, the optical power decreased by about 2.6 D. The pressure thus has a considerable influence on the shape of the lens and may change the optical power markedly.

Fig.8 Variation of the optical power with pressure

3.3. Deformation of the Lens under the Pull and Pressure

To study the effect of the pressure when the lens is pulled by the zonules, both the displacement (as in Section 3.1) and the pressure (as in Section 3.2) were applied to the model. Table 4 lists the main parameters of the lens shape and the optical power for different displacement values applied together with a constant pressure of 2 kPa. Analysis of the data shows that the deformation of the lens under constant pressure is similar to that without pressure: as the displacement increased, the lens became thinner and its radius increased; the curvature radius of the anterior surface increased and that of the posterior surface decreased; and the optical power increased almost linearly.

Table 4 Parameters of the deformed lens with displacement, at pressure P = 2 kPa (unit: mm)

D      t      R      Shift (x10^-3)  r_a    r_p    OP (D)
0      4.535  4.513  0.044           8.40   10.1   18.00
0.01   4.503  4.523  0.044           8.40   9.90   18.20
0.02   4.454  4.538  0.044           8.40   9.60   18.50
0.05   4.374  4.563  0.045           8.40   9.00   18.90
0.1    4.310  4.584  0.045           8.40   8.70   19.30
0.14   4.213  4.613  0.045           8.50   8.20   19.90
0.2    4.133  4.639  0.046           8.50   7.80   20.40

Fig.9 Lens variation with displacement under P = 1.3, 2.0 and 2.8 kPa respectively

Calculations at different pressure values, P = 1.3 kPa and 2.8 kPa, were also conducted to compare the influence of different pressures. Fig.9 shows the comparison of P = 1.3, 2.0 and 2.8 kPa. In all three cases the lens model behaved in the same way; only the resulting values of the lens parameters differed. For the same displacement, the optical power at P = 2.8 kPa is less than that at P = 1.3 kPa by 3.9 D, so the decreasing rate is again 2.6 D·kPa^-1, as in Section 3.2. This result suggests that the influence of the pressure on the lens optical power is independent of the displacement.
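The two linear rates discussed above (optical power rising with displacement and falling with pressure) can be recovered from Tables 2 and 3 with an ordinary least-squares fit. The helper below is our own illustrative sketch, with the data transcribed from the tables:

```python
def fit_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sxy / sxx

# Table 2: optical power OP (D) against ciliary-body displacement D (mm)
d = [0, 0.01, 0.02, 0.05, 0.1, 0.14, 0.2, 0.25]
op_d = [23.21, 23.30, 23.39, 23.66, 24.07, 24.50, 25.01, 25.45]

# Table 3: optical power OP (D) against eye pressure P (kPa)
p = [0, 1.0, 1.3, 1.6, 2.0, 2.5, 3.0]
op_p = [23.20, 20.60, 19.90, 19.10, 18.00, 16.70, 15.30]

print(fit_slope(d, op_d))  # about +9 D per mm of displacement
print(fit_slope(p, op_p))  # about -2.6 D per kPa, matching the quoted rate
```

The fitted pressure slope agrees with the 2.6 D·kPa^-1 rate quoted in Sections 3.2 and 3.3, and the small residuals around both fits support the "almost linear" description in the text.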


The maximum strain of the capsule under the maximum displacement was 3.4%, which is still within the range required for the assumption of linear material properties. However, it should be noted that the maximum strain of the contents (cortex and nucleus) is up to 40%. In this regime the content material may behave in a non-linear way, so a nonlinear model may be more accurate; this, however, requires more experimental data.

4. CONCLUSIONS

In this paper, an axisymmetric, linear, finite element model of the human crystalline lens has been presented and used to simulate the accommodation process. Results show that the optical power increases as the zonules pull the lens away from its axis. This result is consistent with Schachar's hypothesis of accommodation to some extent. Further calculations suggest that the anterior and posterior zonules are not only of great importance to the positional stability of the lens during accommodation, but also contribute much to the variation of the optical power.

Another important conclusion is that the shape of the model lens is sensitive to the pressure on its outer surface. Even a normal eye pressure can bring a large reduction in optical power: the optical power decreases as the pressure increases, at a rate of 2.6 D·kPa^-1. When the zonules pull the lens, the model lens behaves in the same way with or without pressure, and the influence of the pressure is independent of the displacement.

The outcome of the current study is believed to accord with expectation. Both the modeling method and the simulation results are helpful to the study of the accommodation mechanism.
With new and better experimental data becoming available in the future, numerical modeling can be developed into a successful approach to the study of accommodation.

REFERENCES

[1] H. von Helmholtz, Physiological Optics. New York: Dover, vol.1, pp.143-172, 1962.
[2] R.A. Schachar, "Cause and treatment of presbyopia with a method for increasing the amplitude of accommodation", Annals of Ophthalmology, pp.445-452, 1992.
[3] R.A. Schachar, T. Huang, X. Huang, "Mathematic proof of Schachar's hypothesis of accommodation", Annals of Ophthalmology, pp.5-9, 1993.
[4] H.J. Burd, S.J. Judge, J.A. Cross, "Numerical modeling of the accommodating lens", Vision Research, vol.42, pp.2235-2251, 2002.
[5] V.W. Shung, "An Analysis of a Crystalline Lens Subjected to Equatorial Periodic Pulls", Ph.D. Thesis, The University of Texas at Arlington, August 2002.
[6] E.F. Fincham, "The mechanism of accommodation", British Journal of Ophthalmology, Suppl.8, pp.5-80, 1937.
[7] N. Brown, "The change in shape and internal form of the lens of the eye on accommodation", Experimental Eye Research, vol.15, pp.441-459, 1973.
[8] R.F. Fisher, B.E. Pettet, "The postnatal growth of the capsule of the human crystalline lens", Journal of Anatomy, vol.112, pp.207-214, 1972.
[9] S. Krag, T.T. Andreassen, "Mechanical properties of the human lens capsule", Progress in Retinal and Eye Research, vol.22, pp.749-767, 2003.
[10] R.F. Fisher, "The elastic constants of the human lens capsule", Journal of Physiology, no.201, pp.1-19, 1969.
[11] R.F. Fisher, "The elastic constants of the human lens capsule", Journal of Physiology, no.212, pp.147-180, 1971.
[12] Rao Hui-Ying, "Biological and mechanical properties of the human lens zonule", Foreign Medical Sciences (Ophthalmology), vol.18, no.4, pp.201-205, 2001.
[13] R.F.
Fisher, "The force of contraction of the human ciliary muscle during accommodation", Journal of Physiology, no.270, pp.51-74, 1977.
[14] Rao Hui-Ying, Wang Li-Tian, "An experimental study of the stretching capability of the zonule", Journal of Clinical Ophthalmology, vol.10, no.3, pp.223-225, 2002.
[15] S.A. Strenk, J.L. Semmlow, L.M. Strenk, P. Munoz, J. Gronlund-Jacob, J.K. DeMarco, "Age-related changes in human ciliary muscle and lens: a magnetic resonance imaging study", Investigative Ophthalmology and Visual Science, vol.40, no.6, pp.1162-1169, 1999.
[16] Zhang Li-Yun, "Research on lens accommodation and presbyopia", Foreign Medical Sciences (Ophthalmology), vol.25, no.5, pp.303-307, 2001.


An Approach to the Relevance between Variations in Hormone Secretion and the Incidence of Hyperplasia of the Mammary Glands and Mammary Cancer

Chen Chengqi and Lin Yubin

Abstract

Objective: Gonadal hormone levels in patients' blood were measured to determine the interaction between hormone secretion and the incidence of cancer through the effect of anti-oncogenes.

Methods: A total of 1084 mastopathy cases were divided into 4 groups. The first two groups were 50 cases of mammary cancer and 50 cases of hyperplasia of the mammary glands. In these two groups, blood samples were taken in the follicular phase or in menopause. In each case, the pituitary hormones (PRL, GH, TSH, ACTH, FSH, LH, IGF) and 4 steroid hormones (cortisol, E2, P, T) were measured by radioimmunoassay (RIA). The Rank Sum Test was employed for the statistics of these two groups. The third group included 433 cases of hyperplasia of the mammary glands; the normal approximation (Wilcoxon method) and a weighted approach were used in this group to calculate mean values, which were then compared with the standard ones. The last group was made up of 440 cases of hyperplasia of the mammary glands; percentage calculations were used to analyze the relationships among the hormone axes.

Results: Of the 973 cases of mastopathy, a significant portion (40%) was found to be accompanied by other diseases. The FSH level during premenopause and the ACTH level during menopause in mammary cancer patients are higher than those of patients suffering from hyperplasia of the mammary glands.
The values of the HPO axis in 433 cases of hyperplasia of the mammary glands were higher than those of the normal control group. The relationships among the HPO hormone axis, GH axis, PRL hormone axis and immune hormone axes (ACTH axis, TSH axis) in 440 cases of hyperplasia of the mammary glands were also analyzed in this research.

Conclusions: Mammary tumors are often accompanied by a series of complications. It is shown in this paper that hormone secretion in patients suffering from mammary cancer and from hyperplasia of the mammary glands, whether benign or malignant, is markedly elevated over the long term. According to the periodic dynamics of the menstrual cycle, in which female hormone levels increase sharply on days 9-12 after the period starts, decrease after ovulation, and reach a second peak on days 21-22, the authors administer medicine periodically in line with the physiological change of female hormones. TAM, given periodically and short-term (3-6 months), is prescribed when high concentrations of the hormones FSH and E2 persist in the patient's body over the long term. The patient takes TAM 20 mg per day continuously until 15 days after the end of her period and then stops. Since the half-life of TAM is about 7 days, the medicine remaining in the blood is maintained for over one week; the total amount of medicine is around 120-180 tablets. Meanwhile, Chinese traditional medicines with anti-oxidant and anti-cancer functions are given in parallel: Rukang Tablet (乳康片), Ru Kuai Xiao (乳块消), breast-nodule-relieving tablets and Canelim capsules; the integrated treatment helps to resolve the hyperplasia of the mammary glands.
Reviewing and comparing the hypothalamo-hypophyseal-gonadal hormone levels and mammography X-rays after treatment, the effective rate of the integrated treatment reaches over 90% among cases without any complication. This is shown to help prevent hyperplasia of the mammary glands from transforming into mammary cancer, and provides an approach to researching the treatment of variations in hormone secretion.

Key Words: hormone secretion, mammary cancer, hyperplasia of mammary glands

The etiological factor of hyperplasia of the mammary glands


and mammary cancer is a disturbance of the gonadal hormones. Functional disorder of the gonadal hormones is a whole-body disease, both clinical and sub-clinical; the disorders promote one another and are accompanied by many gland diseases. Epidemiology and endocrine therapeutics have demonstrated the relationship between cancer and hormone disturbance, and this study attempts to investigate the regularity between them.

Material and Method:

(I) Case and Material:

From June 1997 to June 2002, a total of 1084 mastopathy patients were examined endocrinologically and their data analyzed statistically. These patients were divided into 4 groups. The first group comprised 433 and 111 mastopathy patients; the weighted approach was used in this group to calculate the average values, which were then compared with the standard ones. The second group comprised 100 patients with mastocarcinoma or mastopathy: before treatment there were 50 mastocarcinoma and 50 mastopathy cases, comprising 28 in the follicular phase and 22 in menopause. The Rank Sum Test was employed for the statistical comparison of these two groups. The third group comprised 973 mastopathy patients; CT and ultrasound were used in this group to investigate abnormal hormone changes, and as a result 40% were found to be accompanied by various gland diseases. The last group was made up of 440 cases of hyperplasia of the mammary glands.
In this group the changing percentages of the HPO axis (FSH, E2, LH, P, T), the hypophysis axes (GH axis, PRL axis) and the immune hypophysis axes were analyzed. The mastopathy patients were all female; the oldest was 67 years old, the youngest 22 years old, and the average age was 46.3±9.6 years. The mammary cancer patients were also all female; the oldest was 67 years old, the youngest 23 years old, and the average age was 45.4±9.9 years.

(II) Method of Detection: Radioimmunoassay (RIA) was performed at the hospital's RIA center, using a chemiluminescence device with detection reagents produced by Kang Ren Company, U.S. Blood samples were taken in the follicular phase or in menopause; 7 pituitary hormones (PRL, GH, TSH, ACTH, FSH, LH, IGF) and 4 steroid hormones (cortisol, E2, P, T) were measured by RIA.

(III) Statistics Method

The Rank Sum Test, the normal approximation (Wilcoxon comparison method), the weighted approach and percentage statistics were used.

Conclusion

The First Group:

As shown in Figure 1, the average values of the 433 and 111 mastopathy patients were higher than the standard ones. It is shown in this paper that the mean values of FSH and E2 in patients suffering from mammary cancer and from hyperplasia of the mammary glands, whether benign or malignant, are markedly elevated over the long term. The mammary gland was stimulated by hormone secretion for a long time; as a result, the epithelial tissue of the ducts and the ER and PR receptors increased. All of this is consistent with the course of the disease.

The mean values of the 433 mastopathy cases and the standard ones in the follicular phase (Chart I) (X±S):

                    E2            FSH                   LH                P                     T              PRL    ACTH        GH    TSH
Normal values [1]   39-57 (48±9)  3.88-7.32 (5.6±0.86)  5.8-9.2 (7.5±1.7) 0.15-1.10 (0.75±0.6)  37-71 (59±22)  2-25   5.08-32.86  <5    0.35-5.5


Mastopathy cases:

                             E2           FSH          LH           P           T            PRL          ACTH         GH          TSH
<30 years old (142 cases)    58.96±0.45   5.31±0.52    9.82±0.49    1±0.55      56.89±0.40   21.03±0.52   17.96±0.52   7.34±0.56   1.41±0.56
31-40 years old (150 cases)  81.56±0.51   12.3±0.41    9.20±0.66    1.79±0.72   67.29±0.67   22.29±0.60   30.45±0.63   3.08±0.63   2±0.61
41-50 years old (115 cases)  82.91±0.35   14.8±0.45    9.95±0.48    0.52±0.48   52.79±0.60   13.29±0.62   18.56±0.47   1.95±0.52   1.66±0.48
>50 years old (26 cases)     63±0.53      22.8±0.38    17.3±0.26    1.5±0.48    70±0.43      13.62±0.50   15.12±0.61   1.87±0.40   1.25±0.31
Total average                74.06±0.45   11.3±0.43    9.74±0.49    1.06±0.53   56.32±0.50   17.45±0.50   20±0.52      3.86±0.53   1.56±0.51

The mean values of the 111 mastopathy cases and the standards in menopause (Chart I) (X±S):

                    E2             FSH                LH                 P                     T             PRL         ACTH         GH         TSH
Normal values [1]   0-31 (15.5±15) 5.6-16 (10.5±5.5)  5.8-17 (11.5±5.5)  0.15-1.10 (0.75±0.6)  7-71 (59±22)  2-25        5.08-32.86   <5         0.35-5.5
48-62 (111 cases)   42.01±30.28    55.06±33.65        26.83±21.60        0.90±2.50             50.87±31.70   10.61±6.89  12.11±16.71  1.78±2.79  2.28±2.30


Related terms: 大脑皮层 cerebral cortex (pallium); 下丘脑 hypothalamus; 垂体 hypophysis (pituitary); 垂体激素 pituitary hormone; 卵巢激素 ovarian hormone; 卵巢周期 ovarian cycle; 正常与异常垂体-性激素坐标图 normal and abnormal hypophysis-sex hormone coordinate figure.

The Second Group

According to the analysis of the endocrine changes in the 100 cases of mammary cancer and hyperplasia of the mammary glands, plasma FSH in mammary cancer during the follicular phase is higher than in hyperplasia of the mammary glands, and plasma ACTH in mastocarcinoma during menopause is higher than in benign hyperplasia of the mammary glands (see Figures II, III). The FSH level during the follicular phase in mammary cancer patients is possibly higher than that of patients with hyperplasia of the mammary glands because of the disturbance of ovarian function in the early period of mammary cancer: menoxenia and the imbalance of estrogen and progestogen; high-concentration E2 exerting negative feedback on the HPO hormone axis; and high-concentration progesterone exerting negative feedback on the adrenal cortex hormone axis (HPA), leaving ACTH and cortisol short, so that with the feedback disturbed, ACTH stimulates increased GH secretion. GH in high concentration stimulates gonadotropin-releasing hormone (GnRH), increasing FSH and LH. The FSH concentration in plasma clears slowly and changes little, remaining at a high level for a long time, which is related to the decrease of follicular inhibin. The increase of FSH masks the capacity to release estrogen and stimulates the secretion of estrogen indirectly. Estrogen at a high level promotes the disturbance of the HPO axis and of the HPA and GA hormone axes.
As a result, sex hormone binding globulin (SHBG) in the body decreases while the organism's utilization of T and E increases; the SHBG balance with respect to T and E shifts toward the male hormones, and androgen stimulates sensitive cells to proliferate rapidly, accelerating the proliferation and growth of cancerous cells.

For cases of mammary cancer and hyperplasia of the mammary glands in menopause, ovarian function is mostly shut down or defective: the follicles atrophy and undergo fibrous tissue metaplasia (above 70 percent), and ovarian steroid production shifts to the adrenal cortex. As a result, the zona reticularis cells excrete a large amount of androsterone, which is converted into estrogen. FSH and LH are both at high levels, so the FSH and LH concentrations in patients' plasma show no obvious difference in the statistical results. The plasma ACTH concentration increases during menopause: mammary gland cells generate ACTH, and progesterone at a high level exerts negative feedback that stimulates ACTH, making the ACTH concentration increase. The increase of ACTH constrains the adrenal cortex and disorders the HPA axis, which suppresses immunological cell-activating factors such as interleukin, interferon and tumor necrosis factor, and immune T cells and B cells. It is generally agreed that the increase of glucocorticoid, androgen, progestogen and adrenocorticotropin causes the restraint of immunological function and activates estrogen, which induces the correlated oncogenes of mammary cancer (T1) and hyperplasia of the mammary glands (T2).


Result of using the Wilcoxon comparison method and formula to calculate U and P
(T1: mammary cancer; T2: hyperplasia of the mammary glands)

Follicular phase, N1 group (N1 = 28):

         E2     FSH    LH     P      T      PRL    ACTH   GH     TSH
T1       814    940    860    802    737    778    867    742    702
T2       726    600    680    738    803    763    673    798    838
U        0.50   2.62   1.27   0.29   0.78   0.09   1.39   0.70   1.37
P value  >0.05  <0.01  >0.05  >0.05  >0.05  >0.05  >0.05  >0.05  >0.05
Differ   No     Yes    No     No     No     No     No     No     No

Menopause, N2 group (N2 = 22):

         E2     FSH    LH     P      T      PRL    ACTH   GH     TSH
T1       477    509    504    516    538    517    613    489    512
T2       558    526    531    519    479    518    404    546    523
U        0.65   0.06   0.03   0.22   0.72   0.24   2.83   0.37   0.12
P value  >0.05  >0.05  >0.05  >0.05  >0.05  >0.05  <0.01  >0.05  >0.05
Differ   No     No     No     No     No     No     Yes    No     No

(U ≥ 1.96: P ≤ 0.05; U ≥ 2.58: P ≤ 0.01)

The Third Group:

In total, 973 cases of hyperplasia of the mammary glands in the follicular phase were examined; about 40 percent were accompanied by other gland diseases. This shows that the episode mechanism of mammary cancer and tumors is related to endocrine function disorder.
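The normal approximation to the rank-sum test used in the tables above can be sketched in a few lines: rank the pooled observations, sum the ranks of one group (T), and compare T with its null mean and variance to obtain the U (z) statistic. This is an illustrative implementation on synthetic samples; the function name and data are ours, not the study's.

```python
from math import sqrt

def rank_sum_z(sample1, sample2):
    """Rank-sum (Wilcoxon/Mann-Whitney) statistic, normal approximation.
    Tied values receive the average of the ranks they span."""
    n1, n2 = len(sample1), len(sample2)
    pooled = sorted((v, i) for i, v in enumerate(sample1 + sample2))
    ranks = [0.0] * (n1 + n2)
    k = 0
    while k < len(pooled):
        j = k
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[k][0]:
            j += 1                      # extend over a group of tied values
        avg_rank = (k + j) / 2 + 1      # 1-based average rank of the tie group
        for m in range(k, j + 1):
            ranks[pooled[m][1]] = avg_rank
        k = j + 1
    t1 = sum(ranks[:n1])                # rank sum T of the first group
    mean = n1 * (n1 + n2 + 1) / 2       # null mean of T
    var = n1 * n2 * (n1 + n2 + 1) / 12  # null variance of T
    return (t1 - mean) / sqrt(var)

# Two synthetic samples with no overlap: the difference is highly significant.
z = rank_sum_z(list(range(1, 11)), list(range(11, 21)))
print(round(abs(z), 2))  # 3.78, well above the 2.58 threshold (P <= 0.01)
```

With two clearly separated samples the statistic exceeds the 2.58 threshold used in the tables (P ≤ 0.01), the same criterion applied to the FSH and ACTH comparisons above.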
Endocrine hormone functional disorder is a whole-body and sub-clinical disease; hormone disorders promote each other, giving rise to many kinds of related gland diseases (see Figure IV):

E2      FSH    LH     P      T       PRL    ACTH    GH    TSH   Diagnosis                                          Number          Percentage
41.67   7.03   10.8   0.6    73.6    71.82  9.35    7.95  1.34  accompanying hypophysis adenoma                    26 cases/398    6.5%
76.74   87.28  60.75  0.56   77.09   21.23  32.25   5.56  0.68  accompanying partial empty sella turcica           14 cases/398    3.5%
75.58   6.58   7.89   0.45   96.58   15.16  161.15  3.24  2.27  accompanying suprarenal gland adenoma              10 cases/398    2.5%
86.25   5.87   6.97   0.86   83.54   23.45  32.14   2.19  0.01  accompanying hyperthyreosis                        18 cases/398    4.5%
7.46    -      -      -      -       -      -       -     -     accompanying hypothyroidism                        -/398           -
93.65   13.26  10.87  0.46   130.45  12.32  27.57   1.28  1.35  accompanying hepatitis and cholecyst gland polyp   22 cases/398    5.5%
257.28  16.58  23.10  0.86   103.47  18.30  21.45   2.30  1.61  accompanying man-made menopause and abortion       121 cases/398   30.5%
132.43  10.9   46.68  0.22   58.37   17.96  24.25   0.01  1.53  accompanying uterine tumor                         129 cases/398   32.5%
46.96   6.62   10.25  10.04  75.49   12.81  45.51   0.46  1.84  accompanying ovary cyst                            58 cases/398    14.5%

The Fourth Group:

The hypophysis-gonadal hormone axis interactions in 440 cases of hyperplasia of the mammary glands (see Figure V) demonstrate the common rule of these interactions: functional disturbance of the sex hormone (HPO) axis develops into discord of the hypophysis (GH axis, PRL axis), with the result that the immune hormone axes (ACTH axis, TSH axis) are restrained. The authors observed that a change in HPO axis hormone levels is an early manifestation of hyperplasia of the mammary glands.

(440 cases) hypophysis-sex hormone axis intersection statistical table (Figure V)

Hormone       HPO+GH axes  HPO+PRL axes  HPO+ACTH axes  HPO+TSH axes  Simple HPO change (incl. FSH/LH/T/E/P)
Total amount  55 cases     65 cases      52 cases       37 cases      231 cases
Percentage    12.5%        14.8%         11.7%          8.5%          52.5%

(440 cases) hypophysis hormone axes intersection statistical table

Hormone           Total amount        Percentage
GH+HPO            13 cases/55 cases   23.6%
GH+FSH            18 cases/55 cases   32.7%
GH+LH             24 cases/55 cases   43.7%
PRL+T+E+P         13 cases/65 cases   20%
PRL+FSH+LH+T+E+P  40 cases/65 cases   61.5%
PRL+GH+HPO        12 cases/65 cases   18.5%
ACTH+HPO          44 cases/52 cases   84.6%
ACTH+GH+HPO       4 cases/52 cases    7.7%
ACTH+PRL+HPO      4 cases/52 cases    7.7%
TSH+GH+HPO        7 cases/37 cases    18.9%
TSH+PRL+HPO       10 cases/37 cases   27%
TSH+ACTH+HPO      3 cases/37 cases    8.1%
TSH+HPO           17 cases/37 cases   46%

Discussion

Through the study of the changes of gonadal hormones in patients' blood and statistical analysis, we reach the following conclusions for the four groups:

Group 1: Among the cases of hyperplasia of the mammary glands, 433 samples were taken in the follicular phase and 111 in menopause. We used the estrogen-related hormones (FSH, E2) of the different age groups as the axes and calculated the average of each hormone. The statistical results show that the total hormone level in the 433 samples of hyperplasia of the mammary glands is higher than normal. FSH and E2 in the 31-40 age group (150 samples) (E2: 82±0.51 vs 48±9; FSH: 12.33±0.41 vs 5.6±0.86, respectively) and FSH, E2 and LH in the 41-50 age group (115 samples) are the highest (E2: 83±0.35 vs 48±9; FSH: 14.83±0.45 vs 5.6±0.86; LH: 9.95 vs 7.5±1.7, respectively). Next come the over-50 group (26 samples) (E2: 63±0.53 vs 48±9; FSH: 22.87 vs 5.6±0.86; LH: 17.37±0.48 vs 7.5±1.7, respectively) and the under-30 group (142 samples), where the elevation of E2 and GH indicates that GH stimulates the release of FSH and makes the E2 concentration in the blood higher than the normal level (E2: 42.01±30.28 vs 15±15.5; FSH: 55.06±33.65 vs 11.5±8.5; LH: 55.63±33.65 vs 12.5±6.5) (see the reference frame and Figure 1).
The FSH and E2 concentrations are also higher than normal in the 111 samples from patients in menopause.

Indication: In patients with mammary cancer or hyperplasia of the mammary glands, whether the lesion is benign or malignant and whether in the follicular period or menopause, the HPO hormone axis shows FSH and E2 higher than normal over a long period. This indicates that FSH and E2, kept at a high level in the patient's blood, stimulate the development of the mammary vessels and receptors (ER, PR); this hormonal change is consistent with the pathological change.

Group 2: In this group we analyzed the endocrine changes in 100 cases of hyperplasia of the mammary glands and mammary cancer, each subgroup containing 50 samples: mammary cancer (T1 patients, 50 samples) and hyperplasia of the mammary glands (T2 patients, 50 samples), of which 28 samples were in the follicular period (group N1, N1=28) and 22 in menopause (group N2, N2=22). Statistical analysis (Figures II, III) shows that the FSH concentration in blood during the follicular period differs markedly between T1 and T2 patients (P<0.05); the steroid hormone level of T1 patients in the menopause period is higher than that of T2 patients, while the other hormones (PRL, GH, TSH, ACTH, LH) and the steroid hormones otherwise show no significant difference (P>0.05).
In the follicular period, the ACTH level in the blood of mammary cancer patients is higher than in benign hyperplasia of the mammary glands. Indication: the average FSH and ACTH levels in mammary cancer during the follicular period are higher than in hyperplasia of the mammary glands.

Group 3: In this group we examined 973 cases of hyperplasia of the mammary glands in the follicular period, using CT as a further examination. Pituitary cancer patients in whom PRL is raised threefold account for 6.5%; partial empty sella turcica patients in whom PRL/FSH increase threefold account for 3.5%; patients in whom TSH decreases or increases account for 4.5%; hepatitis and cholecystic gland polyp patients in whom FSH/T/E increase threefold account for 5.5%; suprarenal gland adenoma patients in whom ACTH increases threefold account for 2.5%; artificial menopause and abortion patients in whom E/FSH increase threefold account for 30.5%; among patients in whom E/FSH increase threefold, hysteromyoma accounts for 32.5%; and ovarian cyst patients in whom P/FSH increases threefold account for 14.5%. In the total of 973 cases, the hormone findings of 40% of the patients correspond to one of several kinds of gland disease. This shows that hyperplasia of the mammary glands is related to the maladjustment of endocrine hormones. The maladjustment of endocrine hormones is a systemic sub-clinical disease whose components reinforce each other, and it usually gives rise to several kinds of gland disease. Diagnosis and treatment as early as possible will prevent these gland diseases (Figure IV).

Group 4: 440 cases of hyperplasia of the mammary glands were used to examine the pituitary and sex gland hormones (Figure V): simple HPO hormone axis change accounts for 52.5%; HPO axis plus PRL hormone change for 14.8%; HPO axis plus GH hormone change for 12.5%; HPO axis plus ACTH hormone change for 11.7%; and HPO axis plus TSH hormone change for 8.5%. Statistical analysis shows that the axes influence each other. We try to find the common rule of the hypothalamus-hypophysis-gonad axis interaction.
This indicates a progression that begins with maladjustment of the functional sex gland hormones (HPO axis), goes on to secretory turbulence of the pituitary hormones (GH axis and PRL axis), and then leads to inhibition of the immunological hormones (ACTH axis, TSH axis).

The Fifth Group: explanations based on scientific proof. By observation, the author noticed that patients suffering from hyperplasia of the mammary glands first lose the balance between the estrogen (FSH, E2) axis and the progestogen (LH, P) axis, yielding a negative feedback; the complementary physiological metabolic turbulence of the HPO axis marks the phase of maladjustment of functional estrogen. The author also noticed that the change of the HPO axis accords with the physiological change of patients suffering from hyperplasia of the mammary glands.

Later, under the influence of long-term sub-clinical disease, progestin consumption increases, inhibin decreases, and progestogen complementarily induces the ACTH axis, followed by the long feedback of hypophyseal ACTH and the cortical hormone. The increase of ACTH can stimulate the secretion of GH; in turn GH can enhance the release of FSH and LH and also affect the balance of E2 and P consumption. The rise of the GH and PRL axes can cause turbulence of the pituitary hormones and restraint of the immune axes (ACTH axis, TSH axis). During the phase when immunity is restrained, immune surveillance weakens: the killing ability is reduced, T lymphocytes decrease, and the immune ability weakens again and again.

In the body there is a chain response: dopamine → rhGH → GH → FSH → E2 …. The rise of noradrenalin and of GH → s.s → IGF leads to the reaction between E2 and IGF.
Diffusing into cells, estrogen first binds its specific receptor in the target cytoplasm, yielding a hormone-receptor complex. The complex, with its new molecular structure, can enter the nucleus and, together with co-activation factors, activate and induce the estrogen-related genes (e.g. PS2, P21 and C-myc), causing the content of bcl-2 protein to increase. Through its part in the change of telomerase activity, this can promote and eventually cause mutation of the anti-oncogene or inactivation of the P53 protein. The oncogene can yield abnormal proteins (for example, the HPV genome can yield the virus particle proteins E5, E6 or E7). In the cell these abnormal proteins bind MHC-I (the major histocompatibility complex), forming complexes that are further expressed on the cell surface. The product is transported back into the cell and can cause site mutation, rearrangement, amplification and transposition of genes. In this way the anti-oncogene changes into an oncogene, the receptors in the nucleus dimerize from monomers, the nuclear cytoplast activates genes, the genes transcribe new mRNA, new protein is yielded, mitosis is promoted and cell senescence is restrained.

The forming mechanism of new blood vessels can be divided into three phases. The first is the non-vessel or pre-vessel phase, lasting 3-6 days, during which the tumor is absolutely at rest and obtains nutrients by diffusion. The second is the vessel-forming phase, during which new vessels form; the new vessels begin to provide nutrients for the tumor and carry away its metabolic waste, and then either disappear completely or begin to grow 66-77 days later. The third is the growing period, during which tumors begin to flourish. As normal galactophore develops into simple epithelial hyperplasia, then low-grade non-specific hyperplasia, then severe non-specific hyperplasia and finally tumor formation, more and more blood vessels form. The results of this group indicate the following rule: the rise of the FSH, E2 and IGF hormones causes the increase of ER and PR. They also disclose how a precursor tumor of the galactophore turns into cancer: when the tumor grows to a certain phase, it attacks the surrounding tissues and begins to metastasize.

The genetic background and environmental factors (mental oppression, for example) cause the maladjustment of estrogen and the hypophysis (see Figure I and the coordinates figure). The sthenic condition of hormone activity is likely to induce certain tissues, e.g. the thyroid gland, the anterior pituitary, the adrenal cortex, the ovary, the womb and the tissue of the galactophore, to be stimulated, yielding tissue hyperplasia and then benign tumor (see Figure IV). Long-term hyperpituitarism and the sthenic condition of the sex hormones (maladjustment of the sex hormones will develop into disorder of ACTH) can result in restraint of the immune ability. Under the influence of the sub-clinical phase, together with the abnormal proteins (E5, E6, E7) produced by the virus particles released by high-risk viruses, the proto-oncogene changes into a cancer gene and further invades and develops into a malignant tumor.

Both over-secretion and under-secretion of a hormone, and too much or too little inactivation by the liver, can cause a pathological response.
Understanding the site of inactivation and the timing characteristics of the target cell, target gland, liver or lung will help to guide clinical medication.

Figure of the interaction between hormone secretions (see Figure VI), made and explained by Dr. Chengqi Chen. (Legend: one symbol represents "promote", the other "restrain".)

The Sixth Group: therapeutic correlation research. The author clinically treated mastopathy cases accompanied by fibrocystic hyperplasia of the uninjured-side breast. By monitoring the dynamic changes of the hypophysis-sex hormones, it was observed that FSH and E2 were increased; after use of the anti-estrogen drug Nolvadex, the fibrocystic hyperplastic tissue of the uninjured side changed into normal tissue and the mammary discharge disorder was ameliorated. This proved that Nolvadex, as a prophylactic drug, prevents hyperplasia of the mammary glands from changing into mammary cancer and makes the mammary gland lump disappear; selective intervention with estrogenic drug treatment to adjust the body's endogenous gonadal hormone level can achieve treatment of the mammary gland. According to the dynamic changes of the menstrual cycle, estrogen increases rapidly between days 9 and 12 after menstruation begins and descends after ovulation, with a second increase between days 21 and 22 (see coordinate Figure 1). Medication timed to these periods is consistent with the physiological change of the estrogen level in vivo. Aiming at the long-term high concentrations of FSH and E2, we adopt periodic short-term (three to six months) TAM treatment: the medicine is taken after menstruation ends, 20 mg per day, withdrawn after 15 days; the half-life of TAM is 7 days, so the drug is maintained in the blood for more than one week [9]; a total amount of 120-180 tablets constitutes one course.

For mastopathy cases and mammary cancer patients in menopause, ovarian function is shut down or defective: the follicles atrophy, fibrous tissue predominates (above 70 percent), and ovarian steroid generation shifts to the adrenal cortex. As a result, adrenal cortex cells excrete a large amount of androsterone, which turns into estrogens and stimulates gonadotropin (GnRH); the secretion of FSH and LH increases, the concentration of FSH in blood clears slowly and changes little, remaining at a high level for a long time (perhaps related to the reduction of follicular inhibin), so the body's menotropin (FSH) and lutropin (LH) remain at high levels.
In conclusion, the concentrations of FSH and LH in the blood of mastopathy cases and mammary cancer patients show no obvious difference. The increase of FSH masks the capacity for estrogen release and indirectly stimulates the excretion of estrin. High levels of estrogen promote disturbance of the HPA and GH hormone axes, decrease sex hormone binding globulin (SHBG), and at the same time increase the organism's utilization of T and E; as SHBG binding of T and E shifts toward the male hormone, androgen stimulates sensitive cells to proliferate. For mastopathy cases and mammary cancer patients in menopause, the author suggests selective blockade of estrin synthesis with the competitive aromatase inhibitor Arimidex (AIs), which prevents androgen from turning into estrogen, or Femara 2.5 mg po qd, withdrawn after 15 days, a total of 45-90 tablets being one course. At the same time it is suggested to combine anti-oxidation and anti-tumor traditional Chinese medicine therapy. The mammary gland lump disappears; comparison of the hypothalamic-pituitary hormones and mammography X-ray review shows a total efficiency above 90 percent, without complications, preventing hyperplasia of the mammary glands from turning into mammary cancer and providing a demonstration of dynamic endocrine treatment.

Synopsis of every marker in 53 cases in the follicular period before and after treatment (X±S) (see Figure VII):

Marker | Pretreatment | Post-treatment
E2 | 94.83±71.00 | 82.11±76.31
FSH | 8.72±5.72 | 8.11±4.15
LH | 6.99±7.46 | 4.82±4.69
P | 2.08±7.44 | 1.32±3.73
T | 57.72±28.77 | 58.02±35.93
PRL | 18.43±13.63 | 18.61±10.11
ACTH | 10.62±16.65 | 28.51±83.78
GH | 2.91±4.36 | 4.30±6.95
TSH | 2.11±1.37 | 2.76±4.44
CO | 384.02±148.99 | 390.39±146.27
IGF-II | 0.60±0.26 | 0.53±0.48


Synopsis of every marker in 15 cases in the period of menopause (X±S) (see Figure VIII):

Marker | Pretreatment | Post-treatment
E2 | 102.4±122.6 | 64.26±50.73
FSH | 48.16±44.72 | 27.86±24.3
LH | 21.19±12.7 | 14.46±9.5
P | 3.38±10.0 | 1.16±1.19
T | 51.53±27.19 | 55.87±27.5
PRL | 10.56±3.73 | 11.30±5.7
ACTH | 4.40±4.59 | 6.39±11.33
GH | 1.99±2.07 | 3.76±4.30
TSH | 2.42±1.66 | 5.43±8.15
CO | 354.91±187.22 | 351.11±134.04
IGF-II | 0.46±0.23 | 0.39±0.20

Summary:

In this paper, the variation of the hypothalamus-pituitary-sex hormones in the blood of patients with mammary cancer and hyperplasia of the mammary glands was detected and observed. Through statistics on a total of 1084 mastopathy cases, we demonstrated the interaction between the blood hormone axes, which usually induces a series of mastopathy cases, and their relevant causes. The concentrations of the FSH and E2 axes remain abnormally high over a long term, which can stimulate hyperplastic variation of the mammary gland duct epithelium and the increase of receptors (ER, PR). We analyze the pathogenesis by which endocrine hormone maladjustment expands into disorder of the pituitary hormones (GH axis, PRL axis), causing onco-viruses to release virus particles in the organism that produce abnormal proteins (E5, E6, E7 [6]); these form compounds with MHC-I in the cell, are expressed on the cell surface and are afterwards ejected outside the cell. We seek the common rule of hormone axis variation and provide the theory on which the diagnosis and dynamic treatment of these diseases can be based.

Conclusion:

This subject studies and verifies the cause of this common disease of women, elaborates the interaction between cancer and complicated hormone secretion, and confirms it by interpreting the increases of the FSH-E2 and s.s-IGF hormones, determining the hypothalamus-pituitary-sex hormones on the basis of the most advanced foundational scientific theories and analyzing the data from the tests. Treating the disease by applying both selective anti-female-hormone medicine and traditional Chinese medicine will prevent the mammary glands from hyperplasia and mammary cancer from forming, make the mammary gland lumps die away more quickly, avoid unnecessary operations, improve the quality of life of women, benefit society, and ameliorate and normalize the diagnosis and treatment of the disease. It is an innovative, scientific, foundational theory and will work practically when applied to the diagnosis and treatment of disease.

References:

[1] Boyuan, The application of RIA in iatrology, Beijing: A-energy Publishing Company, 1991, 130-134, 159-161, 293, 296.
[2] Chengqi Chen, The probe into variations in hormone secretion caused by mammary gland cancer and augmentation of the mammary glands, Chinese iatrology league-
[3] Yifan, Incretion and metabolizability of Xiehe, Beijing: Science Publishing Company, 1999, 23, 143, 220, 260, 1757.
[4] Chengqi Chen, The explaining of incretionary hormone interaction with chart, Chinese Herbalist Doctor and Western Medicine Magazine, 2002, Vol. 3, No. 7, 604-606.
[5] Haitao Zhu, Detong Yang, The evolvement of the research about the relation between GH-IGF and tumor, China Tumor, 2002, Vol. 11, No. 12, 720-722.
[6] Wang Mei, Jiangbin, Chunyan Xue, Expression of PS2 in mammary gland cancer organ and its clinic significance, Chinese Cancer Journal, 2002, Vol. 12, No. 10, 301-302.


[7] Lifang, Santai Song, The relation between female hormone inducement protein PS2 and mammary gland cancer clinic, Chinese Tumor Clinic, 2002, Vol. 29, No. 12, 899-902.
[8] Yangqing, Wangli, Sunbo, The evolvement of the research about the engendering of tumor induced by human papilloma virus, Chinese Medical Journal-Oncology, 2001, Vol. 15, No. 4, 358-359.
[9] Chunming Li, The effect of treatment on 100 cases which suffer from augmentation of the mammary gland by providing 3-phenylamine oxide periodically, Chinese Coal-Industry Medical Journal, 2001, Vol. 4, No. 3, 211.


A Method for Extracting the Movement Track of Cardiac Structure Gray Points in Omnidirectional M-mode Echocardiographic Images

Wu Wenji, Lin Qiang
Biomedical Engineering Institute, Fuzhou University, Fuzhou, Fujian, China, 350002

Abstract- An omnidirectional M-mode echocardiographic image is composed of a set of sequential gray points fixed on the sampling line, and represents the movement track of cardiac structure caused by hemodynamics together with cardiac muscle bounce [1]. Based on these principles, we present a nearly automated, adaptive and hierarchical method for extracting the movement track. Firstly, we employ a series of strict conditions, including enclosure with elastic lines, binary segmentation with a threshold array, 4-neighborhood connection with candidate region selection, and a linear template, to extract initial track points where the edge character is obvious (defined as the bottom layer). Subsequently, on the basis of the track points obtained in the bottom layer, time correlation, position correlation and energy (gray level) correlation are introduced into the process layer by layer to guide the further extraction of track points where the edge character is less obvious or even fuzzy.

Index terms- Track Extraction, Thresholding Segmentation, Linear Template, Omnidirectional M-mode Echocardiographic Image

Ⅰ. INTRODUCTION

Echocardiography is currently the predominant technique for assessing cardiac function. Sequential echocardiographic images reflect the 2-D cross-section figure and the movement of cardiac structure.
Furthermore, a commonly used method for extracting dynamic information is based on M-mode echocardiographic images, built by sampling a set of sequential gray points, fixed on the sampling line, from sequential 2-D cross-section echocardiographic images.

Our research is based on omnidirectional M-mode echocardiographic images, which are more scientific, accurate and credible than general M-mode echocardiographic images: the position and direction of the sampling line in general M-mode echocardiography are limited by the confined direction of the ultrasound wave beams, while in omnidirectional M-mode echocardiography the position and direction of the sampling line are free. Our omnidirectional M-mode echocardiographic images come from the LEJ-1 omnidirectional M-mode echocardiography system developed by our institute. Each column of pixels stands for one frame of gray points fixed on the sampling line, so an omnidirectional M-mode echocardiographic image that we want to process covers 5 seconds, equal to 250 columns of sequential gray points of the fixed sampling line after one interpolation. Consequently, the image contains abundant information with time correlation, position correlation and gray-level correlation.

A basic goal in the computer processing of 2-D echocardiographic images is the identification of anatomical borders [2]. Instead, the basic goal in the computer processing of M-mode echocardiographic images is the extraction of a set of gray points standing for the stretched movement track of a certain boundary of cardiac structure caused by hemodynamics together with cardiac muscle bounce. Our track extraction is therefore equivalent to border detection.
The extraction is necessary for the derivation of various quantitative cardiac parameters: the first-order differential of the extracted movement track stands for velocity, and its second-order differential for acceleration [1]. However, edge detection presents some challenging difficulties. It is greatly complicated by the inherent characteristics of these images, including low contrast, speckle noise and edge dropouts caused by signal dropouts.

Figure 1. Omnidirectional M-mode echocardiographic image with its movement track

Various border identification methods for 2-D echocardiographic images have been implemented, but extraction techniques applicable to M-mode echocardiographic images are rarely mentioned. In clinical application, cardiologists mainly measure at a finite number of specific positions of M-mode echocardiographic images and calculate quantitative diagnostic parameters for diagnosis. Consequently, scientific extraction of the movement track becomes fundamental and significant. The purpose of this paper is to present a nearly automated, adaptive and hierarchical method for extracting the movement track layer by layer according to its inherent characteristics and its correlations of time, position and gray level.


Ⅱ. METHODOLOGY

1. Preprocessing

An omnidirectional M-mode echocardiographic image probably consists of complicated gray information reflecting several movement tracks simultaneously, and the gray information reflecting other movement tracks badly impedes the extraction of the target movement track. Additionally, owing to the inherent characteristics of echocardiography, including low contrast, speckle noise and signal dropouts, we employ elastic lines [3] to manually enclose a rough target area from which much of the interferential gray information is excluded, as shown in Figure 2. This simple but practical method saves much trouble when the interferential gray information is difficult to exclude by automatic computer processing; for this reason the extraction of the movement track is described as nearly automated. However, the enclosure with elastic lines can be omitted when the target area is legible in the image.

Figure 2. Enclosure with elastic lines

Considering the shadows and inhomogeneous luminosity in the image, we choose the partial threshold iterative algorithm [4] for gray statistics, and then apply binary segmentation to the image with a threshold array.
The detailed process for a partial threshold is:
[1] Evaluate the maximal and minimal gray levels Zmax, Zmin in the partial target area, then define the initial threshold T0 = (Zmin + Zmax)/2;
[2] Segment the current partial target area into two parts with Tk, calculate the mean gray values ZO, ZB of the two parts, then set Tk+1 = (ZO + ZB)/2;
[3] If Tk+1 = Tk, take Tk+1 as the threshold of the current partial image; otherwise, return to [2] for a further iteration.

After binary segmentation with the threshold array, region growing with the 4-neighborhood algorithm is employed to connect points, mark regions and compute statistics on the size (counted in pixels) of each region and the total size of all regions. Considering the periodicity of M-mode echocardiographic images and the potential track dropouts evoked by signal dropouts, we choose 1/5 as the size threshold for regions. Each region whose size is greater than 1/5 of the total size is picked out, and a picked region that does not overlap lengthways with other picked regions is defined as a candidate region. In most omnidirectional M-mode echocardiographic images, the desired movement track is restricted to these candidate regions.

2. Linear template and its probe range

Generally speaking, the edges of an image exist in diverse directions. Classic edge detection algorithms are partly carried out with both horizontal and vertical templates, utilizing the law of the first- or second-order differential coefficient of gray points near the edge, and partly carried out in a dynamic mode with manual intervention to extract the edge when the gray-level distribution near the edge varies in a complicated way [4]. Despite the respective advantages of these classic algorithms, they are inadequate for the extraction of the movement track in omnidirectional M-mode echocardiographic images.
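The partial threshold iteration of section 1 (steps [1]-[3] above) can be sketched as follows. This is an illustrative implementation rather than the LEJ-1 system code, and the floating-point convergence tolerance is our assumption:

```python
import numpy as np

def partial_iterative_threshold(region):
    """Iterative threshold selection for one partial target area.

    Follows steps [1]-[3] of the text: start from
    T0 = (Zmin + Zmax)/2, split the area at Tk, average the two
    class means to get Tk+1, and stop when the threshold stops
    changing (checked with a small tolerance, an assumption made
    here for floating-point arithmetic).
    """
    region = np.asarray(region, dtype=float)
    t = (region.min() + region.max()) / 2.0           # step [1]
    while True:
        low = region[region <= t]                     # step [2]: split at Tk
        high = region[region > t]
        if low.size == 0 or high.size == 0:
            return t                                  # degenerate split
        t_next = (low.mean() + high.mean()) / 2.0     # Tk+1 = (ZO + ZB)/2
        if abs(t_next - t) < 1e-6:                    # step [3]: converged
            return t_next
        t = t_next
```

On a bimodal patch with gray levels clustered around 10 and 200, the iteration settles at the midpoint of the two class means, which is the behavior the threshold array relies on.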
Apart from the limitations of each algorithm itself, what is more important is an inherent characteristic of omnidirectional M-mode echocardiographic images: the movement track consists of the sequential intersecting points of the cardiac ventricle wall and the sampling line. Consequently, the desired movement track can be described as a one-dimensional function that varies in the vertical direction. Furthermore, owing to the finite amplitude of diastole and systole, we restrict the probe range non-linearly. As shown in Table 1, D stands for the interval (counted in pixels) between the current column and its nearest identified track point, S stands for the effective probe range of the current column, and SA is the Y-coordinate value of the nearest identified track point.

Table 1. Relationship of interval and probe range
Interval | Probe range
D = 1 | SA - 3


A. The gray levels of U0, U1, U2 are all lower than the threshold gT calculated by the partial threshold iterative algorithm, and the gray levels of D0, D1, D2 are all higher than gT.
B. The gradient at the current position is maximal within the effective probe range.
C. The difference between the mean of D0, D1, D2 and that of the nearest identified track point is less than half of the current gradient.
D. No candidate point belonging to the same connected region lies above the current probe position in the 4-neighborhood connected binary image.

3. Processing Flow

The extraction is serial and hierarchical from layer to layer. The identification of higher-layer track points is based on the identified track points of the lower layer; consequently, the exactness of the lower-layer track points severely affects the final extraction result. For this reason, sufficiently strict conditions are employed to guarantee the exactness of the extracted movement track.

3.1 Bottom layer

Owing to the particular position characteristic of wave crests, speckle noise and signal dropouts affect the edge character of wave crests much more slightly than other positions in most omnidirectional M-mode echocardiographic images; obviously, track dropouts rarely occur near wave crests. Consequently, we capture the bottom-layer track points (also defined as seed points) near wave crests. Firstly, we detect wave crests from left to right in the candidate regions of the 4-neighborhood connected binary image.
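Template condition A can be sketched as below. The exact template geometry (whether U0 sits immediately above the probe position) is our assumption from the notation, and `column` stands for a single image column of gray levels:

```python
import numpy as np

def condition_a(column, y, g_t):
    """Linear template condition A (a sketch, with assumed geometry).

    The three pixels above the probe position (U0, U1, U2) must all
    lie below the partial threshold g_T, and the three pixels from
    the position downward (D0, D1, D2) must all lie above g_T, i.e.
    the probe position sits on a dark-to-bright transition along
    the column.
    """
    u = column[y - 3:y]        # U2, U1, U0: pixels just above position y
    d = column[y:y + 3]        # D0, D1, D2: position y and the two below
    return bool(np.all(u < g_t) and np.all(d > g_t))
```

A column that is dark above row 5 and bright from row 5 down satisfies the condition at y = 5 and fails at y = 3, which matches the edge-seeking intent of the template.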
As shown in Figure 4, when a wave crest is expressed as a straight line [5], it has two characteristics: (1) the length of the straight line is minimal, and (2) no candidate point belongs to both the separated region and the candidate regions.

Figure 4. Sketch map of wave crest detection with a straight line

When the first wave crest is detected, considering cardiac periodicity we search for the second wave crest in the candidate region 30 to 90 pixels from the previous crest, and so on. The result is shown in Figure 6. We then probe for seed points with the linear template in the 25-neighborhood of these wave crests in the primitive image. If a position meets template conditions A, B and D, it is identified as a seed point, as shown in Figure 7.

Figure 5. Primitive image
Figure 6. Wave crest detection result with a straight line in the candidate region of the 4-neighborhood connected binary image

3.2 Constructive layer
The extraction of the bottom-layer track points is crucial because these seed points guide the further extraction. However, such sparse points cannot describe the desired movement track. Therefore, starting from the bottom-layer track points, linear-template extraction proceeds within the effective probe range: when a position meets all template conditions, it is identified as a track point. Time correlation and position correlation act as restrictions on the probe range, while gray-level correlation acts as template conditions C and D. The detailed process:
[1] Using the identified track points and Table 1, we extract track points from left to right. If a position in the current column meets all template conditions, we identify it as a track point and move to the next column to the right; otherwise we simply move on to the next column. The result is shown in Figure 8.
[2] We reverse direction and extract each track point on the basis of its nearest right track point, with the linear template functioning as above. The result is shown in Figure 9.
[3] Generally, some columns remain whose track point cannot be identified after the above two steps. If a candidate point belonging to the target regions exists in the current column and lies between the Y-coordinates of the nearest left and nearest right identified track points, those two Y-coordinates are introduced as a new probe range for the linear template. In a relatively legible image with only one candidate region, such as Figure 5, the extraction is almost finished in this layer. The result is shown in Figure 10.
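The periodicity-constrained crest search can be sketched as a left-to-right scan over candidate crest positions, keeping only crests whose spacing matches the expected cardiac period. Illustrative only, assuming a hypothetical `crest_xs` list of x-positions of straight-line crest candidates already detected in the candidate region:

```python
def select_periodic_crests(crest_xs, min_gap=30, max_gap=90):
    """Keep the first detected crest, then accept each subsequent crest
    only if it lies 30-90 pixels to the right of the previously accepted
    one, reflecting the cardiac periodicity assumed in the text."""
    crest_xs = sorted(crest_xs)
    if not crest_xs:
        return []
    accepted = [crest_xs[0]]
    for x in crest_xs[1:]:
        gap = x - accepted[-1]
        if min_gap <= gap <= max_gap:
            accepted.append(x)
        # crests closer than min_gap are treated as noise; a gap larger
        # than max_gap suggests a missed crest, and the scan continues
    return accepted
```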


Figure 7. Bottom layer track points
Figure 8. The result of right forward extraction
Figure 9. The result after left forward extraction
Figure 10. The extraction of constructive layer

3.3 Fitting layer
The bottom layer and the constructive layer cannot guarantee an uninterrupted movement track, especially in images with severe noise or track dropouts; however, the unidentified track points are sparse after the extraction of these two layers. Consequently, we employ Curve Fitting of Minimal Mean Square Error [5] (CFMMSE) combined with position correlation to integrate a whole track. If the track point of the current column is unidentified, we take its six nearest identified track points {(x_i, y_i), i = 1, 2, ..., 6} and seek a quadratic function f(x) = c0 + c1*x + c2*x^2 that minimizes

    MSE = (1/6) * sum_{i=1..6} [y_i - f(x_i)]^2                (1)

The task of CFMMSE is to evaluate the coefficients c_i. Expressed in matrix form, let

    Y = [y_1, y_2, ..., y_6]^T,
    M = the 6x3 matrix whose i-th row is [1, x_i, x_i^2],
    C = [c0, c1, c2]^T                                         (2)

The error matrix is then

    E = Y - M*C                                                (3)

and formula (1) becomes

    MSE = (1/6) * E^T * E                                      (4)

Substituting formula (3) into formula (4), differentiating with respect to C and setting the derivative to zero, we obtain

    C = [M^T * M]^{-1} * [M^T * Y]                             (5)

C is the coefficient vector we desire. Using f(x) = c0 + c1*x + c2*x^2 we then fix the track point of the current column. Figure 11 shows the extraction before the fitting layer and Figure 12 the result after fitting.

Figure 11. The extraction before fitting layer
Figure 12. The extraction after fitting layer
Figure 13. Extraction with elastic line enclosure

III. CONCLUSION
We have implemented a method for extracting the movement track of cardiac structure gray-points. Application has demonstrated encouraging results even without manual interference. The preprocessing steps, including enclosure with elastic lines, binary segmentation with a threshold array, 4-neighborhood connecting and candidate region selection, largely remove the severe noise and much of the interfering gray information that cannot easily be removed by automatic computer processing. Hierarchical layer processing and the strict linear-template conditions, based on correlations of time, position and gray level, guarantee the exactness of the extraction.

We must admit that the extracting method needs further improvement. The detection of wave crests with a straight line needs to be combined more closely with prior knowledge of cardiac periodicity, and the coefficients of the linear template can be refined. So far, our extraction of the movement track is based on the detection of genuine edge points, while an omnidirectional M-mode echocardiographic image may contain spurious edge points as well as genuine edge points that are not part of the target border. For this reason, subsequent manual correction by cardiologists remains necessary for some images.
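Formulas (1)-(5) of the fitting layer amount to an ordinary least-squares quadratic fit through the six nearest identified track points. A minimal numpy sketch of that computation (an illustration of formula (5), not the authors' original implementation):

```python
import numpy as np

def cfmmse_fill(points, x_query):
    """Fit f(x) = c0 + c1*x + c2*x^2 to the six identified track points
    nearest to column x_query by least squares, i.e. formula (5):
    C = (M^T M)^{-1} M^T Y, and return the fitted y for that column."""
    pts = sorted(points, key=lambda p: abs(p[0] - x_query))[:6]
    xs = np.array([p[0] for p in pts], dtype=float)
    ys = np.array([p[1] for p in pts], dtype=float)
    M = np.column_stack([np.ones_like(xs), xs, xs**2])   # formula (2)
    C = np.linalg.solve(M.T @ M, M.T @ ys)               # formula (5)
    c0, c1, c2 = C
    return c0 + c1 * x_query + c2 * x_query**2
```

Solving the 3x3 normal equations directly mirrors formula (5); in practice `np.polyfit(xs, ys, 2)` would give the same coefficients with better numerical conditioning.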


REFERENCES
[1] Lin Qiang, Jia Wenjing, Zhang Li. A Method for Detecting Dynamic Information of Sequential Images: Omnidirectional Gray~time Waveform and its Applications in Echocardiography Images. Proceedings of CISST 2001, Las Vegas, USA, July 2001, Vol. II: 760-763.
[2] A. Hammoude. A Contour Extraction Algorithm for Echocardiographic Images. IEEE Computers in Cardiology, Vol. 24: 537-540, 1997.
[3] Zhang Junyan. Edge Detection of All-directional M-mode Echocardiography Images. Journal of Chengdu University of Information Technology, Vol. 19, Mar 2004.
[4] Zhang Yujin. Image Processing and Analysis. Tsinghua University Press, 1999.
[5] Luo Shuqian, Zhou Guohong. Medical Image Processing and Analysis. Science Press, 2003.
[6] Zhang Lu. Medical Image Processing. Shanghai Science and Technology Press, 2002.


Research on movement information of special goal from a sequence of heart's B-ultrasonic images
Yang XiuZhi, Lin Qiang (Institute of Biomedical Engineering, Fuzhou University, Fuzhou, Fujian 350002, China)

Abstract: There are many contours of structures in cardiac B-ultrasound images, and the movements of these contours describe the main characteristics and function of the heart's inner structures. They are, however, very difficult to obtain with optical flow methods, because the heart is a deformable, irregular object and the difference between the media inside the heart is small. This paper discusses a method for finding the movement information of the heart's inner structures in cardiac B-ultrasound images. Based on the characteristics of cardiac motion, it finds the instantaneous displacement by sampling along a freely chosen line, and then derives the speed and acceleration of the characteristic points. This provides a quantitative foundation for cardiac diagnosis.

Key words: sequence image; movement information; extraction

1. Introduction
Obtaining an object's structure, position and movement information from a sequence of pictures of the same scene taken at different times is a significant topic in computer vision research.
In particular, parameter estimation of an object's movement is an important application in the military, industry and medical science. Current studies of movement parameter estimation are mostly improvements based on the optical flow constraint equation put forward in the early eighties of the last century by the American scholars Horn and Schunck. The process applies Horn's optical flow constraint equation while adding a global smoothness constraint, or segments the image according to its characteristics and uses the distinct characteristics of each block to add local smoothness constraints; from this the object's optical flow field (the speed field) and its displacement vectors in the different areas are obtained. This analytical method based on the optical flow field makes the following assumptions about the image of a moving object: (1) there is a short interval between adjacent images; (2) the optical flow field caused by the same moving object is continuous and smooth [1], that is to say, the speeds of close points on the same object are similar. Because of these limiting conditions, and the restrictions imposed by the ill-posed character of the optical flow constraint equation, no good analytical method has been found for high-noise gray-scale images whose target is a deforming object, such as cardiac B-ultrasound images. This paper therefore discusses a method of parameter estimation based on the characteristics of the object's movement, different from methods based on the optical flow constraint equation. It obtains good results for the speed (the optical flow field) and the acceleration (the second-order optical flow field) of characteristic points of the heart's inner structures from a sequence of cardiac B-ultrasound images.

2. The characteristics of the cardiac B-ultrasound image
Owing to the dynamic action of the blood, movement and deformation take place continuously in every structure inside the heart during its motion. And because the surface of each medium in the heart reflects ultrasound with a different intensity, there are some contours of the heart's intracavitary structures whose shape and gray level both change in the B-ultrasound section. These contours describe the main characteristics of each intracavitary structure during movement, which comprises both displacement and deformation of the heart structure, and they also reflect the appearance and function of each structure inside the heart. It is therefore quite significant for the diagnosis and analysis of heart disease to study the movement patterns of these contours. But it is rather difficult to obtain the movement contours of heart structures with the optical flow field method, for the following reasons. (1) The heart is a deformable object, and it deforms during its movement. Because of movement and noise, certain marginal points (characteristic points) are covered over (or lost), so the contour of the heart's inner structure is incomplete and breaks appear in the contour in the B-ultrasound section; it is therefore difficult to apply the optical flow equation under its assumption that the brightness of the object's characteristic points is continuous and smooth. (2) The heart is an irregular object. When the ultrasound enters from different directions, the reflected intensities from the same surface differ with the angle of incidence, so edges representing the same structure in the B-ultrasound section have different gray values at different positions, or the same edge has different gray values at different moments.
(3) The ultrasound reflection is strong at the surface of media with a large acoustic difference, while it is rather weak where the difference is small; the latter results in unclear boundaries in the ultrasound image, such as the contour of the left ventricle affected by the papillary muscle. Therefore, even applying the central matching method of edge clustering [2] to detect the ventricular contour, we have not yet found a good way to obtain characteristic points and match them to the next frame. For these reasons, a characteristic-based extraction method for the movement parameters is adopted.

3. The extraction of movement information based on characteristics
Being impulsed by the blood and pulled by the surrounding muscle, the movement of the heart is composed of its own contraction (or expansion) and the displacement caused by the blood's impulsion and the muscle's pull. Since the contraction and expansion of the heart, as well as the opening and closing of the heart valves, represent cardiac function, we are only concerned with these movements.
From a great deal of observation of cardiac B-ultrasound image sequences we can discover that, although the overall movement of the heart is complicated, a contour's movement within a small scope is nearly along a straight line, such as the movement of the walls of the left ventricle in the minor-axis view. Therefore, by collecting the positions of characteristic points on the walls of the heart's left ventricle along this straight direction at different frames, and displaying them in time sequence, we can obtain the moving track of this structure and then the speed and acceleration information of the ventricular walls.

The method is implemented as the following process. First, some frames of the cardiac B-ultrasound image sequence are collected and stored in memory; the number of stored images should cover 3-5 heartbeat periods. Generally speaking, the heartbeat period is about 1 second. To reproduce the whole beating process without distortion, we collect 25 pictures per second. Secondly, we select the heart structure's contour of interest, observe the heart's movement, draw a sampling line perpendicular to the direction of the structural contour's movement, and record the gray value of each point on this sampling line in every frame. Finally, we arrange these recorded gray values in time sequence to build a gray~time diagram [3]. From this diagram we can see that the moving track of a particular structural contour of the heart reflects not only the amplitude of the heart's contraction or expansion but also the translational movement in the same direction. The movement direction of each heart structure is different, so the direction of the sampling line is likewise chosen freely.

In the gray~time diagram we can find a line covering the contour's characteristic points from each frame, which is the characteristic point's moving track spread along the time axis. The movement speed of this contour point can then be figured out from v = ds/dt, and smoothing must be considered when calculating the speed. Since the interval between pictures is very small, the contour's displacement is also very small. The time axis and displacement axis of the gray~time diagram both take the pixel as their unit: each pixel represents about 0.38 mm of displacement and 40 ms of time, so displacement is expressed as an integral multiple of 0.38 mm. Owing to quantization error and noise when obtaining the contour boundary, the speed between every two frames would exhibit step changes, which cannot truly reflect the contour's movement status.
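The gray~time diagram construction can be sketched as stacking, frame by frame, the gray values sampled along the chosen line. This is illustrative only; `frames` (a sequence of 2-D gray-level arrays) and `line_pixels` (the (row, col) coordinates of the sampling line) are assumed inputs, not names from the paper.

```python
import numpy as np

def gray_time_diagram(frames, line_pixels):
    """Build a gray~time diagram: column t holds the gray values along
    the sampling line in frame t, so a moving contour traces a track
    across the diagram as time advances."""
    cols = []
    for frame in frames:
        cols.append([frame[r, c] for r, c in line_pixels])
    # shape: (points along the sampling line) x (number of frames)
    return np.array(cols).T
```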
Therefore, a smoothing technique that averages over several frames is adopted to smooth burrs and mutation points: the speed at frame i is v_i = (y_{i+1} - y_{i-1}) / (t_{i+1} - t_{i-1}), in which y_{i+1} and y_{i-1} are the displacements of points i+1 and i-1, while t_{i+1} and t_{i-1} are the corresponding times of frames i+1 and i-1.

4. Experiment result
Using this method to analyze a sequence of cardiac B-ultrasound images, the moving waveform of the ventricular wall is obtained, as illustrated in Figures 1 and 2, and the moving speed of the contour's characteristic point is calculated from its moving track, as illustrated in Figures 3 and 4.

Fig. 1 Moving waveform of the wall of the heart's left ventricle in the minor axis
Fig. 2 Moving waveform of the heart's left ventricle in the long axis
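The smoothed speed estimate, a central difference over neighbouring frames using the calibration of about 0.38 mm per pixel and 40 ms per frame quoted in the text, can be sketched as:

```python
MM_PER_PIXEL = 0.38   # displacement calibration quoted in the text
S_PER_FRAME = 0.04    # 25 frames per second -> 40 ms per frame

def contour_speeds(track_y):
    """Central-difference speed v_i = (y_{i+1} - y_{i-1}) / (t_{i+1} - t_{i-1})
    in mm/s, for a track given as pixel displacements, one per frame.
    Spanning two frames smooths the step changes caused by the pixel
    quantization of the gray~time diagram."""
    speeds = []
    for i in range(1, len(track_y) - 1):
        dy_mm = (track_y[i + 1] - track_y[i - 1]) * MM_PER_PIXEL
        dt_s = 2 * S_PER_FRAME
        speeds.append(dy_mm / dt_s)
    return speeds
```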


Fig. 3 Contour's moving speed for Fig. 1
Fig. 4 Contour's moving speed for Fig. 2

5. Existing problems
Being impulsed by blood and pulled by muscle, some parts of the heart do not move completely along a straight line during the cardiac cycle; the moving track collected along the sampling line then does not consist of fixed points, which introduces deviation into the calculation. But owing to the wholeness of the heart, the movements of heart structures within a small area are similar: even if a point moves slightly, the contraction or expansion of its nearby points is still similar and approximately represents this point's movement status, so the movement information obtained can still provide a reference for diagnosis. However, when the translational movement of a characteristic point is obvious, the method no longer applies. The best way to resolve this is to separate the translational movement from the heart's functional movement, which is also our next crucial task.

REFERENCES
[1] Chen Zhen, Gao Mantun, Shen Yunwen. Research and development of optical flow field computation techniques for images. Journal of Image and Graphics, 2002, 5: 434-439.
[2] Lin Qiang. A detection method for the optical flow field of cardiac B-ultrasound contours. Acta Electronica Sinica, 1996, 4.
[3] Lin Qiang, Zhang Li. Implementation of an omnidirectional gray~time waveform system and its application to echocardiography. Journal of Electronic Measurement and Instrumentation, 2002, 6.
[4] Wu Lide. Computer Vision [M]. Shanghai: Fudan University Press, 1993.


Assessment of interatrial septal motion by LEJ-1 Omnidirectional M-mode echocardiography
GUO WEI, LU LI-HONG, CHEN BIN, et al.
Cardiovascular Disease Research Institute, Fujian Provincial Hospital, Fuzhou 350001, China

To evaluate the clinical value of assessing interatrial septal motion by LEJ-1 omnidirectional M-mode echocardiography (OME). Methods Forty-one patients with valvular diseases (male 16 and female 25, aged 14-65 years), seven patients with congenital heart diseases (CHD) (male 4 and female 3, aged 22-57 years), and twelve normal subjects (male 14 and female 40, aged 14-48 years) were studied. The interatrial septum was imaged by transesophageal echocardiography (TEE) and measured simultaneously by LEJ-1 OME. Results Compared with the normal group, the velocities of interatrial septal motion were reduced (P


Statistics
All data are presented as mean ± standard deviation (x±s). Sets of data were compared using a non-parametric t-test. Differences were considered statistically significant at a value of P
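The group comparison described (groups summarized as mean ± SD and tested for a significant difference) can be sketched with a plain two-sample t statistic. This is an illustration only: the paper does not publish per-subject data, and a Welch t computed from raw samples is shown here as a stand-in for the paper's test.

```python
from statistics import mean, stdev

def two_sample_t(a, b):
    """Welch's two-sample t statistic for comparing two groups that would
    be reported as mean +/- standard deviation (computed here from raw
    samples).  Large |t| corresponds to a significant group difference."""
    ma, mb = mean(a), mean(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    se = (va / len(a) + vb / len(b)) ** 0.5   # standard error of the difference
    return (ma - mb) / se

# identical groups give t = 0; a clearly shifted group gives a large |t|
```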


Interatrial septal thickness, motion time and velocity at the upper rim, mid-septum and lower rim:

              Upper rim                                 Mid-septum                                Lower rim
          thickness(mm)  time(s)    velocity(mm/s)  thickness(mm)  time(s)    velocity(mm/s)  thickness(mm)  time(s)    velocity(mm/s)
Systole   4.97±1.12      0.21±0.04  46.31±16.10**   4.28±1.23      0.19±0.04  41.65±23.34     7.30±1.74**    0.25±0.06  39.05±13.80
Diastole  4.75±1.40      0.21±0.05  33.04±16.64     3.84±1.07      0.22±0.05  26.74±10.85     5.96±1.60      0.27±0.06  37.78±16.22

** Compared with the normal group: P


ventricular size and function. Am J Cardiol 1998 Jun; 81: 82G-85G.
5 Katsuki K, Nakatani S, Kanzaki H, et al. Clinical validation of accuracy of anatomical M-mode measurement: effect of harmonic imaging. J Cardiol 2001 Jan; 37: 35-42.
6 Iwado Y, Mizushige K, Watanabe K, et al. Quantitative analysis of myocardial response to dobutamine by measurement of left ventricular wall motion using omnidirectional M-mode echocardiography. Am J Cardiol 1999 Mar; 83: 765-9.

Figure 1: normal interatrial septal motion
Figure 2: interatrial septal motion in valvular disease
Figure 3: interatrial septal motion in congenital heart disease


Interpretation of the Interaction of Endocrine Hormone Axial Systems and Mammary Gland Tumor
Chen Chengqi
Xiamen First People's Hospital, Xiamen 361003

The cause and pathogenic mechanism of mammary cancer and hyperplasia of the mammary glands are closely related to endocrine dysfunction. The extended influence of internal and external factors results in disorder of the endocrine hormones and the immune system and in functional sex hormone dysfunction, which deteriorates into dysfunction of the hypophyseal hormones and obstruction of enzyme metabolism. E2 and IGF hormones combine with each other; with the mutation of the cancer-inhibiting gene P53, the cancer gene is activated and new blood vessels are formed. This paper is an exploration of these issues.

1. The Dysfunction of Endocrine Axial Systems Is the Result of Multiple Factors
Mammary gland disease is affected by the entire endocrine hormone system, which adjusts the functional balance at all times and creates the nerve-endocrine-immune network systems; these interact with and compensate for each other over the menstrual cycle, stimulating the metabolism and subinvolution of mammary gland tissues.
The literature [1] reports that an analysis of the division times of mammary cancer cells over the adult lifetime shows that, on average, it takes 10 years for the initial cells to turn into a clinical tumor; for tumors after menopause the time is longer, indicating that the tumor stays in a state of dormancy [2].

The dysfunction of the endocrine axial systems is the result of a combination of exogenous and intrinsic factors. Exogenous factors include an unhealthy diet (addiction to one kind of foodstuff, excessive intake of alcohol, etc.), consumption of high-fat, high-protein and high-calorie foodstuffs (fish, shrimps, etc.), abuse of sex hormones, contraceptive drugs or certain antihypertensives, artificial abortion, and artificial menopause; hypothalamic-hypophyseal hormone dysfunction caused by psychological and psychic factors also plays an important role. Research [3] by Zeng Kajia, Wu Yilong, et al. reveals that women who suffer psychological traumas from setbacks at work, hardship in life, loss of a beloved one, domestic disharmony or separation, or accidents are 32 times more likely to develop mammary cancer than normal women. Research [4] by Zhang Lihui and Huang Daifa shows that marriage and domestic problems account for 66.4%, personal problems in studies and work 8.5%, and problems in other aspects 9.8%. Qian Liqi and Li Jie reported, in their survey of educational level and mental labor in the upper class of society, that cadres and intellectuals account for 59.4%, workers and employees in government institutions 33.8%, and peasants 6.73%. These figures indicate that mental suppression is the major risk factor relating endocrine dysfunction to mammary cancer.
Endogenous factors include heredity, obesity, nulliparity, and internal endocrine gland disease or impairment (such as hypothalamic-hypophyseal miniature adenoma, hyperthyroidism, hepatitis, cholecyst polypus, adrenal tumor, hysteromyoma, ovarian cyst, etc.). Mental pressure and suppression, and the extended insomnia caused by such diseases or by traumas from repeated accidents, can stimulate the tissue cells of the hypothalamic-hypophyseal endocrine glands, continuously creating or repeating irritability, stimulated release and the negative feedback that suppresses the hormones; when such a condition cannot be remedied with medication, functional sex hormone dysfunction occurs and progresses to hypophyseal hormone dysfunction, which releases abnormal amounts of hormones, resulting in disorders of the TSH axial system, the HPO axial system, the HPA axial system, the GH axial system and the SHBG (the transport hormone combination carrier), along with disorders of plasma free hormones, receptors, and hormone and gene (transport and combination) regulation. Meanwhile, as the hormone metabolite is oxygen-deficient and cannot be restored, acidosis develops, inducing the basal cells of the tumor to generate and secrete angiogenic factors such as VEGF and bFGF [6], exciting blood vessel neogenesis and, together with the estrogen binding with the regulatory factors, introducing tumor cell targets into the genes, thereby promoting the growth and movement of cancer and tumor cells.

2. Interaction of Endocrine Sex Hormone Axial Systems in Hyperplasia of the Mammary Glands
The interaction of the sex hormone axial systems in patients with hyperplasia of the mammary glands varies: some remain normal, some rise, and some drop. This is because (1) the endocrine glands of each patient have a different threshold value; (2) besides the hormones which specifically affect target organs, the hormones of other target organs also play a regulating role; and (3) nerves and humors other than hormones also play a regulating role [7]. However, most patient groups exhibit regular changes in endocrine hormones, indicating that the endocrine hormones still play the dominant role.
When the human body is under the influence of an unfavorable environment and hereditary factors, the E2-P hormones in the plasma stimulate mammary gland growth and change regularly with the menstrual cycle; when the secretion of the endocrine hormones loses its balance, the mammary glands are affected and hyperplasia of the mammary glands occurs.

3. Analysis of Malignant and Benign Diseases of the Mammary Glands and Changes of Endocrine Hormones in High-risk Pre-cancer Groups with Pathological Change
Patients with early-stage mammary cancer suffer from ovarian dysfunction, irregular menstruation, a disrupted balance between estrogens and progestogens, and the effect of high-concentration E2 negative feedback on the HPO [8]. High-concentration progesterone negative feedback affects the HPA, throwing the short and long feedbacks of the ACTH and cortisol hormones into disorder and causing ACTH to stimulate increased secretion of GH hormones; high-concentration GH hormones can also stimulate the growth-inhibiting hormone (somatostatin) and gonadal hormones to release GnRH, resulting in increased secretion of FSH and LH; the slow clearance and limited change of the plasma FSH concentration and its high long-term reserve level are probably related to the decrease of follicle inhibins [9]. The overwhelming majority of patients with mammary cancer and hyperplasia of the mammary glands in the menopausal period suffer from ovarian dysfunction or impairment, follicular atrophy, and fibrous tissue metaplasia (over 70%).
These patients' ovarian steroid production shifted towards the adrenal cortex, causing the reticular cells to secrete a large amount of androsterone that converts to estrogen, while the follicle-stimulating hormone (FSH) and luteotropic hormone (LH) in the body remained at high levels; the statistical results for the FSH and LH concentrations in the patients' plasma therefore showed no big difference. The FSH increases its capability of storing and releasing estrogens and of indirectly stimulating the secretion of estrogens. Persistently high levels of estrogens can cause the HPA and GH hormone systems to lose their balance, resulting in a decrease in the level of sex hormones bound to globulin (SHBG) inside the body as well as increases in the biological utilization of T and E; the T and E freed from SHBG incline towards androgens [10, 11], which stimulate the excessive hyperplasia of sensitive cells, resulting in the rapid acceleration and expansion of cancer cells.

4. Hyperplasia of Cancer and Tumor Genes Stimulated by the Interaction of Estrogens and Growth Factors
The disrupted balance of the HPA axial system causes dopamine to stimulate the growth-stimulating hormones to release rhGH and secrete GH hormones. The function and secretion of the GH hormones are regulated by the growth-inhibiting hormone somatostatin (S.S) and by IGF, resulting in interactions among the dopamine-rhGH-GH, FSH-E2, and S.S-IGF hormone axial systems. It is known [9, 12, 13, 14] that growth hormones (GH) are the trophic hormones of the IGF factors. GH can directly stimulate cell differentiation through local generation of IGF-I, and can indirectly stimulate tumor growth. S.S in normal physiological quantities does not stimulate GH, TSH, PRL, ACTH, LH, or FSH. A rise in GH stimulates the HPA axis to release noradrenalin and secrete S.S, enabling S.S to act by inhibiting GH release; increased secretion of S.S can also inhibit the function of multiple stimulus factors (such as mast cells, the epithelial tissue of the gastrointestinal tract, and internal and external secretory gland tissue); rising levels of S.S can also inhibit TSH secretion and indirectly affect gastrointestinal hormones such as IGF. IGF-1 and the IGF receptor (IGF-IR) regulate body metabolism through this medium. One possible mechanism of the over-expression of the IGF-IR could be the mutation of the cancer-inhibiting gene P53 and the loss of inhibition of the IGF-IR gene.
The IGF-1 in the serum of Phase I and II mammary cancer is 25% higher than that of control samples in the corresponding age group, and the IGF-II level in the stroma of infiltrating mammary cancer exceeds normal levels by 50% [15]. The IGF-IR signal channel is probably open, stimulating the hyperplasia of cells, inhibiting apoptosis, and creating the pathological hyperplasia of mammary cancer, which is probably related to increased expression of the IGF-IR. At the initial stage of development of mammary cancer, cancer-inhibiting genes and cancer genes exist in a balanced state in both benign and malignant tumors of the mammary glands. Proto-oncogenes contain viral genes with homologous DNA sequences. Proto-oncogenes are expressed on the surface of the cells as a compound when abnormal protein products combine with MHI-type molecules inside the cells; proto-oncogenes secreted outside the cells are activated in the forms of point mutation, gene rearrangement, expansion and translocation, and turn into cancer genes. Cancer genes currently known include bcl-x, bax, bad, bak, bik, and bid; cancer-inhibiting genes include bcl-2, bcl-xl, A1, CED9 and MCL-1 [16]. Cancer genes normally regulate genes (PS2 genes) with growth hormones and their growth factors (EGFR, VEGF, bFGF, etc.) as well as estrogens, activate genes (P21 genes) with H-ras, transcribe and regulate genes with c-myc, and interact with their receptors. Meanwhile, as the hormone metabolite is oxygen-deficient and cannot be restored, acidosis develops, inducing the basal cells of the tumor to generate and secrete angiogenic factors of the blood vessels, such as VEGF and bFGF.
VEGF is an activating factor of vascular endothelial cells. VEGF belongs to the heparin-binding protein family, whose members include VEGF-A, VEGF-B, VEGF-C and VEGF-D. VEGF acts under the effect of tyrosine kinase. Tyrosine kinase receptors such as the VEGFRs include VEGFR-1 (flt-1), VEGFR-2 (flk-1/KDR) and VEGFR-3 (flt-4). Tyrosine kinase agonists can stimulate the growth and activity of tumor cells and promote the expression of surface receptors; surface receptors bind with the extracellular matrix and move forward along the matrix channel, while enzymes remove the obstacles on the path forward [17]. Dissolution of and around the basement membrane of the blood vessels stimulates the increase of vascular endothelial cells. It increases the hyperplasia of vascular endothelial cells and the permeability of the blood vessels, indicating that IGF plays a role in stimulating the growth of endothelial cells and the formation of blood vessels. E2 and IGF hormones rise and interact with each other. Proto-oncogenes are excited and turn into cancer genes. Cancer genes often participate, together with estrogens, growth hormones and various enzymes, in the urokinase-type plasminogen activator system, the metalloproteinase system (cathepsin D is an estrogen-induced lysosomal enzyme), and the adhesion and dissolution of tyrosine kinase agonists, and interact with their genes, resulting in the stimulation of the growth of cancer cells, the formation of new blood vessels, and the hyperplasia and metastasis of tumor cells.

5. Interaction between Endocrine Hormones & Immune System

The rise of ACTH hormones in patients with mammary cancer is probably linked to the creation of ACTH hormones by mammary cancer cells. The rise of plasma S.S levels can also cause ACTH secretion to increase. High-concentration progestin likely results from the fact that 90% of progestin inside the liver binds with albumin and 10% binds with CBG (transcortin), which degrades and is transported to target receptor cells. Negative feedback from high-concentration plasma progestin stimulates the secretion of the hypothalamic-hypophyseal adrenocorticotropic cells (ACTH); the rise of ACTH causes the inhibition of adrenal cortisol and the disorder of the HPA axis, resulting in the inhibition of active factors of the immune cells, such as interferon, tumor necrosis factor and immune T cells and B cells. The prevailing belief [9] is that the rise of glucocorticoid, progestin and adrenocorticotropin is a reaction of the inhibited immune system. The rise of estrogen could inhibit the reaction of lymphocytes; high-concentration progestin and GH could also inhibit the reaction of immune lymphocytes, which results in the inhibition of the immune system, the induction of related cancer genes by the active estrogen, and the over-splitting, hyperplasia and reproduction of active cells.

6. Interpretation of the Figures concerning the Impact of the Dysfunction of Endocrine Hormone Axial Systems and Cancer Genes

The dysfunction of endocrine axial systems is the result of a combination of exogenous and intrinsic factors.
The dysfunction of hypothalamic-hypophyseal endocrine hormones caused by mental stress is the major risk factor behind mammary cancer. The Figure of the Interaction of Endocrine Sex Hormone Axial Systems shows that the interaction of endocrine sex hormone axial systems (see the left figure) develops into hormone and gene dysfunction (see the right figure). Due to irregular menstruation, the balance between E2 and P is disrupted; estrogens increase and affect the HPO axis; the dysfunction of liver metabolism results in disorder of the utilization rate of T-C by SHBG; the stimulation of sensitive cells by androgens results in over-hyperplasia and fold increases of the sensitive cells; progestins cause malfunction of the short and long feedback of the hypothalamic-hypophyseal-adrenal axis and a shortage of phenylalanine, hydroxylase and aminoethanol in the liver for hypothalamic-hypophyseal hormones such as GH, S.S. and ACTH, thereby impairing the synthetic pathway phenylalanine→tyrosine→dopa→dopamine→noradrenaline→adrenaline and resulting in the inhibition of the immune system and the occurrence of functional sex hormone dysfunction, which progresses to hypophyseal hormone dysfunction. E2 and IGF hormones interact with each other; transport carrier binding (SHBG) diminishes; the metabolism of enzymes is obstructed; gene translocation and gene rearrangement occur, inducing neogenesis of blood vessels and formation of benign tumor blood vessels. Meanwhile, proto-oncogenes are excited and turn into cancer genes that escape the inhibition of the mutated cancer-inhibiting gene P53, thereby causing estrogen-susceptible cancer genes to undergo clonal selection and stimulating the initial development of mammary cancer.
The Figures of the Interaction of Endocrine Sex Hormone Axial Systems provide the theoretical basis for the diagnosis and treatment of hyperplasia of the mammary glands and early-stage mammary cancer.

References

[1] Basil A, Stoll MD. Pre-menopausal Weight Gain and Progression of Breast Cancer Precursors. Cancer Detection and Prevention, 1999, 23: 31-36.
[2] Bi Xun, Zhang Jinzhe & Ye Qinqin. Progress of Research on the Dormancy of Malignant Tumor. Cancer, 2000, 19(7): 722-723.
[3] Zeng Kajia, Wu Yilong, Ma Guosheng, et al. Comparative Research of Cases of the Risk Factors of Mammary Cancer in Guangzhou City. China Tumor, 2001, 10(12): 702-704.
[4] Zhang Lihun, Huang Daifa, Xu Fengzhi, et al. Discussion of Cause and Variety of Mammary Cancer and Their Association with Life Events. China Tumor Clinic & Recuperation, 2001, 7(5): 47-48.
[5] Qian Liqi, Li Jie, Liu Qilun, et al. Research on Causes of Hyperplasia of Mammary Glands (with statistical analysis of 363 clinical cases). China Tumor Clinic & Recuperation, 2001, 6(6): 23-24.
[6] Li Jinjun, Ge Chao & Zhu Hongxin. Establishment of Tumor Vessel Neogenesis Research Technology Platform. Tumor, 2001, 12(6): 435-437.
[7] Chen Chengqi. Hyperplasia of Mammary Cancer & Changes of Endocrine Hormones. Marked Immunity & Clinic, 1999, 6(2): 104, 130-132.
[8] Yin Boyuan, et al. Application of Radioactive Immune Analysis in Medical Science. Beijing: Atomic Energy Press, 1991: 130-134.
[9] Shi Yifan. Consonance Endocrine & Metabolism Science. Beijing: Science Press, 1999: 23, 143, 220, 260, 1757.
[10] Peiris AN, Sothman MS, Ainan MS, et al. The Relationship of Insulin to Sex Hormone-Binding Globulin: Role of Adiposity. Fertil Steril, 1989, 52: 69-72.
[11] Preaiosi P, Barrett-Connon E, Papoz L, et al. Interrelation between Plasma Sex Hormone-Binding Globulin and Plasma Insulin in Healthy Adult Women, the Telecom Study. J Clin Endocrinol Metab, 1993, 76: 283-287.
[12] Xu Qin, Ma Limin & Sang Jianfeng. The Effect of Growth Hormone on Metastatic Tumor of Carcinoma of Colon in Mice. Journal of Practical Tumor, 2001, 16(2): 90-92.
[13] Lu Nan, Chi Zhihong & Zheng Wenyao. Growth Inhibins and Their Receptors & the Diagnosis and Treatment of Tumor. China Tumor Clinic & Recuperation, 2000, 7(5): 92-93.
[14] Liu Zhenxin & Li Ming. Insulin Growth Factors I and Receptor & Summary of Tumor and Medical Research, 2001, 7(1): 5-7.
[15] Sandra ED. A Dominant Negative Mutant of the Insulin-like Growth Factor 1 Receptor Inhibits the Adhesion, Invasion and Metastasis of Breast Cancer. Cancer Research, 1998, 58: 3533-3361.
[16] Adams JM, Cory S. The Bcl-2 Protein Family: Arbiters of Cell Survival. Science, 1998, 281(5381): 1322-1326.
[17] Cai Qiang, Ye Yinghu & Wang Guo'an. Molecular Mechanism of the Invasiveness of Colloid Cell Tumor. Summary of Medical Science, 2001, 7(12): 730-732.
[18] Liu Xinmin. Functional Endocrine Science, 2nd Edition. Beijing: P. Press, 1997: 519.


[Figure: Interaction of Endocrine Sex Hormone Axial Systems (by Chen Chengqi); arrow styles indicate inhibition and promotion.]

GnRH: gonadotropin releasing hormone; CRH: corticotropin releasing hormone; GRF: growth hormone releasing factor; TRH: thyrotropin releasing hormone; Somatostatin: growth hormone release-inhibiting hormone; ACTH: adrenocorticotrophic hormone; TSH: thyroid-stimulating hormone; GH: growth hormone; LH: luteotropic hormone; FSH: follicle stimulating hormone; PRL: prolactin; Adrenal: adrenal gland; Cortisol: cortisol; Renin: renin; T: testosterone; P: progestin; E: estradiol


[Figure: hormone axial system diagram — ACTH, TSH, GH, LH, FSH and PRL act on the adrenal gland (cortisol, renin) and on T, E and P, which undergo metabolic inactivation and excretion.]


A Method for Detecting Dynamic Information of Sequential Images—Rebuilding of the Gray (Position)~time Function on Arbitrary Direction Lines

Lin Qiang, Li Wei (Department of Information and Communication Engineering, Fuzhou University, Fuzhou, Fujian, China, 350002) Email: chianglin@fzu.edu.cn

Abstract—Compared with other carriers such as speech and text, images carry the largest amount of information; this is especially true of sequential images, which contain not only a large number of ordered static images but also dynamic information such as the correlation, difference, and temporal relationship between frames. The information hidden in sequential images is therefore mass data, and how to mine these data, especially the dynamic information, is a very important problem. This paper proposes a method for mining dynamic information from sequential images, based on rebuilding their gray (position)~time function on direction lines. It is a motion-based method of analyzing dynamic information that operates on sequential images. After discussing the setting of direction lines and the algorithm for rebuilding the gray (position)~time function, the paper emphasizes a method of omnidirectional gray (position)~time waveform and its application to B-scan images.
A system applying the proposed method, the LEJ-1 Omnidirectional M-mode Echocardiography system, and its clinical application are also presented.

Index terms—data mining, tracking of moving objects, rebuilding of the gray (position)~time function, omnidirectional gray (position)~time waveform

Ⅰ. INTRODUCTION

Sequential images, composed of static images arranged in order one by one, contain not only the information of the static images but also, more importantly, the motion information hidden between frames. Optical flow detection, a well-known computer-vision method for dynamic information detection, analyzes the velocity field of each pixel in the images from the viewpoint of statistics [1]. But it suffers from many mathematical difficulties when applied to deformable objects and non-continuous views, and furthermore there remain many statistical, conjectural and uncertain problems as far as a specific understanding of velocity is concerned. We have therefore searched for a new method to mine motion information from sequential images: rebuilding the gray (position)~time function (waveform) on direction lines to extract the motion information of moving views, moving objects, and moving characteristic points in the images.

Rebuilding the gray (position)~time function (waveform) on arbitrary direction lines means reproducing the recognized gray values of views, objects or characteristic points lying on a direction line in the sequential images, along the direction of that line, and further determining their positions. The position of each pixel on a direction line is determinate, since the whole line segment is defined artificially, and, as follows from the basic requirements of sequential images, every view, object or characteristic point within a single frame can be regarded as relatively stationary.
So, if the gray position on the recognized direction line is determinate, and given the certainty of the time interval between two neighbouring frames, the gray (position)~time function (waveform) formed by the moving gray points on the direction lines extending through time can ultimately be obtained uniquely.

The movement of views, objects or characteristic points in practical images, however, is very complex, and in most cases they do not simply move back and forth along a single direction line. As a result, the setting of direction lines for sequential images in practical applications should combine several approaches, so as to mine the practical information data and to trap the tracked target points through the combination of direction-line settings and suitable algorithms.

According to the analysis above, this analysis method, combined with a suitable algorithm, sets the direction lines so as to focus on tracking moving objects; it is thus a method of analysis based on the movement of moving objects with the help of sequential images, which are composed of static images. It is a significant method of dynamic information detection because the resulting objects are clear, every parameter of the motion information is definite and correct, and it can be used to understand the details of the motion.

Ⅱ. THE SETTING OF DIRECTION LINES TO REBUILD THE GRAY (POSITION)~TIME FUNCTION AND THE MINING OF MOTION INFORMATION DATA

(This work was supported by Fujian province Natural)

As shown above, direction lines are used to track moving objects, so the optimal direction line is simply the trace of the moving object being tracked. But this is impossible, because if the motion trace were known, which amounts to knowing the whole movement, there would be no need to analyze it. What we can do is to set artificially some controllable and positionable direction lines that can capture the predefined views, objects and characteristic points, combined with related algorithms used to set the direction lines, to obtain the moving trace of the trapped objects, that is, the gray (position)~time function, whose first-order differential is the velocity of the movement and whose second-order differential is its acceleration. Thus we can mine various coherent motion information data.

The methods used to set direction lines can be classified into two kinds: the setting of fixed direction lines and the setting of direction lines variable with frames, as follows:

(1) Setting of fixed direction lines

In this way, the position coordinates of the direction lines are fixed and invariable in each frame of the sequential images. The detailed settings can be divided into three classes:

• a single arbitrary direction line
• an arbitrary number of arbitrary direction lines
• multiple groups of arbitrary direction lines

Here, multiple groups of arbitrary direction lines, a kind of arbitrary number of arbitrary direction lines, means there are several groups of moving entireties in a single image. With this setting, the settings of the several direction lines are associated with and restrict each other, because every moving entirety in a group is coherent.
For example, in the moving entirety group of the left ventricular short axis in cardiac scans, the moving directions of every part tend to be centripetal. Thus the direction of every direction line on every part of the group is commonly centripetal, which forms the association and restriction of each direction.

(2) Setting of direction lines variable with frames

With this way of setting, the position coordinates of the direction line keep altering as the sequential frames change, rather than being fixed in each frame of the sequential images. It can use an arbitrary number of arbitrary direction lines, as shown above. This method is therefore more complex than the setting of fixed direction lines, but it may be a way to track more complex moving objects. As required above, the main idea of the method is that all of the direction lines should be controllable and positionable, which requires that the altering of this kind of direction line be regular and operated according to a certain algorithm, so that the lines can be positioned in the different frames of the sequential images.

In this way, we can track the moving trace of an object that moves in a very complex manner within a deviation of R from the centerline, as illustrated in Fig. 1, where the length of the direction line is 2R and the line is perpendicular to the centerline. The stepping translation of the direction line between two neighbouring frames is d, and there is a direction line corresponding to each frame to track an object point extending with time, frame by frame.

Fig. 1 The setting of direction lines varied with frames in the tracking of moving objects (direction line n in frame N; direction line n+1 in frame N+1; step d)
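As a minimal sketch of this Fig. 1 scheme (an illustration under our own assumptions; the function name and coordinate conventions are ours, not the authors' implementation), each frame k is assigned a segment of length 2R perpendicular to the centerline, whose foot point advances by d per frame:

```python
import math

def direction_lines(start, direction, d, r, n_frames):
    """One direction line per frame: a segment of length 2r,
    perpendicular to the centerline through `start` along `direction`,
    whose foot point steps forward by d between neighbouring frames."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm      # unit vector along the centerline
    px, py = -uy, ux                   # unit normal to the centerline
    lines = []
    for k in range(n_frames):
        cx = start[0] + k * d * ux     # foot point for frame k
        cy = start[1] + k * d * uy
        lines.append(((cx - r * px, cy - r * py),
                      (cx + r * px, cy + r * py)))
    return lines
```

For a horizontal centerline, the k-th line is the vertical segment of half-width r at x = k·d, so an object that wanders within r of the centerline is always crossed by the current frame's line.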
As a result, the moving trace of the object can be extracted from the gray (position)~time function (waveform), much as in sampling techniques. The moving trace of the bright-spot object of the ECG on the lower part of the echocardiogram is obtained with this kind of direction line in practice, as illustrated in Fig. 2.

Fig. 2 The application of the setting of direction lines variable with frames in the tracking of the bright-spot object of the ECG on the echocardiogram

The various setting methods of direction lines have been introduced above. We focus on introducing an application of the rebuilding of the gray (position)~time waveform function to echocardiography, because the level of research on sequential echocardiography images is continuously improving: related technologies such as the gray contour detecting technique are maturing gradually, and related research such as the detection of the contour-based optical flow field of the heart in B-scan images is becoming much more profound. This kind of application belongs to the setting of multigroup arbitrary-number direction lines, with which we developed the LEJ-1 Omnidirectional M-Mode Echocardiography System and applied it to the clinic. This research won a national patent of invention in the P. R. of China. The operating principles of the system and its clinical application are detailed as follows.

Ⅲ. THE PRINCIPLES AND CLINICAL PRACTICE OF THE OMNIDIRECTIONAL GRAY~TIME ECHOCARDIOGRAPHIC WAVEFORM

When this method is applied to echocardiography sequential images, the system can produce an omnidirectional gray~time echocardiography waveform, also called omnidirectional M-mode echocardiography. The long-axis (left) and short-axis (right) ultrasound wave-line directions (broken lines), the actual motion orientations (solid lines) of the cardiac structure, and the gray~time waveforms of the direction lines of three points (A, B, C) on the echocardiography images are illustrated in Fig. 3. As seen from the figure, few ultrasound wave-lines can follow the tracks of the actual motion direction of some parts of the cardiac structure. In particular, for most parts in the short axis, even if some direction wave-line is tangent to a point of the ventricular wall at one moment, it will not remain so at the next moment. According to the rebuilding principle of the omnidirectional gray~time waveform, if the direction of the selected sample line is the same as the actual motion direction, the gray intensity data of the pixels on it (e.g. B) can generate a gray~time waveform combined with the corresponding temporal relationships.
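The per-line rebuilding described above can be sketched as follows. This is an illustrative reconstruction under our own assumptions (frames as 2-D gray arrays indexed [row][column], nearest-neighbour sampling), not the LEJ-1 implementation:

```python
def sample_line(frame, p0, p1, n):
    """Sample n gray values along the direction line from p0 to p1
    using nearest-neighbour interpolation; points are (x, y)."""
    (x0, y0), (x1, y1) = p0, p1
    values = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 0.0
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        values.append(frame[y][x])
    return values

def rebuild_gray_time(frames, p0, p1, n):
    """Stack the line samples of successive frames into a 2-D
    gray (position)~time waveform: row = position on the line,
    column = frame index (time)."""
    columns = [sample_line(f, p0, p1, n) for f in frames]
    return [list(row) for row in zip(*columns)]  # transpose to position x time
```

A bright structure crossing the line then traces a curve through the returned array over time, which is exactly the M-mode-style waveform the text describes.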
As shown in Fig. 3, the gray~time waveforms of the three lines are rebuilt from the long- and short-axis echocardiography.

Fig. 3 The long-axis (left) and short-axis (right) ultrasound wave-line directions (broken lines); the actual motion orientations (solid lines) of the cardiac structure; and the gray~time echocardiography waveforms of the direction lines of three points (A, B, C) on the echocardiography images

Fig. 4 The overview of the LEJ-1 Omnidirectional M-mode Echocardiography system

Fig. 5 Omnidirectional gray~time echocardiography waveform in automatic operation

The waveform represents the motion of the ventricular wall of the cardiac structure, because these gray values can stand for the boundary of the cardiac structure in ultrasound images. What is more, all the resulting M-mode echocardiography waveforms, reflecting the movement of different parts of different structures in the heart, are synchronous in time. The first-order differential of these waveforms stands for the motion velocity of the part at that moment, and the second-order differential for the acceleration. The research is important because all of the above can be used to understand the detailed motion parameters of all parts of the cardiac structure in the images, namely the motion amplitude, velocity and acceleration, etc.; it can likewise be used to understand the detailed motion parameters of any part in other video images. From the above analysis of the principle, we can conclude that, for the omnidirectional gray~time waveform detected from echocardiography sequential images, we can select an arbitrary direction at any part of any structure as the sample line (direction line), which can be used to follow the motion track of that part [5]. Thus sample lines are no longer confined to the direction of the ultrasound wave-line, and consequently a substantial improvement in detection accuracy is obtained. Moreover, we can compare the motion parameters of every echocardiography waveform and their temporal-phase relationships, because an arbitrary number of sample lines whose waveforms are synchronous in time can be selected.

We manufactured the LEJ-1 omnidirectional gray~time waveform M-mode system, illustrated in Fig. 4. The system offers two modes of operation, automatic and manual. In the automatic mode, once a center is confirmed, 12 oriented direction lines are ejected every 30 degrees, like a clock panel, and their gray~time waveforms of a specified duration are then expanded on screen or printed on a laser printer, as illustrated in Fig. 5. This mode is mainly suited to left-ventricle short-axis sequence images, whose motion is mainly centrifugal and centripetal.

The second mode is manual operation. For any position and any structure in the echocardiography images, based on its motion direction, the operator can select any direction of sample line with the mouse to build its omnidirectional echocardiography gray~time waveform lasting 5 seconds.
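The automatic "clock panel" layout described above can be sketched as follows (a hypothetical helper of our own; the center and line length are free parameters, not fixed by the paper):

```python
import math

def clock_panel_lines(center, length, n_lines=12):
    """Radial sample lines every 360/n_lines degrees around `center`,
    like a clock panel: each line runs from the center outward."""
    cx, cy = center
    lines = []
    for k in range(n_lines):
        theta = math.radians(k * 360.0 / n_lines)
        lines.append(((cx, cy),
                      (cx + length * math.cos(theta),
                       cy + length * math.sin(theta))))
    return lines
```

With the default n_lines=12 the angular step is 30 degrees, matching the automatic mode; each radial line can then be fed to the gray~time rebuilding to follow centripetal wall motion.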
In this mode, after a sample line is selected, a preview of its echocardiography waveform is displayed on the monitor so that the operator can decide whether to choose it.

Whichever mode is used, a series of resulting M-mode echocardiography waveforms corresponding to a sampling line can be chosen for further detection of various parameters, such as the amplitude, velocity, compression peak velocity and thickness quality of the cardiac structure's motion, detected from an appointed sample line in the echocardiography waveform in Fig. 6. We rebuild the ECG below the echocardiography on the temporal axis of the omnidirectional gray~time echocardiography waveform so as to compare the omnidirectional gray~time waveform with the ECG.

Having passed the physical tests of the Fujian province central test institute and been clinically practised in several hospitals, the method is verified to yield detection data that are accurate, scientific and trustworthy.

Fig. 6 Omnidirectional gray~time echocardiography waveforms in manual operation, compared with the ECG

Ⅳ. THE DYNAMIC INFORMATION IN SEQUENTIAL IMAGES

The most distinctive difference between our omnidirectional gray~time waveform system and the Anatomical M-mode cardiography of the Norwegian company Vingmed is that we obtain synchronous omnidirectional gray~time waveform groups of any part, in any direction and in arbitrary columns (corresponding to arbitrary sample lines), based on dynamic information detected from the sequential images. The dynamic analysis we make on sequential images is similar to magnification analysis of some object unit in static images: streets in a static whole scene look like mere lines, but when further magnified these lines reveal the cars, shops and other details in those streets. In a similar way, the sequential images show the "whole scene" of every cardiac structure from its full view.
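The velocity and acceleration parameters mentioned above are first- and second-order differences of the rebuilt position~time trace; a minimal sketch, assuming a uniformly sampled trace and a known frame interval dt (e.g. 1/25 s for PAL):

```python
def finite_differences(trace, dt):
    """Estimate velocity and acceleration of a sampled position~time
    trace by first- and second-order finite differences."""
    velocity = [(trace[i + 1] - trace[i]) / dt
                for i in range(len(trace) - 1)]
    acceleration = [(velocity[i + 1] - velocity[i]) / dt
                    for i in range(len(velocity) - 1)]
    return velocity, acceleration
```

The velocity list is one sample shorter than the trace and the acceleration list two samples shorter, as expected of successive differencing.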
The motion state, namely the dynamic details of any part of any structure (including the motion waveform, amplitude, velocity, acceleration, etc. of the point's movement over time), can be acquired by the omnidirectional gray~time waveform detecting method. We have applied the method to the analysis of cardiac B-scan/color-scan (echocardiography) sequential images; in fact, this motion-detail analysis method for sequential images can also be employed on other medical sequential images, such as X-ray and coronarography images. Consequently, this method of motion analysis can be spread and developed further.

Ⅴ. CONCLUSION

With the high-speed development of science and technology, the methods used to analyze motion information are becoming much more profound. The main methods currently in use can be classified as: the viewpoint based on statics, which is a basic analysis method based on sequential images; the standpoint focusing on dynamic objects; and the viewpoint focusing on movement. While based on sequential images, the method proposed in this paper builds all of its tactics, such as the setting of the direction lines and the combined algorithms, principally on the moving objects. The designers must therefore know the movement to some extent, and a prior knowledge base is required, in order to set direction lines that make the motion information easier to analyze.

It is required that all the pixels be relatively stationary within a single frame of a sequential image. The time interval between two neighbouring frames is 1/25 second in the PAL system, 1/30 second in the NTSC system, etc., which is a very long time for ordinary objects. So, if the frame interval is reduced further, dynamic information detection of higher-speed movement can be obtained with this method. The direction lines used above are all straight lines; furthermore, if they are modified into controllable, positionable, non-linear direction lines, it will be easier to track more complex movement and will raise this method to a higher level, though at the cost of greater difficulty in determining the directions and the algorithm.

REFERENCES

[1] B. K. Horn, B. G. Schunck. Determining Optical Flow. Artificial Intelligence, 1981, 17: 185-204.
[2] Vikram Chalana, David T. Linker, David R. Haynor and Yongmin Kim. A Multiple Active Contour Model for Cardiac Boundary Detection on Echocardiographic Sequences. IEEE Trans. on Medical Imaging, 1996, Vol. 15: 290-298.
[3] Lin Qiang. A Detecting Method for the Contour-Based Optical Flow Field of the Heart in Ultrasound B-Scan Images. Acta Electronica Sinica, 1996, Vol. 24, No. 4: 122-125.
[4] Lin Qiang, Zhang Li, Jia Wenjing. Dynamic Information in B-Scan Ultrasound Sequential Images and Omni-directional M-mode Echocardiography. Chinese Journal of Scientific Instrument, 2000, Vol. 21, No. 5: 41-43.
[5] Lin Qiang, Tian Guilan. An Investigation on the Matching Tactics of Common Motion in Detecting Flow by Clustered Algorithm. Acta Electronica Sinica, 1997, Vol. 25, No. 1: 58-61.


A STUDY ON METALLIC MICRO-ENCAPSULATION FOR IMMUNOISOLATION IN TRANSPLANTING CELLS AND TISSUES

Zhan Minjng 1, Li Gang 1, Stephen Lu 3, Cui Hualei 1,2
1. Tianjin University; 2. The Children's Hospital of Tianjin; 3. Inspiring Engineering Laboratory of Tianjin University

Abstract-At present, the materials of microencapsulation for cells or tissues are made from gelatinous crude or complex macromolecule materials such as agar and alginate-polylysine-alginate. Owing to the characteristics of macromolecule materials, these kinds of cell microcapsules cannot be applied in the clinic. This paper presents a novel method of microencapsulating cells or tissues: microcapsules of cells or tissues made from metal. The metallic microcapsule is based on a biocompatible metal or alloy such as titanium, gold or iron. The microcapsule has the structure of a hollow metallic sphere, which contains separated and purified cells or tissues for transplantation. This microcapsule can maintain long-term stability in vivo and prolong the survival of the functional tissues or cells. The physical size of the microcapsule is smaller than that of current microcapsules, and hence it will not swell by absorbing water either in vivo or in vitro.

Keywords: metallic microcapsule; graft; titanium; immunoisolation; microencapsulation

Ⅰ. Introduction

Encapsulation of tissues or cells in a semipermeable membrane, as a simple and reliable method of efficiently protecting the graft from rejection and allowing its long-term survival, is a promising experimental substitute for transplant operations. The semipermeable membrane protects the cells from the immune system of the host, while allowing outward diffusion of the needed therapeutic product.
Unlike conventional drug delivery devices, the cells provide an inexhaustible supply of the therapeutic agent (pending cell viability) in an intrinsically stable form, without the problems and expense of protein or drug purification. The technology arose in the 1960s, and the semi-permeable membrane was made from gelatinous crude or complex macromolecular materials such as agar, alginate-polylysine-alginate (APA), chitosan, and polyethylene glycol [1-3].

Quality considerations for ideal microcapsules include roundness, smallness, durability, and uniformity of size [4]. However, a number of factors currently prevent microencapsulation from reaching clinical reality [5-6]. Some of these obstacles are as follows.
1. The polymer membrane does not have good mechanical strength, and hence cannot maintain the long-term stability of microencapsulated cells.
2. The encapsulated cells or tissues have short-term viability or function because of the polymer solution concentration, the gelation reaction time, and the variety and dosage of additives.
3. The membrane permeability of the microcapsule is influenced by the mechanical strength, material biocompatibility, chemical composition, and the thickness and number of layers of the membrane.
4. The cleavage products of the microencapsulation membrane are unknown, and so are their effects on the host. Therefore, the transplantation of xenogeneic cells or tissues is restricted in clinical therapy.
5. Unknown viruses in a xenogeneic cell or tissue will invade the host body after the polymer membrane of the microencapsulated cell decomposes.
Since we do not know the influence of such viruses on the host, the transplantation of xenogeneic cells is restricted.
6. Microencapsulated cells or tissues cannot maintain the long-term survival and function of the transplanted cells or tissues.
7. The size of the membrane-wall apertures cannot be controlled well, so many capsules do not have an adequate nutrient supply.


8. These microcapsules usually swell by absorbing water in the host in vivo, and their volume is usually too large.

Ⅱ. Metallic Micro-encapsulation
This paper presents a novel method of microencapsulating cells or tissues in a metal foil. The metallic microcapsule is based on a biocompatible metal or alloy, such as titanium, gold, or iron. It can also be coated with titanium or titanium alloys.

The microcapsule has the structure of a hollow metallic sphere made up of two bowl-like hemispheres. Numerous apertures are drilled in the metallic hemispheres, with diameters that allow an adequate nutrient (glucose, oxygen) supply and metabolite efflux, allowing the therapeutic agent to be released into the bloodstream while excluding potentially destructive immune cells and antibodies.

Metallic micro-encapsulation will overcome the above-mentioned obstacles and will promote allogeneic-cell, and especially xenogeneic-cell, transplantation for human patients.

Ⅲ. Materials and Methods
A. Materials
• Iron coated or covered with a thin layer of titanium alloy.

B. Methods
The microcapsule has the structure of a hollow metallic sphere made up of two bowl-like hemispheres. The hemispherical metal or alloy foils are made by precision casting and are plated with a membrane to prevent oxidation. The outside radius of the sphere is generally between 0.05 mm and 5 mm. The thickness of the membrane wall of the sphere is generally between 0.0001 mm and 0.5 mm. The sphere contains separated and purified cells or tissues for transplantation.
Apertures in the metallic membrane, made by laser drilling, allow an adequate nutrient (glucose/oxygen) supply and metabolite efflux while allowing outward diffusion of the therapeutic agent and preventing immunoglobulin (Ig) from contacting the host cells. The apertures in the membrane of the sphere are typically between 0.00005 mm and 0.01 mm.

The two metallic hemispheres with apertures, into which active cells or tissues for transplantation can be put, are integrated by jointing, conglutination, or tight assembly (Figures 1-3).

Many metals and alloys are widely applied clinically because of their good biocompatibility and innocuity to the host. Titanium in particular has not only good biocompatibility and innocuity but also good corrosion resistance, lower density, a smaller coefficient of thermal expansion, and a higher melting point, and is used especially in the transplantation of artificial teeth.

The candidate materials for metallic microcapsules are as follows.
• Titanium or titanium alloy;
• Gold, or gold coated or covered with a thin layer of titanium;
• Gold, or gold coated or covered with a thin layer of titanium alloy;
• Silver, or silver coated or covered with a thin layer of titanium;
• Silver, or silver coated or covered with a thin layer of titanium alloy;
• Iron coated or covered with a thin layer of titanium;
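The dimensional ranges above (sphere radius 0.05-5 mm, wall thickness 0.0001-0.5 mm, aperture diameter 0.00005-0.01 mm) can be captured in a small validation sketch. The class and field names below are illustrative and not from the paper; this is only a sanity check on the stated bounds.

```python
# Hypothetical sanity check for the capsule dimensions stated in the paper.
# All lengths in millimetres; names are illustrative, not from the source.
from dataclasses import dataclass

@dataclass
class CapsuleSpec:
    outer_radius_mm: float   # stated range: 0.05 .. 5
    wall_mm: float           # stated range: 0.0001 .. 0.5
    aperture_mm: float       # stated range: 0.00005 .. 0.01

    def is_valid(self) -> bool:
        in_range = (0.05 <= self.outer_radius_mm <= 5
                    and 0.0001 <= self.wall_mm <= 0.5
                    and 0.00005 <= self.aperture_mm <= 0.01)
        # The wall and apertures must also fit the sphere itself.
        return (in_range
                and self.wall_mm < self.outer_radius_mm
                and self.aperture_mm < self.outer_radius_mm)

spec = CapsuleSpec(outer_radius_mm=0.5, wall_mm=0.01, aperture_mm=0.001)
print(spec.is_valid())  # a mid-range capsule within all stated bounds
```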


Proceedings of International Conference on <strong>Medical</strong> <strong>Imaging</strong> <strong>and</strong> <strong>Telemedicine</strong> (MIT 2005)August 16-19, Wuyi Mountain, Chinain host vivo. The size of membrane wall aperturecould be controlled well by technology of laserstiletto, so unknown virus in xenogeneic cell ortissue will not invade host body <strong>and</strong> cells or tissuesof grafts will gain adequate nutrient supply. Themetallic microcapsules can maintain a long-termstability in vivo <strong>and</strong> prolong survival of thefunctional tissues or cells.B. Obstacles of metallic microcapsuleFigure1: There are two hemispheres which basedon biocompatible metals or alloys have the samesize. Some cells or tissues for transplantation whathas been separated <strong>and</strong> purified from transplantedapparatus are putted into the hemisphere (fig.1 (a))<strong>and</strong> integrative (fig.1 (b)) by laser spot weldingafter purgation <strong>and</strong> cure of said hemisphericalsurface.Figure 2: There are two hemispheres which basedon biocompatible metals or alloys. Onehemisphere is bigger than another. Some cells ortissues for transplantation what has been separated<strong>and</strong> purified from transplanted apparatus are puttedinto the hemisphere (fig.2 (a)) <strong>and</strong> integrative(fig.2 (b)) by tight assembly after purgation <strong>and</strong>cure of said hemispherical surface.Figure 3: There are two hemispheres which basedon biocompatible metals or alloys. Onehemisphere is bigger than another. Some cells ortissues for transplantation what has been separated<strong>and</strong> purified from transplanted apparatus are puttedinto the hemisphere (fig.2 (a), (b)) <strong>and</strong> integrative(fig.2 (c)) by pins after purgation <strong>and</strong> cure of saidhemispherical surface.Ⅳ. DiscussionA. 
Features <strong>and</strong> advantages of metallicmicrocapsuleThe microcapsules of cells or tissues fortransplantation according to the paper are based onbiocompatible <strong>and</strong> innocuous metal or alloy; hencecannot swell because of sopping up the water bothin vivo <strong>and</strong> in vitro. Machine intensity of metallicmembrane is better than current membranepolymer membrane <strong>and</strong> never be decompoundedA number of obstacles currently preventmetallic microcapsules from development. Thefirst obstacle is preparation of metallic membrane.The microcapsules will be transplanted into hostbody by surgery in order to remedy some diseasethat cannot be healed by conventional drugs orinstead of some apparatus. Diabetes <strong>and</strong>necrotizing nephrosis are examples of this methodof treatment. That dem<strong>and</strong> large numbers ofmetallic microcapsules <strong>and</strong> smeller size ofmicrocapsules. Capsules of approximatelyhundreds in diameter with a several decade wallthickness were formed. Machining methods atpresent cannot process metals to satisfy thatrequirements. The second obstacle is low-levelefficiency of stiletto in metal wall. Despite thedevelopment of laser technology enable micronprecision machining, the low-level efficiency willlimit a batch production of metallic microcapsules.The third obstacle is the encapsulation of cells ortissues <strong>and</strong> detects the survival of cell aftermicroencapsulation. At last, most researchers don’tpay attention to the size of cells or tissues <strong>and</strong> thesize of Ig. A lot of unknown factors wait for us tofind.REFERENCE1. Chang, Thomas Ming Swi;Pharmaceutical<strong>and</strong> therapeutic applications of artificial cellsincluding microencapsulation ; EuropeanJournal of Pharmaceutics <strong>and</strong>Biopharmaceutics ; Volume: 45, Issue: 1,January, 1998, pp. 3-8;2. Marc R. Garfinkel, M.D. Robert C. 
Harland, M.D., and Emmanuel C. Opara, Ph.D. Optimization of the microencapsulated islet for transplantation. Journal of Surgical Research, 76, 7-10 (1998).
3. Robertson RP, Davis C, Larsen J, et al. Pancreas and islet transplantation for patients with diabetes. Diabetes Care, 2000, 23(1): 112.
4. Wolters, G. H. J., Fritschy, W. M., Gerrits, D., and van Schilfgaarde, R. A versatile


alginate droplet generator applicable for microencapsulation of pancreatic islets. J. Appl. Biomater., 3: 281, 1992.
5. Brunicardi, F. C., and Mullen, Y. Issues in clinical islet transplantation. Pancreas, 9: 281, 1994.
6. Bartlett, S. T., Dirden, B., Schweitzer, E. J., Sheffield, C., and Hadley, G. D. Prevention of recurrent autoimmune islet transplant destruction by simultaneous kidney transplantation. Transplant. Proc., 26: 737, 1994.


Research on the Dynamics and Correlation of Hormone Secretion in Breast Tumors

Chen Chengqi
Xiamen First People's Hospital, Xiamen 361003

Abstract
Objective: Based on a comparison of endocrine hormones between patients with mammary cancer and those with hyperplasia of the mammary glands, a preliminary analysis of the interaction between endocrine hormones and the immune system was conducted.
Methods: The experiment involved 50 cases each of mammary cancer and hyperplasia of the mammary glands. Blood samples were taken from pre-menopausal and menopausal patients; six kinds of hypophyseal hormones (PRL, GH, TSH, ACTH, FSH and LH) and three kinds of sex hormones (E2, P and T) were subjected to RIA tests.
Results: A Wilcoxon rank-sum assay with normal approximation indicated that the FSH level before menopause and the ACTH level during menopause in patients with mammary cancer were higher than those of patients suffering from hyperplasia of the mammary glands.
Conclusion: The statistics show that the normal rhythm between the endocrine hormones and the immune system is disrupted in mammary cancer patients and that the feedback mechanism of the hypothalamo-hypophyseal-adrenal system is maladjusted, resulting in inhibition of immune function. Female hormones induce gene mutation, the sensitivity of the cells is increased, and the hyperplasia of cancer cells is significantly accelerated.

Keywords: Mammary Cancer, Endocrine Hormone

The cause and pathogenic mechanism of mammary cancer and hyperplasia of the mammary glands are closely related to endocrine dysfunction.
An analysis was conducted of the endocrine changes in 100 patients diagnosed, on pathological and endocrine examination, as suffering from mammary cancer or hyperplasia of the mammary glands. These 100 cases were drawn from patients who received medical treatment at my hospital during 1999-2000. This analysis of the possible cause and pathogenic mechanism of mammary cancer and hyperplasia of the mammary glands was intended as a theoretical exploration of the interaction between endocrine hormones and the immune system.

1. Materials & Methodology
1.1 Case Data
Of the patients diagnosed during 1999-2000 at my hospital as suffering from mammary cancer or hyperplasia of the mammary glands on pathological and endocrine examination, 50 pretreatment cases of each condition, comprising 28 pre-menopausal and 22 menopausal cases, were picked and subjected to statistical analysis of endocrine changes. All of the patients with mammary cancer were women, the oldest 67 years old, the youngest 22, with an average age of 46.3±9.9; all of the patients with hyperplasia of the mammary glands were women, the oldest 67, the youngest 23, with an average age of 45.4±9.9.

1.2 Examination Methods
All patients were examined at my hospital's RIA Center. The examination reagents were supplied with the ACS:180SE chemiluminescence analyzer manufactured by the US-based Chiron Corporation. All blood samples were taken in the pre-menopausal or menopausal period.
Six kinds of hypophyseal hormones were tested, namely prolactin (PRL, µg/L), somatotropin (GH, µg/L), thyroid-stimulating hormone (TSH, IU/L), adrenocorticotropic hormone (ACTH, ng/L), follicle-stimulating hormone (FSH, IU/L), and luteinizing hormone (LH, IU/L); and three kinds of sex hormones, namely estradiol (E2, µg/L), progestin (P, µg/L), and testosterone (T, µg/L).

1.3 Statistical Processing


Rank-sum tests with normal approximation (Wilcoxon) were used to process the statistics.

2. Results
Summary of the results: both groups comprised 50 cases each. Examination objects: mammary cancer (T1, 50 cases) and hyperplasia of the mammary glands (T2, 50 cases); pre-menopausal period (N1 group), 28 cases each (N1=28), and menopausal period (N2 group), 22 cases each (N2=22). Patients in these two periods were examined to determine the differences in hypophyseal and sex hormones. Rank-sum tests and normal approximation methods were used to process the statistics; see Tables 1 and 2 for detailed results. The FSH concentration in the plasma of T1 patients in the pre-menopausal period (N1 group) was much higher than that of T2 patients (P<0.01). The ACTH concentration of T1 patients in the menopausal period (N2 group) was much higher than that of T2 patients (P<0.01). The remaining hormones showed no significant differences (P>0.05).

Table 1: Comparison of endocrine hormones (rank sums) in mammary cancer (T1) and hyperplasia of mammary glands (T2), pre-menopausal period (N1=28). Thresholds: U≥1.96 gives P≤0.05; U≥2.58 gives P≤0.01.

            E2    FSH   LH    P     T     PRL   ACTH  GH    TSH
T1          814   940   860   802   737   778   867   742   702
T2          726   600   680   738   803   763   673   798   838
U           0.50  2.62  1.27  0.29  0.78  0.09  1.39  0.70  1.37
Difference  No    Yes   No    No    No    No    No    No    No

Table 2: Comparison of mammary cancer (T1) and hyperplasia of mammary glands (T2), menopausal period (N2=22), same thresholds.

            E2    FSH   LH    P     T     PRL   ACTH  GH    TSH
T1          477   509   504   516   538   517   613   489   512
T2          558   526   531   519   479   518   404   546   523
U           0.65  0.06  0.03  0.22  0.72  0.24  2.83  0.37  0.12
Difference  No    No    No    No    No    No    Yes   No    No

3.
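The rank-sum comparison with normal approximation used for Tables 1 and 2 can be sketched as follows. This is a generic Wilcoxon/Mann-Whitney rank-sum implementation, not the authors' own code, and the sample data are purely illustrative.

```python
# Sketch of a rank-sum test with normal approximation, as used in the tables.
# The two sample lists below are illustrative only.
import math

def rank_sum_z(x, y):
    """Normal-approximation statistic comparing independent samples x and y."""
    combined = sorted((v, g) for g, vals in ((0, x), (1, y)) for v in vals)
    # Assign ranks, averaging over ties.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2  # average of positions i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(ranks[k] for k, (_, g) in enumerate(combined) if g == 0)
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (w - mean) / sd

z = rank_sum_z([5.1, 6.3, 7.8, 9.2], [3.0, 4.2, 4.8, 5.5])  # z ≈ 2.02
# |z| >= 1.96 gives P <= 0.05; |z| >= 2.58 gives P <= 0.01, as in the tables.
```

The statistic is the standardized rank sum of the first group; the paper's U values are read against the same 1.96 and 2.58 cut-offs.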
Discussion
3.1 The FSH and ACTH levels of the patients with mammary cancer in this group increased to such an extent that the normal rhythm between the endocrine hormones and the immune system was disrupted. The analysis indicated that the rise of the FSH and ACTH levels could affect the hypothalamo-hypophyseal-gonad (HPO) and hypothalamo-hypophyseal-adrenal (HPA) systems; that the maladjusted HPA might result in the inhibition of immune function; and that the dysfunction of the HPA and GH hormone systems could also act on the HPO, with the estrogen driving cells susceptible to cancer into clonal selection and gene mutation, and the androgen stimulating the excessive hyperplasia and multiplication of sensitive cancer cells.

3.2 Statistical Results
The rise of FSH in patients with mammary cancer (T1) in the pre-menopausal period (N1 group) was likely a result of ovarian dysfunction, irregular menstruation, the disrupted balance between estrogens and progestogens, and the effect of high-concentration E2 negative feedback on the HPO. High-concentration progesterone negative feedback affected the HPA, throwing the short and long feedback loops of ACTH and the cortisol hormones into disorder and causing ACTH to stimulate increased secretion of the GH hormones; high-concentration GH hormones could in turn stimulate the gonadal hormones to release GnRH, resulting in


increased secretion of the FSH and LH hormones. The slow clearance and limited variation of the plasma FSH concentration, and its high long-term reserve level, were probably related to the decrease of follicular inhibins [1]. The FSH increased its capability of storing and releasing estrogens and indirectly stimulated estrogen secretion. Persistently high levels of estrogens could cause the HPA and GH hormone systems to lose their balance, resulting in a decrease in the binding of the sex hormones in the body to sex-hormone-binding globulin (SHBG) as well as increases in the biological availability of T and E; the balance of T and E freed from SHBG was inclined towards androgens [2,3], which stimulated the excessive hyperplasia of sensitive cells, resulting in the rapid acceleration and expansion of cancer cells.

3.3 The statistical results for the endocrine hormones (TSH, LH, GH, PRL, T, E2 and P) of the pre-menopausal period (N1 group) and the menopausal period (N2 group) showed no major differences between patients with mammary cancer (T1) and patients with hyperplasia of the mammary glands (T2). This is probably related to the interaction among the hormones of the TSH, HPO and HPA axes, which plunged the GH axis (hormones and genes) into disorder. The disrupted balance of the HPA axial system caused dopamine to stimulate the growth-stimulating hormones to release rhGH and secrete GH hormones (see Fig. 1).
The function and secretion of the GH hormones were regulated by the growth-inhibiting hormone somatostatin (S.S) and by IGF, with the dopamine-rhGH-GH, FSH-E2, and S.S-IGF hormones stimulating protein synthesis in the normal body, improving immune function and preserving the lean tissue mass; high or ultra-high concentrations of GH stimulated the growth of the tumor. The rise of GH stimulated the HPA axis to release noradrenalin and secrete S.S, thereby enabling S.S to act by inhibiting GH release and exerting an indirect impact on gastrointestinal hormones such as IGF. IGF-1 and the IGF receptor (IGF-IR) regulated body metabolism through this medium. One possible mechanism of the over-expression of the IGF-IR could be the mutation of the cancer-inhibiting gene P53 and the loss of inhibition of the IGF-IR gene. The IGF-1 in the serum of Phase I and II mammary cancer was 25% higher than that of the control samples in the corresponding age group, and the IGF-II levels in the stroma of infiltrating mammary cancer exceeded normal levels by 50% [4]. The IGF-IR signal channel was open, stimulating the hyperplasia of cells, inhibiting apoptosis, and creating the pathological hyperplasia of mammary cancer, which was probably related to the increased expression of the IGF-IR resulting from the regulation of the PS2 gene by estrogen and from the interaction among the H-ras initiating gene (P21), the c-myc transcription and regulation gene, and epidermal growth factor, which caused the excessive splitting, hyperplasia and reproduction of cells.
The relevant literature [1, 5, 6, 7] shows that GH is the trophic hormone of IGF; that GH directly stimulates cell differentiation through the local production of IGF-I and indirectly stimulates the proliferation of clonal selection; and that IGF-I and E2 interact with each other, stimulating mitosis, inhibiting apoptosis and promoting development in the initial phase of mammary cancer.

3.4 The overwhelming majority of patients with mammary cancer or hyperplasia of the mammary glands in the menopausal period (N2 group) suffered from ovarian dysfunction or impairment, follicular atrophy, and fibrous tissue metaplasia (over 70%). In these patients ovarian steroid production shifted towards the adrenal cortex, causing the reticular cells to secrete a large amount of androsterone that was converted to estrogen, while the follicle-stimulating hormone (FSH) and luteinizing hormone (LH) in the body remained at high levels; therefore, the statistical results for the FSH and LH plasma concentrations of the patients showed no big difference.

3.5 The rise of the ACTH plasma concentration in patients with mammary cancer (T1) in the menopausal period (N2 group) was a possible result


of the generation of the ACTH hormone by mammary cancer cells and the stimulation of the ACTH hormone by the negative feedback of high-concentration progestin. The rise of ACTH caused the inhibition of the adrenal cortical hormones and the disorder of the HPA axis, resulting in the inhibition of the active factors of the immune cells, such as interferon, tumor necrosis factor and immune T cells and B cells. The prevailing belief [1] is that the rise of glucocorticoid, progestin and adrenocorticotropin is a reaction of the inhibited immune system. The rise of estrogen could inhibit the lymphocyte reaction; high-concentration progestin could also inhibit the immune lymphocyte reaction, which resulted in the inhibition of the immune system [1] and the induction of related cancer genes by the active estrogen. One article [2] reported that an analysis of the splitting time of mammary cancer cells throughout adult life showed that, on average, it took 10 years for the initial cells to turn into a clinical tumor, and that for tumors after menopause the time could be longer, thereby providing a theoretical basis for the diagnosis and treatment of hyperplasia of the mammary glands and of the endocrine hormones within mammary cancer in its initial stage, and for the improvement of the immune mechanism.

Acknowledgement: Special thanks to Li Shaohua, Director of the RIA Section of Xiamen First Hospital, and Zhao Jingxin, Director of the Endocrine Dept., for their assistance in my research.

References
[1] Shi Yifan. Consonance Endocrine & Metabolism Science. Beijing: Science Press, 1999, 23, 143, 220, 260, and 1757.
[2] Periris AN, Sothman MS, Ainan MS, et al.
The Relationship of Insulin to Sex Hormone-Binding Globulin: Role of Adiposity. Fertil Steril, 1989, 52: 69-72.
[3] Preziosi P, Barret-Connon E, Papoz L, et al. Interrelation between Plasma Sex Hormone-Binding Globulin and Plasma Insulin in Healthy Adult Women: The Telecom Study. J Clin Endocrinol Metab, 1993, 76: 283-287.
[4] Sandra ED. A Dominant Negative Mutant of the Insulin-like Growth Factor 1 Receptor Inhibits the Adhesion, Invasion and Metastasis of Breast Cancer. Cancer Research, 1998, 58: 3533-3361.
[5] Xu Qin, Ma Limin and Sang Jianfeng. The Effect of Growth Hormone on Transfer Tumor of Carcinoma of Colon in Mice. Journal of Practical Tumor, 2001, 16(2): 90-92.
[6] Lu Nan, Chi Zhihong and Zheng Wenyao. Growth Inhibins and Their Receptors & the Diagnosis and Treatment of Tumor. China Tumor Clinic & Recuperation, 2000, 7(5): 92-93.
[7] Liu Zhenxin and Li Ming. Insulin Growth Factor I and Its Receptor & Tumor. Summary of Medical Research, 2001, 7(1): 5-7.
[8] Basil A, Stoll MD. Pre-menopausal Weight Gain and Progression of Breast Cancer Precursors. Cancer Detection and Prevention, 1999, 23: 31-36.


Auto Data and Semantic Integration System

Huang Liqin
Fuzhou University Network Center
Fuzhou, Fujian 350002

Abstract-People can manually and quickly recognize inconsistent descriptions of the same attribute coming from different data sources, but this is very difficult for a computer. Semantic schema mapping must automatically recognize the meaning that the text describes and then map the data from the different sources to the right fields of the main database. This is one of the greatest difficulties to be overcome. This paper analyses the textual descriptions of local database fields, extracts the key words, computes their frequencies as authority parameters, and then performs semantic schema mapping using a vector space model.

1. The Concept of Semantic Integration
In an automatic information-collection system, the information sources are located in a heterogeneous, distributed environment. They generally have different data types and data operations. Every information source has a comparatively stable language environment and schema; each commonly has its own expressions and formats, which are incompatible in semantics and syntax. In order to share this information, realize interoperation and give the user a consistent overall view, we must overcome the semantic differences among the information sources through semantic integration.

Because data from different sources describe the same attribute inconsistently, it is very difficult for a computer to distinguish them, though easy for a human. Semantic integration should correctly and automatically match data from different information sources based on the semantic content of the describing words. At present there are two approaches to semantic integration.
The first is the composite approach. It provides a globally consistent semantic view of the different information sources and gives the mapping between the global semantic schema and the local semantic schemas. This approach is suited to resolving inconsistencies among semantic schemas, but when the number of information sources reaches a certain level and a further source is added or an existing one changes, maintaining the whole semantic schema becomes an enormous task. The other approach is to build a semantic federated database [1], providing the contents of several local semantic schemas and tools for sharing semantic information. The user must deal with the semantic inconsistencies among the different information sources himself. This approach is relatively simple to maintain, but the user must understand the content of every information source. This article aims to build a data warehouse based on actively collected information, through a reformed composite approach. In order to reduce the workload of mapping between the global semantic schema and the local semantic schemas, we must work out how to map between them automatically, reducing manual work as far as possible and realizing the system's automation.

2. Semantic Integration in an Automatic Data Collection System
2.1 System Framework
The automatic data-collection system aims to provide a global view for the programmer and to realize transparent access to every member website.
The goal is to realize data transparence and operation transparence. Data transparence has three aspects.
1) The physical location of a database or file is transparent.
2) The heterogeneity of each database is transparent with respect to data schema, attribute schema and output schema; the network framework, protocol and connection method are transparent within the information auto-collection system; and every member site's OS and DBMS are transparent.
3) The format conversion and integration of heterogeneous databases are transparent.

When a new data source is added or a data source changes, we must add to or modify the global view schema and the local view schemas in order to preserve transparence. Generally there are three steps in building the semantic knowledge.
1) Schema extraction: in this step we obtain the source data schema from the storage type and format of the information source data. In this article it refers to the describing information of the attribute fields and the characteristics of the content information.
2) Concept matching [2]: the integration of semantic knowledge and the different schemas depends heavily on the concept-matching result. Based on the result of schema extraction, it produces a matching table according to the field-describing information and the characteristics of the content


information. For this procedure, we could work out a feasible algorithm to realize concept matching automatically; in the initial stages manual work may be needed. Most important is to build the semantic knowledge: construct a semantic database using a synonym dictionary database and then carry out training. As the number of training runs increases, new semantic knowledge is added. The result of concept matching is an attribute matching table.
3) Data collection: according to the knowledge in the attribute matching table, the data-collection module translates the data from the different information sources and builds a consistent overall database. This procedure is an intelligent process.

In Fig. 1:
1) Semantic knowledge constructor: it builds the semantic mapping knowledge between two semantic schemas. The mapping between a local semantic schema and the global semantic schema is built on the basis of schema extraction and concept matching; the mapping knowledge is stored in the mapping table.
2) Metadata dictionary: it contains the description of each information source, such as schema, storage path, type, owner, domain and heterogeneous table names.
3) Semantic knowledge database: it contains the knowledge needed to understand the attribute fields of the global schema tables, such as synonyms, similar words, and mappings between Chinese and English words.
4) Attribute mapping table: fields with the same semantics may have different names in different member database systems. For example, in medicine information obtained from web site A the medicine label field is named "label", but in data obtained from web site B the same field is named "tag". We must define an equivalence-semantics table for the global schema attributes and the member local schema databases in order to give a consistent view of attribute and data descriptions.
The structure can be expressed as: (GDB.attrib, Description, LDB1.attrib, LDB2.attrib, ..., LDBn.attrib). GDB.attrib is the attribute presented to the user (the global-model attribute), Description is the description of that global attribute, and each LDBn.attrib is the corresponding attribute of the heterogeneous database at member web site n.

2.2 Algorithmic realization of semantic integration

The work of semantic integration can be divided into two steps. First, a semantic class model is constituted for every class of the global semantic model; it contains the semantic information of every attribute field and can express the detailed semantics of each field. Second, mapping: each attribute field of a local table is mapped to an attribute field of the global table. The main process is to build the semantic model according to the semantic classes of the global model, map the local data fields to it, and then use feedback to refine the semantic classification model.

This paper proposes a basic approach to semantic integration: a dictionary-based concept extension model. The global semantic classification model is extended so that it expresses each semantic class better and thereby achieves a more accurate mapping rate. The semantic similarity of each local field is calculated against the extended global classification model, and fields are mapped using a given threshold [3].

2.2.1 Semantic extension based on dictionaries

In a keyword-based semantic mapping system, whether a global semantic class maps to a local semantic field depends on how many keywords they have in common. Common keywords may include synonyms and similar words. Synonyms arise because different people use different expressions and words according to their requirements, environment, knowledge level and language habits; for example, "vocation" and "occupation", or "age" and "year".
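The attribute mapping table described above can be sketched as follows; the field names and helper function here are invented for illustration, not taken from the paper.

```python
# A minimal sketch (hypothetical field names) of the attribute mapping table:
# each row maps one global-model attribute to its counterpart in each member
# database, so heterogeneous field names resolve to one consistent view.

ATTRIBUTE_MAP = [
    # (global attribute, description, site A field, site B field)
    ("label", "medicine label", "label", "tag"),
    ("vocation", "patient occupation", "vocation", "occupation"),
]

def to_global(site_index: int, local_record: dict) -> dict:
    """Translate a member-site record into the global attribute names."""
    out = {}
    for row in ATTRIBUTE_MAP:
        gattr, _desc, *local_attrs = row
        local_name = local_attrs[site_index]
        if local_name in local_record:
            out[gattr] = local_record[local_name]
    return out

# Data fetched from site B names the field "tag"; the global view sees "label".
print(to_global(1, {"tag": "aspirin"}))  # {'label': 'aspirin'}
```

In this scheme a query against the global view never needs to know which member site, or which local field name, the data came from.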
Studies have found that the probability of different people using the same word to express the same familiar concept is no more than 20%. The keywords used by a local field are therefore often not the same as those of the global field even though the two fields carry the same semantics; in fact they match, because of synonymy. When expressing the same or a similar topic, different fields use different keywords, yet they may share many items at the concept level. Mapping words to the concept level therefore reflects the similarity of words much better.

Keywords can be used to express the global semantic classes, but these keywords must be semantically independent, whereas natural language is rich in expression: words are linked by many relations such as synonymy, similarity, implication and conjunction. To resolve the conflict between the independence of characteristic words and the multiplicity of natural words, we construct three dictionaries: a main dictionary, a synonym dictionary and an implication dictionary. They are used for word segmentation and word frequency statistics, and the words in the main dictionary are required to be largely independent in meaning. The structure of the dictionaries is shown in Table 1.

TABLE 1

Main dictionary
semantic class | key word  | coefficient | synonym dictionary | implication dictionary
2              | vocation  | 1           | true               | true
3              | age       | 1           | true               | false
...            | ...       | ...         | ...                | ...

Synonym dictionary
word     | synonym    | coefficient
vocation | occupation | a
vocation | work       | a
age      | year       | a
...      | ...        | ...

Implication dictionary
word     | implication | coefficient
vocation | profession  | b
vocation | employment  | b
...      | ...         | ...

Using the synonym and implication dictionaries solves the multiplicity problem of natural words and also handles cross-language semantic translation (foreign-language terms are simply treated as synonyms). In practice, similarity and conjunction dictionaries can be constructed as required to improve the mapping rate.

Suppose the keyword set of the k-th semantic class is U_k = (t_1, t_2, ..., t_n). The semantic model is mapped to the concept space model V, with the concept mapping defined as:

U(t_1, t_2, ..., t_n) → (⟨t_1⟩, ⟨t_2⟩, ..., ⟨t_n⟩, Q(t_1), Q(t_2), ..., Q(t_n), P(t_1), P(t_2), ..., P(t_n)) = V

In the expression above, Q(t_i) = (⟨c_i1, w_i1⟩, ⟨c_i2, w_i2⟩, ..., ⟨c_il, w_il⟩), where {c_i1, c_i2, ..., c_il} is the set of words that share a semantic code with t_i in the synonym dictionary and w_ij is the coefficient of c_ij; P(t_i) = (⟨d_i1, x_i1⟩, ⟨d_i2, x_i2⟩, ..., ⟨d_il, x_il⟩), where {d_i1, d_i2, ..., d_il} is the set of words that share a semantic code with t_i in the implication dictionary and x_ij is the coefficient of d_ij. The coefficient of each keyword t_i is 1; these are called the origin coefficients of the k-th semantic class. The extension coefficients are w_ij = a (0 < a < 1)


2.2.2 Similarity calculation

The semantic model of a local field can be expressed as:

U = (⟨t_1, u_1⟩, ⟨t_2, u_2⟩, ..., ⟨t_m, u_m⟩)

where t_i is a characteristic word and u_i is its coefficient. The similarity between the global semantic model and the local semantic model is:

y = Sim(U, V) = (Σ_{i=1}^{m} u_i v_i) / ( sqrt(Σ_{j=1}^{m} u_j²) · sqrt(Σ_{j=1}^{m} v_j²) )

Given a similarity threshold θ, if the similarity of a local table field is greater than θ, the field can be regarded as a semantic mapping.

2.2.3 Identification algorithm

For every class n we compute y_n, obtaining the set {y_n, n = 1, ..., N}. If y_j = max{y_n} and y_j > θ, the field belongs to class j.

Algorithm description:

Core algorithm Main()
Input: descriptions of the local table attribute fields; the main, synonym and implication dictionaries of the global table attribute fields; the similarity threshold S0.
Output: the mapping between the local tables and the global table.
Data structures:
  QJB, one per global semantic class {semantic class, word, coefficient}
  the description of each local table {local table identifier, field, description}
  word coefficients V {word, coefficient}
  similarity XSD, mapping each local table field to every semantic class
  JBBLB, the semantic class of each local table attribute field {local table identifier, field identifier, global table field}
Steps:
1. Construct the main, synonym and implication dictionaries of the global-table semantic classes by querying the Chinese dictionary; finally construct QJB {semantic class, word, coefficient}.
2. While there is another local table do
     While there is another field of the local table do
       V ← empty
       For every global semantic class do
         If QJB
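The extension, similarity and identification steps above can be sketched as follows. All data here is invented for illustration: the dictionaries, the extension coefficients a and b, and the threshold value are assumptions, not values from the paper.

```python
import math

# A sketch of dictionary-based concept extension plus thresholded cosine
# matching: each global semantic class is extended through the synonym and
# implication dictionaries, then a local field's keywords are assigned to
# the best-matching class above threshold θ.

SYNONYM = {"vocation": ["occupation", "work"], "age": ["year"]}
IMPLICATION = {"vocation": ["profession", "employment"]}
A, B = 0.8, 0.5  # illustrative extension coefficients, 0 < b <= a < 1

def extend(keywords):
    """Map a class's keyword set to the weighted concept space V."""
    v = {t: 1.0 for t in keywords}            # origin coefficients are 1
    for t in keywords:
        for c in SYNONYM.get(t, []):
            v.setdefault(c, A)
        for d in IMPLICATION.get(t, []):
            v.setdefault(d, B)
    return v

def sim(u, v):
    """Cosine similarity between two weighted term vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(local_terms, classes, theta=0.3):
    """Return the class with maximal similarity if it exceeds θ, else None."""
    u = {t: 1.0 for t in local_terms}
    scores = {name: sim(u, extend(kw)) for name, kw in classes.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > theta else None

classes = {"vocation": ["vocation"], "age": ["age"]}
print(classify(["occupation"], classes))  # vocation
```

A local field described only as "occupation" is mapped to the global class "vocation" through the synonym dictionary, even though the two fields share no keyword directly.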


Three-dimensional Skin Surface Texture Editing Based on Self-similarity

Junyu Dong, Lin Qi
Department of Computer Science, Ocean University of China, Qingdao, China 266071

Guojiang Chen
College of Teachers, University of Qingdao, Qingdao, China 266071

Abstract - Skin surfaces normally exhibit similar texture features such as wrinkles and pores. Some diseased skin also contains a number of small disordered patches with similar colors, surface roughness or shapes. In this paper, we propose a skin image editing method based on the self-similarity of skin surfaces. Unlike traditional 2D image editing tools, which only change pixel color or intensity values, the method can automatically detect and change the colors, geometry and/or reflectance of all similar areas on a skin surface while requiring little user input. The proposed method can be efficiently applied in virtual medicine, virtual treatment planning, virtual cosmetic surgery and e-cosmetic applications.

Key words: skin images, surface texture, texture editing, virtual medicine

I INTRODUCTION

Analysis and processing of skin images are very important in many research and application fields, such as medical imaging, pattern recognition, post-processing and computer game development [2, 15, 16, 17, 12]. Some tasks, e.g. virtual treatment planning, virtual cosmetic surgery and e-cosmetics, require real skin images to be processed or edited so that virtual skin images with different features, including colors, pores, wrinkles and pigments, can be generated and the virtual effects can be shown to patients or customers.
For example, doctors may wish to see the virtual appearance of diseased skin at different stages when they work on virtual treatment planning. These virtual images can be generated by editing real images.

Although various sophisticated tools that can be used for editing skin images have been developed in the past, they often require users to have considerable artistic and computer skills, which are beyond many medical workers. The editing process is often time-consuming and similar to hand drawing. It is desirable to develop new methods so that the editing process can be automated with minimal user effort.

Jiahua Wu
Wellcome Trust Sanger Institute, Cambridge CB4 1HH, United Kingdom

On the other hand, real skin surfaces comprise rough surface geometry and various reflectance properties. Their images can therefore vary dramatically with illumination. For example, Fig. 1 (a) and (b) show two skin images illuminated from two directions; the images are from the Rutgers skin texture database [2]. Traditional 2D image editing techniques thus cannot provide the information required for rendering under illumination and viewpoint conditions other than the original ones. This presents obvious limitations for high-fidelity rendering of skin surface textures in many applications.

The goal of this paper is to present inexpensive and efficient approaches for editing real skin surface textures. We exploit the self-similarity of skin surfaces to automate the editing process. These self-similarities include similar texture features such as wrinkles and pores, and, on diseased skin, small disordered patches with similar colors, surface roughness and shapes.
We adapt Brooks and Dodgson's work in [1] and perform painting and warping over 2D as well as 3D skin surface images using self-similarity. For 3D skin surfaces, we use surface height and albedo maps as the representation on which editing is performed. The output skin surface texture representations can be rendered to produce skin images under varied illumination or viewing conditions.

Fig. 1. (a) and (b) are two images of a skin surface with a wrinkle imaged under differing illumination; block arrows show the illumination directions. (c) and (d) are the relit images produced by self-similarity based painting over the skin surface height and albedo maps. The white dot shows the input location (within the wrinkle). The wrinkle depth on the skin surface is reduced by editing the height map.

A single editing operation at a given location can affect all similar areas and cause changes in all rendered images.

Although no previous work is available on editing 2D or 3D skin surfaces using self-similarity, two papers are related to the work described in this paper. The fundamental idea of our work is similar to Zelinka and Garland's surface modeling methods [9], but their approach operates on the surfaces of vertex-based 3D models instead of on 3D surface textures, and the output of their method is a 3D model while ours is a new pair of skin surface height and albedo maps. In [10], Dischler et al. mentioned that, by geometrically modeling the surface height, their method can change the orientations, sizes and randomness of "macrostructures" specified by the user over a texture bump map. It did not, however, consider surface reflectance properties [3, 13]. In contrast, our approach uses simple methods to represent skin surface geometry as well as reflectance properties and can automatically edit the whole surface with minimal user input.

The rest of this paper is organized as follows. Section 2 describes the selection of 3D skin surface representations for editing. Section 3 introduces our approach to self-similarity based editing of skin surface textures. Finally, we conclude our work in section 4.

II REPRESENTING 3D SKIN SURFACES FOR EDITING

A. Selection of Representations

Obviously, we could use the self-similarity based approach proposed in [1] for editing 2D skin images. However, an important characteristic of 3D skin surfaces is that they exhibit different appearances under varied lighting and viewing conditions.
Thus, multiple images are required to describe 3D skin surface textures [2]. It is not practical to edit the original images directly, as calculating self-similarity with the method described in [1] is difficult in a multidimensional space and may require heavy computation. We therefore need to:
1. capture images of the sample skin that enable us to extract a suitable representation of the 3D surface,
2. edit the skin surface representation to produce a new description of the required dimensions, and
3. render (or relight) the skin surface representation in accordance with a specified set of lighting and viewing conditions.
As previously discussed, our aim is to develop inexpensive techniques for editing 3D skin surfaces. This limits us to approaches that (a) use low-dimensional representations and (b) use simple or common graphics calculations such as dot products. We therefore choose surface height and albedo maps as the representation, as they are widely used in computer graphics applications for bump mapping and are compatible with almost every graphics software package. In addition, skin surface height and albedo maps can also be used for analysing skin conditions [14].

We exploit the Rutgers skin texture database (http://www.caip.rutgers.edu/rutgers_texture/) for extracting skin surface height and albedo representations. The database contains many sample skin surface textures; each skin surface is represented by multiple images captured under a variety of illumination and view conditions.

B. Generating Surface Height and Albedo Maps

We first use photometric stereo [4] to estimate the skin surface normal and albedo maps. Surface height maps are then generated using a global integration technique in Fourier space [8], which minimizes the problems associated with integration errors.
This is an efficient and non-recursive approach, but it suffers when the gradient data have been affected by shadows.

At a pixel location, the Lambertian reflectance function is expressed as

i(x, y) = λ α n ⋅ l    (1)

where:
i(x, y) is the intensity of the image pixel at position (x, y);
λ is the intensity incident on the surface;
α is the albedo value of the Lambertian reflection;
l is the unit illumination vector at position (x, y), expressed as

l = (l_x, l_y, l_z)^T = (cos τ sin σ, sin τ sin σ, cos σ)^T

where τ is the tilt angle and σ the slant angle of the illumination;
n is the normalized surface normal at position (x, y), expressed as

n = (n_x, n_y, n_z)^T = ( −p / √(p² + q² + 1), −q / √(p² + q² + 1), 1 / √(p² + q² + 1) )^T

where p and q are the partial derivatives of the surface height function h(x, y) in the x and y directions respectively.

If the intensity λ incident on the texture surface is constant, as assumed in this paper, we can treat λ as a scalar and merge it with the albedo α; to simplify notation, we use α to represent λα. According to equation (1), three non-collinear images provide a linear equation group from which the surface gradients p(x, y) and q(x, y) and the albedo map can be obtained. More images can also be used to produce the surface gradient maps by solving an over-constrained equation group.

To generate the surface height map, we first take the Fourier transform of the spatial surface gradient maps p(x, y) and q(x, y). We use P(u, v) and Q(u, v) to denote p(x, y) and q(x, y) in the frequency domain respectively, where (u, v) is the 2D spatial frequency co-ordinate.
By Fourier theory, we have the following equations:

P(u, v) = j u S(u, v)    (2)
Q(u, v) = j v S(u, v)    (3)

where S(u, v) is the frequency-domain representation of the spatial surface height map s(x, y) and j is the square root of minus one. The surface height map in the frequency domain can then be expressed as:

S(u, v) = ( −j u P(u, v) − j v Q(u, v) ) / (u² + v²)    (4)
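Equations (1)-(4) can be sketched as the following recovery pipeline. This is our own NumPy sketch with invented function names and synthetic inputs, not the authors' implementation: least-squares photometric stereo gives the albedo and gradient maps, Frankot-Chellappa integration gives the height map, and the Lambertian model (1) relights the result.

```python
import numpy as np

def photometric_stereo(images, lights):
    """images: (k, H, W) pixel intensities; lights: (k, 3) unit light vectors.
    Solves lights @ g = I per pixel, where g = albedo * n."""
    k, H, W = images.shape
    I = images.reshape(k, -1)
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, H*W) scaled normals
    albedo = np.linalg.norm(g, axis=0)
    n = g / np.maximum(albedo, 1e-12)
    p = -n[0] / n[2]                                  # n is proportional to (-p, -q, 1)
    q = -n[1] / n[2]
    return albedo.reshape(H, W), p.reshape(H, W), q.reshape(H, W)

def integrate_fc(p, q):
    """Equation (4): S(u,v) = (-juP - jvQ) / (u^2 + v^2), then inverse FFT."""
    H, W = p.shape
    U, V = np.meshgrid(np.fft.fftfreq(W) * 2 * np.pi,
                       np.fft.fftfreq(H) * 2 * np.pi)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = U**2 + V**2
    denom[0, 0] = 1.0            # avoid 0/0 at the DC term
    S = (-1j * U * P - 1j * V * Q) / denom
    S[0, 0] = 0.0                # height is recovered only up to a constant
    return np.real(np.fft.ifft2(S))

def relight(height, albedo, light):
    """Differentiate the height map and apply the Lambertian model (1)."""
    gy, gx = np.gradient(height)                      # q = dh/dy, p = dh/dx
    norm = np.sqrt(gx**2 + gy**2 + 1.0)
    n = np.stack([-gx, -gy, np.ones_like(gx)]) / norm
    return albedo * np.maximum(np.tensordot(light, n, axes=1), 0.0)
```

Because only gradients are observed, `integrate_fc` returns a zero-mean height map; the global height offset is unobservable and is fixed arbitrarily by zeroing the DC term.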


The height map h(x, y) in the spatial domain can then be generated by taking the inverse Fourier transform. To relight the surface, we may differentiate the height map to obtain gradient maps and then apply the Lambertian model (1). If skin surfaces exhibit complex reflectance, surface height and albedo maps may be generated using many other methods [7, 6]. The recovered height and albedo maps are used directly for editing and rendering.

III SKIN SURFACE TEXTURE EDITING BASED ON SELF-SIMILARITY

A. The Framework

For 2D skin images, self-similarity based editing at a particular pixel changes the colors of all pixels that exhibit similar local neighborhoods [1]. For example, we may change the colors of all the disordered patches on a diseased skin image, since these patches share certain similarities. To simplify the description of the framework, we also refer to 2D skin images as skin surface representation maps. For 3D skin surface editing, a single editing operation results in changes in all images of the sample texture rendered under different illumination and viewing conditions.

Following Brooks and Dodgson's work [1], we perform editing operations over skin surface textures in two respects, painting and warping, each of which operates on the skin surface height and albedo maps in 2D space. The editing operation comprises four steps:
(i) Select a pixel location on a sample skin surface image. The pixel location normally lies in one of the similar areas.
For example, if we wish to change the color of all disordered skin patches, the location should be in one of those patches.
(ii) Choose a proper neighborhood size and calculate a similarity map. We use the causal neighborhood as defined in [1]. The neighborhood size should be selected in accordance with the size of the similar elements, based on human visual perception. For each pixel location on the representation maps, the similarity between its neighborhood and the neighborhood around the input location is calculated using the sum of the Euclidean distances between all pixels in the two neighborhoods. Note that when editing operates in 2D (surface height and albedo) space, the neighborhoods comprise the pixel values of all representation maps at the corresponding locations. In this way we obtain a similarity map for 3D skin surface representations.
(iii) The value at each pixel location on the surface representation is replaced according to the product of the variation value at the input pixel location and the similarity value obtained in the previous step. For 2D skin images, the final result is produced after this step.
(iv) For 3D skin surfaces, the resulting representation maps are rendered at different illumination and viewing angles.

B. Painting on 3D Surface Textures

1) Painting on 2D skin images

By painting on 2D skin images we mean changing the colors of all pixels with similar neighborhoods [1]. The color painted at a pixel location (x, y) is decided according to the similarity between its neighborhood and the neighborhood around the input pixel location.
Thus, we use the following equation to calculate the result pixel at (x, y):

I′(x, y) = I(x, y) · (1 − s(x, y)) + i′ · s(x, y)

where:
I′(x, y) is the pixel value at (x, y) of the output image;
I(x, y) is the pixel value at (x, y) of the sample image;
s(x, y) is the similarity value at location (x, y) of the similarity map;
i′ is the color value that we want to paint at the input location.

In this way, pixels with larger similarity values to the input location receive painted color values closer to the input value. As in [1], we also apply a threshold strategy, which forbids pixels with similarity values smaller than the threshold from being painted. We may also paint with colors randomly selected from different pixel locations on the sample skin image instead of a fixed input color. Fig. 2 shows the results produced by painting on 2D skin images.

2) Painting on 3D skin surfaces

We define painting on 3D skin surfaces as the editing operation that changes surface geometry and reflectance; the pixel intensities or colors of all similar areas then also change when the surface is rendered under given illumination and viewpoints. The direct way of editing skin surface geometry and reflectance is to use the surface height and albedo maps as input.

In order to edit skin surface height and albedo maps, we adapt the self-similarity based editing algorithm proposed by Brooks et al. in [1] so that it takes two-dimensional vectors as input to generate a similarity map.
In addition, we may edit the surface height map or the albedo map separately. We use equation (5) to generate the new surface height and albedo maps, which can then be rendered at different illumination and viewing directions:

h″(x, y) = h(x, y) · (1 − s(x, y)) + h′ · s(x, y)
a″(x, y) = a(x, y) · (1 − s(x, y)) + a′ · s(x, y)    (5)

where:
h(x, y) and a(x, y) are the skin surface height and albedo maps at pixel location (x, y) respectively;
s(x, y) is the similarity value at location (x, y) of the similarity map;
h′ and a′ are the input height and albedo values that the user wants to apply.

Fig. 1 (c) and (d) show the results produced by self-similarity based painting over the surface height and albedo maps. The wrinkle depth is reduced by changing the heights at those pixel locations that exhibit neighborhoods similar to the one inside the wrinkle (the white dot). The neighborhood size is 3×3.
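A minimal sketch of self-similarity based painting over height and albedo maps follows. It is an assumption-laden illustration, not the paper's implementation: it uses square neighborhoods rather than the causal ones of [1], a simple distance-to-similarity normalization, and a hard threshold below which pixels are left untouched.

```python
import numpy as np

def similarity_map(maps, y, x, r=1):
    """maps: (C, H, W) stacked representation maps (e.g. height and albedo).
    Similarity is derived from the Euclidean distance between each pixel's
    (2r+1)^2 neighborhood and the neighborhood of the input pixel (y, x)."""
    C, H, W = maps.shape
    pad = np.pad(maps, ((0, 0), (r, r), (r, r)), mode="edge")
    ref = pad[:, y:y + 2 * r + 1, x:x + 2 * r + 1]
    dist = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = pad[:, i:i + 2 * r + 1, j:j + 2 * r + 1]
            dist[i, j] = np.linalg.norm(patch - ref)
    return 1.0 - dist / max(dist.max(), 1e-12)   # 1 at the input pixel

def paint(height, albedo, y, x, h_new, a_new, threshold=0.8):
    """Equation (5): blend (h', a') into all pixels whose similarity s(x,y)
    exceeds the threshold; pixels below it are left unchanged."""
    s = similarity_map(np.stack([height, albedo]), y, x)
    s = np.where(s >= threshold, s, 0.0)
    return height * (1 - s) + h_new * s, albedo * (1 - s) + a_new * s
```

Clicking on one wrinkle pit then flattens every pixel whose joint height/albedo neighborhood matches it, and the edited maps can be relit under any illumination.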


C. Similarity-based Warping of Skin Surface Textures

In [1], Brooks et al. proposed using neighborhood similarity as a measure to expand or shrink all similar areas on the sample images with a non-linear magnification technique [11]. We adapt the technique for warping both 2D and 3D skin surfaces. Warping of 2D skin images can be used to shrink similar areas such as wrinkles, pores and disordered patches on diseased skin. Warping of 3D skin surfaces operates on the skin surface height and albedo maps and can therefore produce similar effects in all images of the sample rendered under different lighting and viewing directions.

In the case of 2D skin image editing, the similarity value at each pixel location defines the degree of magnification or shrinking. A numerical algorithm described in [11] is then used to move neighboring pixels away or closer according to the magnification or shrinking operation. Warping over the surface height and albedo maps is an extension of the non-linear magnification algorithm from 1D space to multidimensional space. Thus, if we wish to make wrinkles narrower or reduce their depth on the skin surface, we may simply click on a location within a wrinkle; the editing process automatically makes all similar wrinkle areas on the surface height map narrower or shallower. By rendering the resulting height and albedo maps, we obtain the effect of smaller wrinkles. Fig. 3 (a) shows the shrinking of disordered patches on diseased skin, and (b) shows the result of shrinking a wrinkle. Fig. 4 shows the rendering results of the warped surface height and albedo maps, produced by shrinking the wrinkle area on the surface height map. Fig. 5 shows a comparison of the original and warped surface height data around the wrinkle areas.

Fig. 2 Results produced by self-similarity based painting on diseased skin. The top left is a sample image of Lichen Simplex Chronicus. The white dot shows the input pixel location. The top right is the result image produced by painting using colours randomly selected from normal areas (whose similarity values are smaller than the threshold). The bottom row shows two images produced by painting using fixed colour values but with different thresholds.

IV CONCLUSION

In this paper we introduced inexpensive methods for editing both 2D and 3D skin surface textures by exploiting their self-similarity. The 2D texture editing method proposed by Brooks et al. was adapted and applied to edit surface height and albedo maps, which are used as the representation of 3D skin surfaces. The resulting 3D skin surface textures can be rendered under varied illumination and viewing angles. The proposed methods can be used efficiently in virtual treatment planning, virtual cosmetic surgery, e-cosmetics, virtual medicine and other computer graphics applications.

Fig. 3 Results produced by shrinking 2D skin images. The first row shows the original (left) and result (right) images of Lichen Simplex Chronicus. The second row shows the wrinkle depth reduction effect of self-similarity based warping; the original is on the left and the result image on the right. White dots show the input location.

Fig. 4 Relighting results produced by shrinking a wrinkle on the skin surface.
The left column shows the original skin images with two different illumination directions (block arrows); the second column shows two images produced by shrinking the wrinkle area. White dots show the input location.


Fig. 5 Comparison of the original (left) and warped (right) surface height maps around the wrinkle areas.

REFERENCES

[1] S. Brooks and N. A. Dodgson, "Self-Similarity Based Texture Editing," ACM Transactions on Graphics, 21(3), July 2002, pp. 653-656.
[2] O. G. Cula, K. J. Dana, F. P. Murphy, and B. K. Rao, "Skin Texture Modeling," International Journal of Computer Vision, Vol. 62, No. 1-2, pp. 97-119, April-May 2005.
[3] M. L. Koudelka, S. Magda, P. N. Belhumeur and D. J. Kriegman, "Acquisition, Compression, and Synthesis of Bidirectional Texture Functions," Proceedings of the 3rd International Workshop on Texture Analysis and Synthesis, pp. 59-64, 2003.
[4] R. Woodham, "Analysing images of curved surfaces," Artificial Intelligence, 17:117-140, 1981.
[5] J. Dong and M. Chantler, "On the relations between four methods for representing 3D surface textures under multiple illumination directions," Proceedings of the 2004 International Conference on Computer and Information Technology, Sep. 2004, pp. 807-812.
[6] J. Dong and M. Chantler, "Estimating Parameters of Illumination models for the synthesis of 3D surface texture," Proceedings of the 2004 International Conference on Computer and Information Technology, Sep. 2004, pp. 716-721.
[7] G. Kay and T. Caelli, "Estimating the parameters of an illumination model using photometric stereo," Graphical Models and Image Processing, Vol. 57, No. 5, September 1995.
[8] R. T. Frankot and R. Chellappa, "A method for enforcing integrability in shape from shading algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(4), July 1988, pp. 439-451.
[9] S. Zelinka and M. Garland, "Similarity-Based Surface Modelling Using Geodesic Fans," Proceedings of the Second Eurographics Symposium on Geometry Processing (2004), Eurographics Association, pp. 209-218.
[10] J. Dischler and D. Ghazanfarpour, "Interactive Image-Based Modeling of Macrostructured Textures," IEEE Computer Graphics and Applications, 19(1), 1999, pp. 66-74.
[11] A. Keahey and E. Robertson, "Nonlinear Magnification Fields," IEEE Symposium on Information Visualization, 1997, pp. 51-58.
[12] D. Bartz et al., "Advanced Virtual Medicine: Techniques and Applications for Medicine-Oriented Computer Graphics," Eurographics 2004 Tutorial T6.
[13] J. Dong and M. Chantler, "Capture and synthesis of 3D surface texture," International Journal of Computer Vision: special issue on texture analysis and synthesis, April-May 2005, Vol. 62, Nos. 1-2, pp. 177-194.
[14] A. Matsumoto, H. Saito, and S. Ozawa, "3-D Reconstruction of Skin Surface from Photometric Stereo Images with Specular and Inter Reflections," The 4th International Conference on Control, Automation, Robotics, and Vision (ICARCV'96), Vol. 2, pp. 778-782, Singapore, Dec. 1996.
[15] S. Mukaida and H. Ando, "Extraction and Manipulation of Wrinkles and Spots for Facial Image Synthesis," Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004, p. 749.
[16] H. Shimizu, K. Uetsuki, N. Tsumura, Y. Miyake, N. Ojima, "Analyzing the effect of cosmetic essence by independent component analysis for skin color images," Proceedings of the Third International Conference on Multispectral Color Science, pp. 65-68, June 2001, Joensuu, Finland.
[17] N. Ojima, N. Tsumura, H. Shimizu, H. Nabeshima, S. Akazaki, K. Hori and Y. Miyake, "Measurement of Skin Chromophores by Independent Component Analysis and the Application to Cosmetics," Proc. IS&T PICS Conference, pp. 571-574, 2003, Rochester.


Author Index


Author    Page

A
Auffermann, W    114

B
Bagayoko, C    108
Bai, J    14, 76
Bai, Y P    85, 125
Batty, S    70
Bayford, R    40
Bergstrom, R    52
Bi, Y    143, 147
Birhane, D    45
Bradley, L    102

C
Chen, B    203
Chen, C    182, 207, 223
Chen, G    231
Chen, J C    151
Chen, S    143, 147
Chen, Y    87, 139, 156
Chen, M    128
Chueh, H S    151
Clark, J    45, 70
Cui, H    219

D
Dong, J    231
Du, F    125

F
Fatehi, M    68
Fryer, T    70
Fu, H M    151

G
Gao, X    160, 166
Gao, X L    166
Gao, X W    45, 70, 79
Galushka, M    102
Galatsanos, N    63
Geissbühler, A    2, 57, 108
Ging, B    76
Guo, W    203
Guo, X    128

H
Han, X P    32
He, J    24
He, P    79
He, X    28
Heuberger, J    57
Hintz, T    28
Huang, L    227

J
Jia, W J    24
Ju, Y    177

L
Lasebae, A    131
Lehmann, T    63
Lei, J    128
Li, G    92, 219
Li, H    172
Li, Q    20
Li, W    214
Li, X    92
Lin, L    92
Lin, P    87, 156
Lin, Q    95, 98, 194, 199, 214
Lin, Y    182
Liu, B    160, 166
Liu, C    76
Liu, Z    139, 177
Liu, Y    20
Liu, Y J    85
Liu, Y L    92
Luo, M R    8
Luo, S Q    32, 160, 166
Lu, L    203
Lu, S    219

M
Mapp, G    131
Mlynek, D    63
Müller, H    2, 57, 63, 108

P
Peng, C    128
Passmore, P    40
Patterson, D    102

Q
Qi, L (1)    231
Qi, L (2)    76

R
Ren, X    85

S
Saedi, D    68
Shaikh, F    131
Shu, H    166
Shui, H    20
Stettin, J    114

T
Tang, S    20
Tao, D    143, 147
Tasi, W K    151
Tian, Q    63

W
Wang, B H    36
Wang, B L    87, 139, 156, 177
Wang, Y    92
Wang, Y T    20
Wang, Z    172
Warncke, B    114
Wei, J    36
Wu, J    24, 231
Wu, Q    28
Wu, S    85
Wu, W    98, 194

X
Xie, X    119, 143, 147
Xu, M    166
Xu, X    87, 139, 177

Y
Yang, H    139
Yang, J    20
Yang, X    199
Yin, H    160, 166

Z
Zand, S    68
Zeng, Y    85
Zhan, M    219
Zhang, H    95
Zhang, J    36
Zhang, Y    40
Zhang, Y H    76
Zheng, H    102
Zhou, H    24
Zhou, S    20
Zhou, Y    14
Zhu, P P    160, 166
