
Issue 10 Volume 41 May 16, 2003


This paper presents a new QRS complex detection algorithm that can be applied in various on-line ECG processing systems. The algorithm is performed in two steps: first a wavelet transform filtering is applied to the signal, then QRS complex localization is performed using a maximum detection and peak classification algorithm. The algorithm has been tested in two phases. First, QRS detection in ECG recordings from the MIT-BIH database was performed, which led to an average detection ratio of 99.50%. Then, the algorithm was implemented in a microcontroller-driven portable Holter device.
DTIC
Electrocardiography; Wavelet Analysis
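The two-step structure described above (a band-limiting filter, then maximum detection over the filtered signal) can be sketched in a few lines. The filter below is a crude difference-of-moving-averages band-pass standing in for the paper's wavelet transform stage, and the thresholds, window lengths, and synthetic test signal are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def detect_qrs(ecg, fs=360, refractory_s=0.2):
    # Step 1: crude band-pass (difference of two moving averages),
    # standing in for the wavelet transform filtering stage.
    def smooth(x, w):
        return np.convolve(x, np.ones(w) / w, mode="same")
    band = smooth(ecg, int(0.025 * fs)) - smooth(ecg, int(0.125 * fs))
    energy = band ** 2

    # Step 2: maximum detection with a fixed threshold and a
    # refractory period so each QRS complex is counted once.
    thr = 0.3 * energy.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(energy) - 1):
        if energy[i] > thr and energy[i] >= energy[i - 1] and energy[i] > energy[i + 1]:
            if i - last >= refractory:
                peaks.append(i)
                last = i
    return peaks

# Synthetic test signal: one sharp "R peak" per second on mild noise.
fs = 360
rng = np.random.RandomState(0)
ecg = 0.02 * rng.randn(10 * fs)
ecg[fs // 2 :: fs] += 1.0          # 10 beats over 10 seconds
beats = detect_qrs(ecg, fs)
print(len(beats))                   # → 10
```

A real detector would replace the fixed threshold with the adaptive peak-classification logic the abstract mentions; the refractory period plays the same role of rejecting secondary maxima around a single beat.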

20030034655 Dicle Univ., Diyarbakir, Turkey
A New Approach for Diagnosing Epilepsy by Using Wavelet Transform and Neural Networks
Akin, M.; Arserim, M. A.; Kiymik, M. K.; Turkoglu, I.; October 25, 2001; 5 pp.; In English
Report No.(s): AD-A410462; No Copyright; Avail: CASI; A01, Hardcopy

Today, epilepsy keeps its importance as a major brain disorder. Although devices such as magnetic resonance (MR) imaging and brain tomography (BT) are used to diagnose structural disorders of the brain, for certain illnesses, especially epilepsy, EEG is routinely used in neurology clinics to observe epileptic seizures. In our study, we aimed to classify EEG signals and diagnose epileptic seizures directly by using wavelet transform and an artificial neural network model. EEG signals are separated into delta, theta, alpha, and beta spectral components by using the wavelet transform. These spectral components are applied to the inputs of the neural network. The network is then trained to give three outputs that signify the health situation of the patients.
DTIC
Neural Nets; Electroencephalography; Wavelet Analysis; Signal Processing
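A minimal sketch of the feature-extraction step described above: a Haar discrete wavelet transform (a simple stand-in, since the abstract does not name the mother wavelet) splits a signal at an assumed 64 Hz sampling rate into detail bands that roughly match the beta, alpha, theta, and delta ranges. The relative band energies would then serve as the neural network inputs.

```python
import numpy as np

def haar_step(x):
    # One level of the Haar DWT: low-pass approximation and
    # high-pass detail, each at half the sampling rate.
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def eeg_band_energies(sig, levels=4):
    """Relative energy of each detail band plus the final approximation.
    With fs = 64 Hz the details cover roughly beta (16-32 Hz),
    alpha (8-16), theta (4-8), 2-4 Hz, and the approximation holds
    the delta range."""
    feats, a = [], np.asarray(sig, float)
    for _ in range(levels):
        a, d = haar_step(a)
        feats.append(np.sum(d ** 2))
    feats.append(np.sum(a ** 2))
    feats = np.array(feats)
    return feats / feats.sum()     # normalized feature vector

fs = 64
t = np.arange(4 * fs) / fs
alpha_wave = np.sin(2 * np.pi * 10 * t)   # 10 Hz rhythm
feats = eeg_band_energies(alpha_wave)
print(feats.round(3))
```

The five normalized energies form a compact input vector; for a 10 Hz test tone the second entry (the nominal 8-16 Hz alpha band) dominates, though Haar's leaky filters spread some energy into neighboring bands.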

20030034658 Ecole Nationale Superieure des Telecommunications, Brest, France
Medical Image Indexing and Compression Based on Vector Quantization: Image Retrieval Efficiency Evaluation
Ordonez, J. R.; Cazuguel, G.; Puentes, J.; Solaiman, B.; Cauvin, J. M.; October 25, 2001; 5 pp.; In English; Original contains color illustrations
Report No.(s): AD-A410470; X5-X5; No Copyright; Avail: CASI; A01, Hardcopy

This paper addresses the problem of efficient image retrieval from a compressed image database, using information derived from the compression process. Images in the database are compressed using two approaches: Vector Quantization (VQ) and Quadtree image decomposition. Both are based on Kohonen's Self-Organizing Feature Maps (SOFM) for creating vector quantization codebooks. However, while VQ uses one codebook of a single resolution to compress the images, Quadtree decomposition simultaneously uses four codebooks of four different resolutions. Image indexing is implemented by generating a Feature Vector (FV) for each compressed image. Accordingly, images are retrieved by evaluating FV similarity between the query image and the images in the database, according to a distance measure. Three distance measures have been analyzed to assess FV index similarity: the Euclidean, Intersection, and Correlation distances. Retrieval efficiency of the distance measures is evaluated for different VQ resolutions and different Quadtree image descriptors. Experimental results using real data, esophageal ultrasound and eye angiography images, are presented.
DTIC
Vector Quantization; Angiography; Ultrasonics; Image Processing
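The retrieval step can be sketched as ranking database feature vectors by distance to the query FV. The three distances below mirror the ones the abstract names (Euclidean, intersection, correlation); the FVs here are made-up codeword-usage histograms, not features from a real SOFM codebook.

```python
import numpy as np

def euclidean(a, b):
    return np.linalg.norm(a - b)

def intersection(a, b):
    # Histogram intersection measures overlap; turn it into a
    # distance (0 for identical normalized histograms).
    return 1.0 - np.minimum(a, b).sum()

def correlation(a, b):
    # 1 - Pearson correlation: 0 for perfectly correlated FVs.
    return 1.0 - np.corrcoef(a, b)[0, 1]

def retrieve(query, database, dist):
    """Return database indices ranked by FV distance to the query."""
    return sorted(range(len(database)), key=lambda i: dist(query, database[i]))

rng = np.random.RandomState(1)
db = [rng.dirichlet(np.ones(8)) for _ in range(5)]   # toy usage histograms
query = db[2] + rng.normal(0, 0.01, 8)               # noisy copy of image 2
for d in (euclidean, intersection, correlation):
    print(d.__name__, retrieve(query, db, d)[0])
```

All three measures rank the near-duplicate first here; the paper's point is that their rankings diverge on real compressed-image FVs, which is what the efficiency evaluation compares.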

20030034661 Air Force Research Lab., Edwards AFB, CA
The Accuracy of Remapping Irregularly Spaced Velocity Data onto a Regular Grid and the Computation of Vorticity
Cohn, Richard K.; Koochesfahani, Manoochehr M.; March 15, 1999; 4 pp.; In English
Contract(s)/Grant(s): AF Proj. 3058
Report No.(s): AD-A410461; AFRL-PR-ED-TP-FY99-0060; No Copyright; Avail: CASI; A01, Hardcopy

The velocity data obtained from Molecular Tagging Velocimetry (MTV) are typically located on an irregularly spaced measurement grid. In this method of velocimetry, the flowing medium is premixed with a molecular complex that can be turned into a long-lifetime tracer upon excitation by photons. The velocity vector is determined from the displacement of small regions ‘tagged’ by a pulsed laser that are imaged at two successive times within the lifetime of the tracer. This technique may be viewed as the molecular counterpart of PIV. To take advantage of standard data processing techniques, the MTV data need to be remapped onto a regular grid with a uniform spacing. This study examined the accuracy and noise issues related to the use of various low-order polynomial least-squares fits for remapping and the subsequent computation of vorticity from these data. The information obtained has relevance to PIV data processing as well. As noted by Spedding and Rignot, the best
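A small illustration of the remapping idea, under assumptions not taken from the paper: scattered samples are fitted locally with a second-order polynomial in a least-squares sense, the fit is evaluated at each regular-grid node, and grid derivatives (the ingredients of vorticity, dv/dx - du/dy) then follow from ordinary finite differences. The field, grid, and fit radius below are invented for the demonstration.

```python
import numpy as np

def remap_poly2(xs, ys, vals, gx, gy, radius=0.3):
    """Remap scattered samples onto the regular grid (gx, gy) via a
    local second-order polynomial least-squares fit at each node."""
    out = np.empty((len(gy), len(gx)))
    for j, yg in enumerate(gy):
        for i, xg in enumerate(gx):
            dx, dy = xs - xg, ys - yg
            m = dx**2 + dy**2 < radius**2          # local neighborhood
            A = np.stack([np.ones(m.sum()), dx[m], dy[m],
                          dx[m]**2, dx[m] * dy[m], dy[m]**2], axis=1)
            coef, *_ = np.linalg.lstsq(A, vals[m], rcond=None)
            out[j, i] = coef[0]                    # fit value at the node
    return out

rng = np.random.RandomState(2)
xs = rng.uniform(0, 1, 2000)       # irregular measurement locations
ys = rng.uniform(0, 1, 2000)
u = ys**2                           # velocity component u(x, y) = y^2
gx = gy = np.linspace(0.2, 0.8, 7)
U = remap_poly2(xs, ys, u, gx, gy)

# One vorticity ingredient, du/dy, by central differences on the grid;
# the analytic answer is 2y, i.e. 1.0 at the center node (y = 0.5).
dUdy = np.gradient(U, gy, axis=0)
print(dUdy[3, 3].round(2))
```

Because the test field is itself quadratic, the second-order fit reproduces it exactly; the study's accuracy question arises precisely when the flow is not polynomial and the fit order, neighborhood size, and noise level trade off against each other.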

