

Cappozzo (1984), Heyn et al. (1996), Kidder et al. (1996), Sampath et al. (1998) and Hansen et al. (2002)]. The ability to carefully analyze and characterize a patient's manner and rate of movement (gait) has greatly aided in the development of treatment options for the physically handicapped. The requirements of very precise measurements, extended workspaces and limited physical constraints are similar to those faced in many civil engineering applications. An example of a human motion study (applied to dance) conducted using an optical (light)-based approach is shown in Figure 1.

Vision-Based Systems

Vision-based systems may be classified as either image- or light-based. Image-based systems in the context of motion tracking rely on feature detection between frames of a color texture map. In parallel with feature detection, image processing and reconstruction must be conducted, a challenging task when resolving the tracking to the level of resolution required for seismic motions.
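To make the frame-to-frame feature-detection idea concrete, the sketch below tracks a small template patch between two consecutive frames by maximizing normalized cross-correlation. This is a minimal illustration of the general technique, not the method of any particular system; the function name and the synthetic test frames are assumptions.

```python
import numpy as np

def ncc_track(prev_frame, next_frame, y, x, half=8, search=12):
    """Find where the patch centered at (y, x) in prev_frame moved to in
    next_frame, by maximizing normalized cross-correlation (NCC)."""
    patch = prev_frame[y - half:y + half + 1, x - half:x + half + 1]
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_score, best_pos = -np.inf, (y, x)
    for dy in range(-search, search + 1):          # exhaustive local search
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = next_frame[yy - half:yy + half + 1, xx - half:xx + half + 1]
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = float((patch * cand).mean())   # NCC score in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (yy, xx)
    return best_pos

# Synthetic check: shift a random frame by (2, 3) pixels and re-track.
rng = np.random.default_rng(0)
frame0 = rng.random((100, 100))
frame1 = np.roll(frame0, (2, 3), axis=(0, 1))
print(ncc_track(frame0, frame1, 50, 50))           # -> (52, 53)
```

Pixel-level search of this kind illustrates why image-based tracking struggles at seismic resolutions: sub-pixel refinement and careful calibration are required in practice.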

FIGURE 1. Example of human motion study (dance) using a light-based approach (courtesy of Motion Analysis).

Early forms of light-based tracking were based on emitter/receiver systems using arrays of light-emitting diodes (LEDs) mounted statically at specific locations in a test space and a charge-coupled device (CCD) camera responsible for recording the created reference pattern. Given the exact spacing and position of the LEDs, the relative position of the CCD can then be triangulated. More recent techniques have removed the need for emitter/receiver pairs by directly analyzing image (video) data. The most popular approach currently uses reflective markers to identify points of interest within the environment that should be tracked, significantly reducing the required processing time. In this case, a range of wavelengths of light is filtered out, decreasing the amount of information to be collected and allowing higher speeds and resolutions to be captured. The CCD cameras are then used to measure the light intensity at each pixel. A strobe constructed of a cluster of high-intensity LEDs is used on a per-camera basis to illuminate the scene with red light. This strobe can easily illuminate reflective markers at distances between 2 and 25 meters.
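As a rough illustration of why the filtering step pays off: once most wavelengths are filtered out and the strobe illuminates the retro-reflective markers, the markers appear as near-saturated blobs against a dark background, so a simple intensity threshold plus connected-component labeling isolates them. The threshold value and synthetic image below are assumptions for the sketch, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def find_marker_blobs(intensity, threshold=0.8):
    """Label bright connected regions (candidate markers) in a filtered
    intensity image; returns (label_image, blob_count)."""
    mask = intensity > threshold        # markers are near-saturated
    labels, n = ndimage.label(mask)     # default 4-connectivity labeling
    return labels, n

# Two synthetic markers on a dark background.
img = np.zeros((64, 64))
img[10:13, 20:23] = 1.0
img[40:44, 40:44] = 0.95
labels, n = find_marker_blobs(img)
print(n)                                # -> 2
```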

During the data acquisition stage, only raw data (images showing the marker positions) are acquired on a per-camera basis. Each marker is represented by multiple pixels in the final image. Before the spatial position can be calculated, the marker "blobs" have to be approximated (by ellipses or circles) and their respective centroids determined.
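A minimal sketch of this centroid step, building on the labeling sketch above: each blob's image position is taken as the intensity-weighted mean of its pixels, which yields sub-pixel accuracy. The use of scipy.ndimage here is an illustrative choice, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def blob_centroids(intensity, labels, n_blobs):
    """Intensity-weighted (row, col) centroid of each labeled marker blob."""
    return ndimage.center_of_mass(intensity, labels, range(1, n_blobs + 1))

# Continuing the example above: two blobs -> two sub-pixel centroids.
# print(blob_centroids(img, labels, n))  # -> [(11.0, 21.0), (41.5, 41.5)]
```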

The positions of the corresponding points can then be matched between image pairs obtained from two cameras and triangulated to obtain their 3D positions. The corresponding markers between two images can be found based on the epipolar constraint, which states that a point in the first image must lie on the corresponding epipolar line in the second image (this reduces the matching problem from an area to a line segment). Given the projection of a marker into one image plane at $\mathbf{p}_1$ and into another image plane at $\mathbf{p}_2$, the epipolar constraint is expressed by:

$\mathbf{p}_2^{T}\,\mathbf{F}\,\mathbf{p}_1 = 0$   (1)
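The sketch below applies Eq. (1) as a screening test (the residual $|\mathbf{p}_2^{T}\mathbf{F}\mathbf{p}_1|$ should be near zero for a true match) and then recovers the 3D position by standard linear (DLT) triangulation. In practice $\mathbf{F}$ and the projection matrices $P_1$, $P_2$ would come from camera calibration; the numeric values here are illustrative assumptions.

```python
import numpy as np

def epipolar_residual(F, p1, p2):
    """|p2^T F p1| for homogeneous pixel coords; ~0 for a true match."""
    return abs(p2 @ F @ p1)

def triangulate(P1, P2, p1, p2):
    """Linear (DLT) triangulation: stack the reprojection constraints
    into A X = 0 and take the SVD null vector as the 3D point."""
    A = np.vstack([
        p1[0] * P1[2] - P1[0],
        p1[1] * P1[2] - P1[1],
        p2[0] * P2[2] - P2[0],
        p2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]                        # dehomogenize

# Illustrative two-camera setup: identity intrinsics, camera 2 shifted in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, -0.2, 4.0, 1.0])       # known 3D point (homogeneous)
p1 = P1 @ X_true; p1 /= p1[2]                  # its projection in camera 1
p2 = P2 @ X_true; p2 /= p2[2]                  # ... and in camera 2
print(triangulate(P1, P2, p1, p2))             # -> [ 0.5 -0.2  4. ]
```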
