TrueView White Paper
Peakspeed TrueView™ Orthorectification
The Need for Orthorectification
All imaging sensors, including the human eye, observe a scene from a particular perspective.
This is adequate for most needs, but there are times when the observations must be presented
in a more objective, standardized format to
a) permit measuring positions and distances on the image in real-world units,
b) allow for overlaying the imagery on a map,
c) enable mosaicking with other imagery taken from different perspectives, and
d) enable comparing images against others taken at different times for change detection.
Sensor characteristics like resolution and field-of-view, the position and pointing angle of the
sensor, the topography of the area being imaged, and the curvature of the earth all conspire to
cause geometric distortions in the image. Even if the sensor were located an infinite distance
away and looking straight down on a small area of Earth, there would still be the unavoidable
problem of representing a round planet on a flat image; a mathematical transformation from
2-D Cartesian coordinates to geographic latitude and longitude must be employed.
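As a simple illustration of such a transformation, the sketch below uses an equirectangular ("plate carrée") projection on a spherical Earth. This is an assumption chosen for brevity, not the projection TrueView necessarily uses; a production system would employ a proper projection library and datum.

```python
import math

# Assumed spherical Earth; real systems use an ellipsoidal datum.
R = 6_371_000.0  # mean Earth radius in meters

def latlon_to_xy(lat_deg, lon_deg, lat0_deg=0.0):
    """Forward projection: geographic degrees -> map meters."""
    x = R * math.radians(lon_deg) * math.cos(math.radians(lat0_deg))
    y = R * math.radians(lat_deg)
    return x, y

def xy_to_latlon(x, y, lat0_deg=0.0):
    """Inverse projection: map meters -> geographic degrees."""
    lat = math.degrees(y / R)
    lon = math.degrees(x / (R * math.cos(math.radians(lat0_deg))))
    return lat, lon

# Round-trip check: project a point and invert it.
x, y = latlon_to_xy(45.0, 7.5)
lat, lon = xy_to_latlon(x, y)
```

Any invertible map projection can fill this role; the orthorectification process described below only needs the inverse direction (map coordinates to latitude/longitude).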
The Orthorectification Process
Orthorectification is the process of "warping" an input perspective aerial or satellite image into a
terrain-corrected, map-projected image. It requires
a) a digital elevation model (DEM) covering the area of interest, and
b) a sensor model that relates a pixel's image coordinates with a 3-D imaging ray for
intersecting with the DEM.
TrueView uses a sensor model expressed as ratios of polynomials, commonly referred to as
rational polynomial coefficients (RPCs), which are provided with the image.
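An RPC sensor model evaluates each image coordinate as a ratio of two cubic polynomials in normalized latitude, longitude, and height. The sketch below is a minimal illustration of that evaluation; the 20-term coefficient ordering shown is an assumption based on the common RPC00B convention, and `rpc_eval` is a hypothetical helper, not TrueView's implementation.

```python
def rpc_eval(num_coeffs, den_coeffs, lat_n, lon_n, h_n):
    """Evaluate one RPC ratio P_num(L, P, H) / P_den(L, P, H).

    lat_n, lon_n, h_n are offset/scale-normalized coordinates. The term
    ordering below follows the common 20-term cubic RPC00B convention
    (an assumption for this sketch).
    """
    L, P, H = lat_n, lon_n, h_n
    terms = [1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
             L*P*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H,
             L*L*H, P*P*H, H**3]
    num = sum(c * t for c, t in zip(num_coeffs, terms))
    den = sum(c * t for c, t in zip(den_coeffs, terms))
    return num / den

# Sanity check with trivial coefficients:
# numerator selects the L term, denominator is the constant 1,
# so the ratio should simply return lat_n.
num = [0.0, 1.0] + [0.0] * 18
den = [1.0] + [0.0] * 19
val = rpc_eval(num, den, lat_n=0.2, lon_n=0.5, h_n=0.1)
```

One such ratio is evaluated per image axis (one for line, one for sample), each with its own numerator and denominator coefficient sets.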
The process begins with selecting a map projection and output resolution (GSD) for the desired
product image. Subsequently, for each pixel in the product image, its position in the input image
is computed by the following series of calculations (refer to Fig 1):
1. A map projection is used to compute the corresponding geographic latitude and
longitude of the output pixel coordinates (u,v).
2. The digital elevation model (DEM) is referenced to interpolate an elevation at the point's
geographic location to arrive at a 3-D geographic point.
3. The resulting 3-D geographic point is applied to the sensor model equations (RPCs) to
arrive at the fractional pixel location on the input image (x, y).
Figure 1. Three steps for calculating the input image coordinates of an output map pixel.
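The three steps above can be sketched as a single per-pixel function. The three callables passed in are hypothetical stand-ins for the inverse map projection, the DEM lookup, and the RPC sensor model; the trivial lambdas in the demonstration are placeholders, not real models.

```python
def output_pixel_to_input(u, v, map_to_latlon, dem_elevation,
                          rpc_ground_to_image):
    """Map an output (map) pixel (u, v) to fractional input coords (x, y).

    The callables are hypothetical stand-ins for:
      1. the inverse map projection (output pixel -> lat/lon),
      2. DEM elevation interpolation at that location,
      3. the RPC sensor model (3-D ground point -> image point).
    """
    lat, lon = map_to_latlon(u, v)           # step 1: map projection
    h = dem_elevation(lat, lon)              # step 2: DEM lookup
    x, y = rpc_ground_to_image(lat, lon, h)  # step 3: sensor model (RPCs)
    return x, y

# Toy demonstration with trivial placeholder functions:
x, y = output_pixel_to_input(
    10, 20,
    map_to_latlon=lambda u, v: (v * 0.001, u * 0.001),
    dem_elevation=lambda lat, lon: 100.0,
    rpc_ground_to_image=lambda lat, lon, h: (lon * 1000.0, lat * 1000.0),
)
```

Because each output pixel is computed independently, this loop is embarrassingly parallel, which is what makes hardware acceleration attractive.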
Figure 2. Bilinear resampling. The (x, y) fractional coordinates previously obtained are used in
computing the interpolation weights for the surrounding four pixel "points". The normalized
weighted sum of the four input pixels becomes the value for the output pixel at (u, v).
Finally, the input pixels surrounding the computed input location (x, y) are accessed, and their
values are resampled according to an interpolation scheme to arrive at an output pixel value.
This process is repeated for every output pixel, completing the orthorectification.
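The bilinear resampling of Figure 2 can be sketched as follows; `bilinear_sample` is an illustrative helper operating on a plain row-major list of rows, not TrueView's implementation.

```python
import math

def bilinear_sample(image, x, y):
    """Bilinear resampling at fractional input coordinates (x, y).

    The four pixels surrounding (x, y) are combined using weights
    derived from the fractional offsets; the normalized weighted sum
    becomes the output pixel value, as described in Figure 2.
    """
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0          # fractional offsets in [0, 1)
    p00 = image[y0][x0]              # upper-left neighbor
    p01 = image[y0][x0 + 1]          # upper-right neighbor
    p10 = image[y0 + 1][x0]          # lower-left neighbor
    p11 = image[y0 + 1][x0 + 1]      # lower-right neighbor
    top = p00 * (1 - fx) + p01 * fx
    bot = p10 * (1 - fx) + p11 * fx
    return top * (1 - fy) + bot * fy

# Sampling at the exact midpoint of four pixels averages them.
img = [[0.0, 10.0],
       [20.0, 30.0]]
val = bilinear_sample(img, 0.5, 0.5)
```

Other interpolation schemes (nearest-neighbor, cubic convolution) slot into the same position in the pipeline, trading quality against computational cost.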
Orthorectification, Accelerated
While GPUs are typically employed to accelerate image processing, the requirements of
orthorectification have confined its implementation to the conventional CPU. The unconstrained
programmability of the FPGA makes it well suited to accelerating applications previously
confined to the CPU.
Image     Lines    Samples   Orthorectification   TrueView              Performance
                             CPU Time (s)         Processing Time (s)   Gain
Image A   23,215   22,778    1,479                2.24                  660x faster
Table 1. TrueView on a Xilinx U250 vs. orthorectification on an AWS c5.2xlarge, 30 m DEM.