Semester-Project

Corner Extraction in Omnidirectional Images

Autumn Term 2010
Autonomous Systems Lab
Prof. Roland Siegwart

Author: Abraham Oyoqui A.
Supervised by: Stephan Weiss, Laurent Kneip, Markus Achtelik
Contents

Abstract
Symbols
1 Introduction
  1.1 Related Research
  1.2 Wide-angle Lens Model
2 Method
  2.1 Calibration
  2.2 FAST on a Sphere
  2.3 First Results and Further Improvement
3 Results
  3.1 Comparison with the Original FAST
  3.2 Affine Transformations
  3.3 Application to a Real Image
4 Conclusions
A Comparison using a 150° angle lens
B Comparison using a 90° angle lens
Bibliography
Abstract

Omnidirectional cameras present a number of advantages that make them attractive for robotics applications: they are cheap, and they can capture a large portion of the environment in a single snapshot, which reduces computational cost. However, omnidirectional cameras introduce radial distortion, where lines that do not pass through the center of the image are strongly bent. This distortion, along with the change of resolution within the same image, degrades the performance of feature detectors when such effects are not modeled. The purpose of this project is to modify the FAST corner detector, a high-speed and reliable detector suitable for real-time applications, so that it takes the lens model into account and can accommodate the distortion in omnidirectional images. Specifically, we aim to apply the FAST algorithm on a spherical model of the image, the construction of which implicitly contains the distortion due to the lens model.
Symbols

H    Second-order moment matrix for the Harris corner detector
C    Corner response function
P    Linear projection matrix
X    Real-world coordinates
λ    Depth scale
q    Image vector on the image sensor plane
u''  Real coordinates on the image
u'   Pixel coordinates on the image
A    Rotation matrix for the affine transformation
t    Translation vector for the affine transformation
r    Radius of the circle of analysis
rc   Radius of the circle of analysis at the center of the image (3)
k    Design constant
ρ    Distance from the center of the image; equivalent to |u'|
f    Lens function; equivalent to g
g    Lens model function for the projection onto the omnidirectional image; in this report, a Taylor model
M    Number of pixels analyzed by FAST; can be less than the actual number of pixels in the image

Acronyms and Abbreviations

FAST  Features from Accelerated Segment Test
RD    Radial Distortion
SIFT  Scale-Invariant Feature Transform
Chapter 1
Introduction

The aim of this project is to modify an existing corner detector so that its performance on omnidirectional images is improved. Specifically, we will use the FAST (Features from Accelerated Segment Test) corner detector, which is very fast and performs well on images taken with cameras that closely follow the ideal rectilinear pinhole camera model. On omnidirectional images, however, its performance decays, mainly due to the radial distortion and the change in resolution as we move away from the center of the image.
A simple example is depicted in Figure 1.1: a simple pattern changes its shape and size dramatically even when the changes in perspective are small. Figure 1.2 depicts the performance of the original FAST corner detector, where only one of the four corners is detected. At the end of this report we will compare this same image, with the same parameters, against the modified FAST detector.
The next section briefly discusses the existing literature on feature detection in omnidirectional images. Afterwards, a brief summary of the theoretical background concerning the methodology and the general context of this work is given.
1.1 Related Research

Corners are a very important cue for vision systems in robotics. Features like corners or edges are interesting to vision systems because they reduce the amount of information that has to be processed, and a descriptor of the scene can be built from them. The most common approach to finding a corner or an edge consists of calculating a corner response function over the image. If this response exceeds a certain value (threshold), the pixel is considered a corner. Further discrimination can be made via a non-maximal suppression step to avoid multiple detections at the same spot.
For example, the Harris corner detector locally calculates the second-order moments of the intensity variations in the vertical and horizontal directions:
$$ H = \begin{bmatrix} \left(\frac{\partial f}{\partial x}\right)^2 & \frac{\partial f}{\partial x}\frac{\partial f}{\partial y} \\[4pt] \frac{\partial f}{\partial x}\frac{\partial f}{\partial y} & \left(\frac{\partial f}{\partial y}\right)^2 \end{bmatrix} $$
From this matrix it has been shown that the largest eigenvalues determine the directions of largest change in intensity. Ideally, an edge is an abrupt change in intensity between two uniform regions in one direction, while a corner is a change in two directions. Therefore, a pixel is considered a corner when both eigenvalues of
Figure 1.1: Depending on its position away from the center of the image, a corner gets different representations
the matrix H are large. However, to avoid calculating the eigenvalues explicitly, the following response function is used:

$$ C = \det H - k\,(\operatorname{trace} H)^2 $$
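As a concrete illustration of this response function, here is a minimal numpy sketch; the simple 3×3 box window stands in for the Gaussian smoothing that a full Harris implementation would use:

```python
import numpy as np

def box3(a):
    """Sum over each 3x3 neighbourhood (a stand-in for the Gaussian window)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    """Per-pixel response C = det(H) - k (trace H)^2, with H built from
    windowed second-order moments of the image gradients."""
    gy, gx = np.gradient(img.astype(float))
    Sxx, Syy, Sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
```

On a synthetic step pattern, the response is positive at the corner, negative along straight edges, and zero in flat regions, which is exactly the discrimination described above.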
A more extensive review of the literature on corner detection can be found in [1]. The corner detector used in this project, however, does not compute a corner response function. Instead of defining an edge or a corner mathematically, FAST searches for a corner by looking at the intensities around each candidate pixel. Around each pixel a discretized circle, usually of radius 3, is analyzed, and the pixel is considered a corner if there is a contiguous sequence of points on the circle that are brighter (or darker) than the candidate pixel plus (minus) a certain threshold. Figure 1.3 shows an example: there, the threshold was set so that only 9 points on the circle are found to be brighter than the pixel p. Under the so-called fast10 test (which requires a sequence of 10 points) this would not be a corner; under fast9 it would. This will be important later on, where we use the strictest test (fast12) to compare the algorithms.
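The segment test just described can be sketched in a few lines of Python; the 16 intensities are assumed to be given in circular order around the candidate pixel:

```python
def fast_segment_test(circle, center, threshold, n=12):
    """True if the circle intensities contain n contiguous points (with
    wrap-around) that are all brighter than center + threshold, or all
    darker than center - threshold."""
    for flags in ([v > center + threshold for v in circle],
                  [v < center - threshold for v in circle]):
        run = 0
        for f in flags + flags:  # doubling the list handles the circular wrap-around
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```

With a bright arc of exactly 9 points this reproduces the example from Figure 1.3: the test passes as fast9 (n = 9) but fails as fast10 (n = 10).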
Changes of viewpoint can be accounted for with state-of-the-art methods in which the features' appearance is made invariant to affine transformations, e.g. changes in scale and rotation. When these approaches to corner detection, or to image key-points in general, are applied to images with radial distortion (RD), however, they do not perform correctly. Some studies, such as Burschka et al. [2], choose to ignore the effects of RD, while others rectify the image prior to applying their feature detection method. Hansen et
Figure 1.2: Corners detected (green) by the original FAST algorithm on an omnidirectional image
Figure 1.3: FAST criterion for finding corners. Image from Machine learning for high-speed corner detection, E. Rosten and T. Drummond (2006)
al. [3] use a spherical representation of the image, constructed from known camera parameters. Their work modifies SIFT (Scale-Invariant Feature Transform): they “consider defining the scale-space response for wide-angle images as the convolution of the image mapped to the sphere and the solution of the (heat) diffusion equation on the sphere” [3]. In this project a similar approach was taken, in the sense that a spherical model of the images is also built from a known camera calibration.
1.2 Wide-angle Lens Model

As mentioned, omnidirectional cameras can capture a large portion of the environment, as opposed to the perspective cameras we are used to working with, where a linear mapping is described by a projection matrix P (the so-called pinhole model). However, both camera types possess a single effective viewpoint, i.e. there is a point through which all rays pass, called the projection center. This is desirable since it means that any point in the image corresponds to
only one direction in the real world, and geometrically correct perspective images can be generated [4]. A number of different wide-angle lens models exist to approximate the actual lens and/or mirror used by the camera. Some are approximate function models, such as the fish-eye transform (FET), the polynomial fish-eye transform (PFET), or the division model; others are derived projection functions, e.g. the equidistant, equisolid, and orthographic projection functions [5]. The advantage of the first type of model is that it can absorb errors in the manufacturing of the lens.
Figure 1.4: An omnidirectional image can capture a whole room in a single snapshot. The appearance of objects, however, changes as we move away from the center of the image.
In this project the Matlab toolbox from Davide Scaramuzza was used for camera calibration. As described in the documentation that accompanies the toolbox [6], the lens model is approximated by a polynomial, usually of order 4. The coefficients of this polynomial are the calibration parameters extracted with the toolbox.
According to Micusik [7], any image point can be represented by a unit vector in 3D, i.e. on a sphere. The projection equation can then be written as

$$ \lambda q = P \cdot X $$
with q a unit vector representing the image point. When using the spherical model we can assume that there always exists a vector p'' with the same direction as q which is mapped to u'' on the sensor plane (Fig. 1.5). More formally, there exist two functions g and h that map the vector u'' on the sensor plane to the vector p'': one accounts for the shape of the lens or mirror, the other for the type of projection onto the sensor plane. For perspective cameras and orthographic projection, h = 1. Instead of using two different functions, orthographic projection is assumed, and therefore only one function must be calculated:
$$ \lambda q = \lambda \begin{bmatrix} h(|u''|)\,u'' \\ g(|u''|) \end{bmatrix} = \lambda \begin{bmatrix} u'' \\ g(|u''|) \end{bmatrix} $$
Furthermore, we can account for the misalignment between the camera and sensor planes with the following affine transformation:
Figure 1.5: Mapping onto the sensor plane. Image from Micusik [7].
Figure 1.6: Mapping onto the sensor plane for a hyperbolic mirror and a fish-eye lens. Image from Micusik [7].
$$ u'' = A u' + t $$

where u' is in pixel coordinates. As mentioned, the proposed form of the lens function is a polynomial whose coefficients a0, a1, ..., aN have to be calculated; N = 4 was chosen for this project, as it was found to yield a sufficiently accurate polynomial without excessive complexity:

$$ g(|u''|) = a_0 + a_1|u''| + a_2|u''|^2 + \dots + a_N|u''|^N $$
Thus the final projection equation for omnidirectional cameras becomes:

$$ \lambda p'' = \lambda \begin{bmatrix} u'' \\ a_0 + a_1|u''| + a_2|u''|^2 + \dots + a_N|u''|^N \end{bmatrix} = P \cdot X $$
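As an illustration of this back-projection, the following Python sketch maps a pixel to its unit vector on the viewing sphere. The affine parameters A, t and the polynomial coefficients below are made-up placeholders; real values would come from the OCamCalib calibration:

```python
import numpy as np

# Hypothetical calibration values for illustration only.
A = np.eye(2)                              # affine misalignment matrix
t = np.zeros(2)                            # sensor-plane offset
coeffs = [-250.0, 0.0, 2.0e-3, 0.0, 0.0]   # a0..a4 of the Taylor polynomial

def pixel_to_sphere(u_prime):
    """Back-project pixel coordinates u' (relative to the image center)
    to a unit vector p'' ~ (u'', g(|u''|)) on the viewing sphere."""
    u = A @ np.asarray(u_prime, float) + t          # u'' = A u' + t
    rho = np.linalg.norm(u)
    g = sum(a * rho ** i for i, a in enumerate(coeffs))
    p = np.array([u[0], u[1], g])
    return p / np.linalg.norm(p)
```

With these (made-up) coefficients the center pixel maps to the sphere pole, and every pixel maps to exactly one viewing direction, which is the single-viewpoint property used throughout this report.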
Chapter 2
Method

This section discusses the method followed in developing the algorithm. The final version of the modified FAST algorithm was developed in three main parts. First, the calibration using Scaramuzza's toolbox yields the lens model; then, again using functions from this toolbox, a spherical model is obtained. The second part covers the application of the FAST idea to the spherical model, and the third the adjustments and further modifications that produce a better algorithm.
2.1 Calibration

As illustrated in Figure 2.1, the calibration is done by taking several pictures of a pattern, usually a checkerboard, from different views. Both extrinsic and intrinsic parameters are determined; we are interested in the intrinsic ones, i.e. the coefficients of the nth-order polynomial. We can also obtain the true center of the image, which is helpful later on when we deal with the change in resolution.
Figure 2.1: Calibration of an omnidirectional camera. By taking various images of a checkerboard pattern we can obtain the lens model of the camera as an nth-order polynomial. The image on the right is a “physical” representation of the lens, assuming orthographic projection from the lens to the image sensor plane.
As mentioned in the previous section, an omnidirectional image can be represented on a spherical model. This follows from the fact that cameras with a single viewpoint map a pixel in the image to a unique direction in the real world and therefore to
a unique position on a sphere. The idea here is that the circle of analysis for each image pixel in FAST will now be applied to the 3D representation of that pixel. In other words, instead of sliding a circle across the image, it will slide on a sphere, where straight lines in the real world have not yet been bent by the projection onto the planar image.
2.2 FAST on a Sphere

Once we have the spherical model of the image, we can calculate a circle around each pixel in 3D. This circle can be thought of as the intersection of the sphere and a cone starting at the center of the sphere. Since the sphere is unitary, the only parameter to be designed is the radius of this circle, or equivalently the angle of the cone. To be consistent with the original FAST, the radius was chosen to be 3. While the radius remains constant, its projection onto the image varies depending on where the circle lies on the sphere. Figure 2.2 depicts this projection: circles located near the z axis of the sphere (i.e. the center of the image) project as near-perfect circles on the image and are therefore equivalent to the discretized circle of the original FAST. More interesting for us, however, are the circles in the regions away from the center of the image, where the change in shape accounts for the line-bending effect in the outer regions of the image.
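A sketch of how such a circle of analysis could be generated on the unit sphere is given below; the basis construction is one standard choice among several, and projecting the points back to the image would additionally use the lens model:

```python
import numpy as np

def circle_on_sphere(p, theta, n=16):
    """n points of the intersection of the unit sphere with a cone of
    half-angle theta whose axis is the candidate pixel's ray p."""
    p = np.asarray(p, float) / np.linalg.norm(p)
    # Build an orthonormal basis {e1, e2} of the plane tangent to p.
    a = np.array([1.0, 0.0, 0.0]) if abs(p[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(p, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(p, e1)
    phis = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return [np.cos(theta) * p + np.sin(theta) * (np.cos(f) * e1 + np.sin(f) * e2)
            for f in phis]
```

All generated points lie on the unit sphere at a constant angle theta from the ray, which is exactly the cone-sphere intersection described above.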
Figure 2.2: Projection of circles on the sphere onto the image plane
Figures 2.3 and 2.4 show a more detailed representation of this situation. We can see that the distortion model affects the shape of these circles, and that these changes are able to deal with the line bending. Note that it is not the whole circles that are projected but only 16 of their points (Fig. 2.5). The FAST algorithm is applied to these 16 points just as it is applied to those of a discretized circle in the normal FAST. The difference here is that we no longer have discretized circles but real-valued coordinates, which introduces the need for interpolation in order to avoid aliasing. We chose bilinear interpolation, which means that for every point we need to store 4 coefficients in a look-up table.
This results in a 16×4×M matrix, M being the number of pixels in the image.
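The four coefficients stored per point are the standard bilinear interpolation weights. A minimal sketch of one look-up table entry and its use:

```python
import numpy as np

def bilinear_entry(x, y):
    """Look-up table entry for one projected circle point at real-valued
    image position (x, y): its 4 neighbouring pixels and their weights."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    pixels = [(y0, x0), (y0, x0 + 1), (y0 + 1, x0), (y0 + 1, x0 + 1)]
    weights = [(1 - dx) * (1 - dy), dx * (1 - dy), (1 - dx) * dy, dx * dy]
    return pixels, weights

def sample(img, x, y):
    """Interpolated intensity at (x, y) using the stored entry."""
    pixels, weights = bilinear_entry(x, y)
    return sum(w * img[r, c] for (r, c), w in zip(pixels, weights))
```

Precomputing these entries once per pixel is what allows the per-frame corner test to stay fast: at run time only four multiply-adds per circle point are needed.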
Figure 2.3: A projected circle near the center of the image maintains its shape, being equivalent to the original FAST
Figure 2.4: A projected circle away from the center of the image becomes an ellipse. It is this change in shape that implicitly takes the distortion model into account.
2.3 First Results and Further Improvement

As noted, the FAST algorithm is applied to these 16 points, i.e. the intensity value at each point is taken, interpolating between its 4 closest pixel coordinates. These intensity values then undergo the FAST criterion, looking for a sequence of n connected points that are all brighter or darker than the pixel around which the circle was built (plus/minus a threshold).
Figure 2.6 shows the first comparison between the original FAST and the FAST on a sphere. From now on, all the images presented are taken with the strictest version of the FAST criterion, fast12. This was done to ensure that, when using the FAST on the sphere, corners can only be detected if they actually are the result of two straight lines forming a corner: whatever passes fast12 passes every other version. In general, we can find more corners with the FAST on the sphere, but a closer look at the found corners reveals that interpolation, and not necessarily the modification through the lens model, plays a big part in finding them.
As we can see in Figure 2.7, the sampling shapes of both versions of FAST are similar. However, one of them fails to recognize the corner while the other satisfies the corner conditions because it interpolates values. In other words,
Figure 2.5: A circle is projected as an ellipse, and only 16 points are taken for the FAST algorithm. Interpolation is required for each of these points.
Figure 2.6: Left: original FAST. Right: FAST on a sphere
intensity values get pulled by neighbouring pixels, and for corners such as the one depicted, interpolation actually aids the detection. While desirable, this is not what we are looking for, since we have not yet seen the effect of the implicitly modeled distortion function on the detection of corners.
Figure 2.8 shows the result when the algorithm is applied to a pattern more suitable for evaluating corner detection. As we can observe, the performance of the modified FAST is similar to the original version. Specifically, we see the same detection issues in the outer regions of the image, where the modified algorithm should outperform the original one. A closer look at the sphere hinted that the method should be detecting the corners; however, when projecting the circle onto the image, it is clear that the circle is too small for the resolution present in that region. We had not yet taken into account the fact that resolution varies nonlinearly as we move away from the center of the image.
As a first approximation, resolution was assumed to vary linearly, so the radius of the circle would also grow linearly away from the center. The proportion between the distance from the center of the image and the
Figure 2.7: Effect of interpolation. Points in green are those considered by the original FAST, while points in red are the projected points from the circle on the sphere.
radius of the circle is then tuned to attain detection at the borders:

$$ r = r_c + k\rho $$
However, to avoid this tuning and to take the nonlinear variation of resolution into account, the lens model was used once more, this time to calculate the increase of the circle radius:

$$ r = r_c + k\,\Delta f $$

where
r = radius of the circle at distance ρ from the image center
rc = radius of the circle at the center of the image (3)
k = design constant (it could be changed, but leaving it at 1 gives good results)
Δf = f(ρ + Δρ) − f(ρ)
Δρ = 3
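These relations can be written directly in code. The lens polynomial used in the test below is a made-up placeholder standing in for a calibrated Taylor model:

```python
def circle_radius(rho, lens_f, r_c=3.0, k=1.0, delta_rho=3.0):
    """r = r_c + k * Δf, with Δf = f(rho + Δrho) - f(rho): the circle of
    analysis grows with the local slope of the lens function."""
    return r_c + k * (lens_f(rho + delta_rho) - lens_f(rho))
```

Because the slope of the lens function increases away from the center, the radius stays close to 3 near the center and grows toward the border, which is the behaviour sought here.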
Figure 2.10 shows the lens model function for the 190° camera used for all images so far. We essentially use the slope of the function to determine how big the radius of the circle should be at a given distance from the center. When evaluating this algorithm, we finally get corner detection at the edge of the image without having to tune the radius of the circle.
Figure 2.8: Comparison between the original FAST and the FAST on a sphere with a circle of constant radius. Top: original FAST. Bottom: FAST on sphere
Figure 2.9: Modified FAST with a linearly increasing radius of the circle of analysis. Top: k = 4. Bottom: k = 6
Figure 2.10: Lens function for a 190° camera
Figure 2.11: Modified FAST with a nonlinearly increasing radius of the circle of analysis
Chapter 3
Results

Now that we have derived the modifications to the FAST algorithm, we can evaluate its performance. The evaluation presented in this section is qualitative; in order to fully conclude on the performance of the FAST on a sphere with respect to the original version, a method to quantitatively measure their performance would still have to be devised.
3.1 Comparison with the Original FAST

We now show some comparisons between the original FAST and the proposed modification. Figures 3.1 and 3.2 are representative of the experiments performed with the camera positioned in such a way that affine transformations are avoided or minimal. In such circumstances, we get the same detections as the original FAST in the central regions of the image and more detections as we move away from the center. This indicates that the modified algorithm does in fact outperform the original version. Even in those perspectives where both algorithms fail to recognize corners that are clearly present, we can see at the borders, on parts of the image outside the pattern, that the FAST on a sphere detects corners better than the original algorithm.
3.2 Affine Transformations

We have already seen in Figure 3.2 that under some orientations of the corners with respect to the camera, they are not detected, and the original FAST and the FAST on a sphere perform almost the same. We now see another failure mode: when we introduce an affine transformation, the FAST on a sphere also ceases to detect at the edges of the image. Even though the modified FAST detects a couple more corners than the original, this is not conclusive enough to say that one performs better than the other. In fact, the detections at the edges are mainly due to the interpolation effect rather than to the lens distortion function.
3.3 Application to a Real Image

As a final result we show the performance on a highly distorted image featuring a more realistic setting. While the pattern used revealed some of the potential modes in which the algorithm fails, it is also desirable to test it on a more complex image. Further testing with this kind of image could be used to statistically measure the repeatability of the algorithm.
As said before, all testing with the patterns was done with fast12, the strictest criterion for corner discrimination, and a low threshold, to eliminate the possibility of missing a corner by idealizing the change of intensity at an edge or corner too much. Now that we have seen that the modified version outperforms the original one, especially at the edges, we can tune the FAST and relax the criterion in order to find the parameters that give good performance for a given application. In the case of Figures 3.6 and 3.7 these parameters were a threshold of 50 and a test relaxed to only 9 points, i.e. fast9.
In the same way, we can now revisit the image that was presented as motivation at the beginning. We can again see that, with the same parameters, the modified FAST performs better than the original, finding all corners of the rectangle.
Even though this is a qualitative comparison, it can already be seen that the modified algorithm is in general capable of detecting many more corners than the original FAST. It must still be evaluated whether this improvement is worth the added computational cost and processing time.
Figure 3.1: Comparison between the original and modified FAST. fast12, threshold = 25. Top: original FAST. Bottom: FAST on sphere
Figure 3.2: Comparison between the original and modified FAST. fast12, threshold = 25. Top: original FAST. Bottom: FAST on sphere
Figure 3.3: Comparison between the original and modified FAST in a setting with an affine transformation between camera and pattern. fast12, threshold = 25. Top: original FAST. Bottom: FAST on sphere
Figure 3.4: Comparison between the original and modified FAST in a setting with an affine transformation between camera and pattern. fast12, threshold = 25. Top: original FAST. Bottom: FAST on sphere
Figure 3.5: Comparison between the original and modified FAST in a setting with an affine transformation between camera and pattern. fast12, threshold = 25. Top: original FAST. Bottom: FAST on sphere
Figure 3.6: Original FAST
Figure 3.7: Modified FAST with nonlinearly increasing radius of circle. fast9, threshold = 50
Figure 3.8: Original FAST
Figure 3.9: Modified FAST with nonlinearly increasing radius of circle. fast9, threshold = 50
Chapter 4
Conclusions

The proposed modifications to the FAST corner detector, aimed at improving its performance on omnidirectional images, have been evaluated qualitatively and found to successfully increase the correct detection of corners. In general, the modified detector detects at least about the same number of corners as the original implementation, and when affine transformations are avoided it detects many more corners along the edge of the image. While these results are satisfactory, a quantitative measure is still needed; specifically, we would like to compare the number of correct detections between both implementations and the repeatability of the modified FAST.
The fact that the corner detector fails under affine transformations and in some orientations already indicates that repeatability might not be very high, and that the detector by itself would not be able to find the same corner in a different image. However, we saw that the FAST can be tuned to achieve good performance by increasing the threshold while relaxing the number-of-points constraint, or vice versa. Therefore, if we knew or had an approximation of the relative motion of the camera between images, we could adjust these parameters and have a higher probability of finding the same corner again. Interpolation also becomes an issue with artificial lines, i.e. areas that under a certain perspective appear so thin that they become a line; together with discretization errors, these lines are detected as corners. Since the normal FAST also suffers from discretization errors, interpolation will in general help detection as long as artificial lines are removed. If this is not possible, we could also post-process the detected corners and ensure there are not many points along the same line.
Furthermore, these modifications only make sense for lenses with high distortion, such as the one presented in this report. Appendices A and B include some images for lenses with 150° and 90° angles. Detection is basically the same for the original and modified versions, since lines are not bent as strongly and resolution changes are also small. Although performance is not increased, this is a good sign that the modifications did not affect the normal performance of the detector.
Finally, speed must still be evaluated. The very strength of the FAST corner detector is the speed that makes it useful for real-time applications. The implementation of this algorithm requires the creation of very large look-up tables for the positions of the circle points around each pixel and for the coefficients necessary for interpolation. With one look-up table containing a 2×16×M matrix for the circle point coordinates and another a 4×16×M matrix for the interpolation coefficients, a very high resolution already becomes a problem in terms of available memory. M is the number of pixels that will be examined; it can be reduced by eliminating the black regions around the image, but typically, for a 480×640 image, M is around 300,000. Hence the need to evaluate whether the resolution can be reduced enough
to allow such a look-up table to be stored in the vehicle's memory, and to evaluate whether images can then be processed in real time.
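The memory argument can be made concrete with a small back-of-the-envelope helper; the assumption of 8 bytes (double precision) per table entry is ours:

```python
def lut_bytes(width, height, bytes_per_entry=8):
    """Rough footprint of the two look-up tables: a 2x16xM matrix of
    circle-point coordinates plus a 4x16xM matrix of interpolation
    coefficients, with M = width * height pixels examined."""
    M = width * height
    return (2 * 16 + 4 * 16) * M * bytes_per_entry

def lut_megabytes(width, height, bytes_per_entry=8):
    return lut_bytes(width, height, bytes_per_entry) / 2 ** 20
```

For a full 480×640 image this comes to roughly 225 MB at double precision, which illustrates why reducing M (or the entry size) matters for an embedded platform.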
Appendix A
Comparison using a 150° angle lens

Figure A.1: Original FAST with a variable radius on a 150° lens
Figure A.2: Modified FAST with a variable radius on a 150° lens
Appendix B
Comparison using a 90° angle lens

Figure B.1: Original FAST with a variable radius on a 90° lens
Figure B.2: Modified FAST with a variable radius on a 90° lens
Bibliography

[1] M. Lourenco, J. Barreto, A. Malti: Feature Detection and Matching in Images with Radial Distortion. In IEEE International Conference on Robotics and Automation, Anchorage, Alaska, USA, 2010.
[2] D. Burschka, M. Li, R. Taylor, G. Hager: Scale-invariant registration of monocular endoscopic images to CT-scans for sinus surgery. In MICCAI (2), 2004.
[3] P. Hansen, P. Corke, W. Boles, K. Daniilidis: Scale-Invariant Features on the Sphere. In International Conference on Computer Vision, Oct. 2007.
[4] D. Scaramuzza: Omnidirectional Vision: From Calibration to Robot Motion Estimation. PhD thesis, Diss. ETH No. 17635, 2008.
[5] C. Hughes, P. Denny, E. Jones, M. Glavin: Accuracy of fish-eye lens models. Applied Optics, Vol. 49, No. 17, 2010.
[6] D. Scaramuzza: OCamCalib Toolbox. Autonomous Systems Lab. Internet: http://robotics.ethz.ch/~scaramuzza/Davide_Scaramuzza_files/Research/OcamCalib_Tutorial.htm [Aug. 2010].
[7] B. Micusik: Two View Geometry of Omnidirectional Cameras. PhD thesis, Center for Machine Perception, Czech Technical University in Prague, 2004.
[8] D. Schneider, E. Schwalbe, H.-G. Maas: Validation of geometric models for fisheye lenses. ISPRS Journal of Photogrammetry and Remote Sensing, 64 (2009), 259-266.
[9] E. Rosten, T. Drummond: Machine learning for high-speed corner detection. In European Conference on Computer Vision, 2006.