Research Groups - Electrical, Electronic and Computer Engineering
Project descriptions of Mr Grobler – 2010<br />
Research Groups:<br />
A1 SystemC / GHDL extension of QEMU<br />
A2 Altera Nios-II support for QEMU<br />
A3 Altera Nios-II support for LLVM<br />
A4 PIC16F887 VHDL core for the Altera DE1<br />
A5 Multi-protocol analyzer based on the Altera DE1<br />
A6 Ultrasonic ranging sensor module<br />
A7 Ultrasonic imaging system<br />
A8 Low cost Inertial Navigation System<br />
A9 Optical mouse sensor based dead-reckoning module<br />
A10 FPGA based encryption for robotic applications<br />
A11 FPGA based compression for robotic applications<br />
A12 Integration of autonomous vehicle<br />
A13 Stereo vision system for robotic vehicles<br />
T14 MCQ assessment system<br />
Research Group:<br />
T15 De-weathering / de-noising of video sequences<br />
T16 Particle flow analysis in micro-fluidic channels<br />
T17 Road segmentation and structure analysis for an autonomous rover<br />
T18 Large scale object recognition<br />
T19 3D modelling and chroma keying studio<br />
T20 Real-time object recognition using a GPU<br />
T21 Real-time video encoding/decoding using natural image statistics<br />
T22 Map-building of an office environment<br />
T23 Content based video/image retrieval system<br />
T24 High performance real-time stereo vision system<br />
T25 Visual odometry for an autonomous vehicle<br />
T26 2D-3D pose estimation<br />
T27 GPU algorithmic implementation<br />
T28 Heat shimmer modeling, characterization and correction<br />
T29 Background subtraction in a foliage environment<br />
T30 Identification of stable regions in video sequences<br />
T31 Distributed object identification and tracking<br />
1. Project number: A1<br />
2. Project title: SystemC / GHDL extension of QEMU<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
SystemC is a new IEEE standardized System Description Language similar in nature to the<br />
better known Hardware Description Languages (HDLs) such as VHDL. A reference<br />
implementation of SystemC exists as an Open Source simulation kernel packaged as a set of<br />
C++ library routines and macros. The primary advantage of the free / open SystemC standard<br />
is that it allows hardware / software systems to be described and simulated without requiring<br />
any other support software. GHDL is a VHDL analyzer and simulator based on GCC. QEMU<br />
is an Open Source processor and peripheral simulator that supports a large number of CPU<br />
variants (including X86, ARM, SPARC, MIPS, PowerPC, etc.). Recently QEMU has been<br />
used by Google as an emulator for their Android software stack for mobile devices (next<br />
generation cellphones). The purpose of this project is to add support for SystemC and<br />
GHDL to QEMU. The resultant Open Source system will allow the simulation of complete<br />
embedded systems before any hardware implementation is done. The technical challenges of<br />
this project include learning the three systems (SystemC, GHDL and QEMU) and their<br />
subsequent integration.<br />
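To illustrate the simulation-kernel concept at the heart of SystemC, the sketch below implements a toy event-driven scheduler in plain C++. This is not the SystemC API; the ToyKernel class and the clock example are invented for illustration only.<br />

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// Toy event-driven simulation kernel: a priority queue of timestamped
// actions executed in time order -- the core idea behind a SystemC-style
// scheduler (greatly simplified; no delta cycles or signal semantics).
class ToyKernel {
public:
    void schedule(uint64_t time_ns, std::function<void()> action) {
        queue_.push(Event{time_ns, seq_++, std::move(action)});
    }
    uint64_t now() const { return now_; }
    void run() {
        while (!queue_.empty()) {
            Event ev = queue_.top();
            queue_.pop();
            now_ = ev.time;   // advance simulated time
            ev.action();      // run the process; it may schedule more events
        }
    }
private:
    struct Event {
        uint64_t time;
        uint64_t seq;         // tie-breaker keeps FIFO order at equal times
        std::function<void()> action;
        bool operator>(const Event& o) const {
            return time != o.time ? time > o.time : seq > o.seq;
        }
    };
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue_;
    uint64_t now_ = 0;
    uint64_t seq_ = 0;
};

// Example process: a clock toggling every 5 ns for a fixed number of edges.
inline int count_clock_edges(int edges) {
    ToyKernel k;
    int seen = 0;
    std::function<void()> toggle = [&]() {
        ++seen;
        if (seen < edges) k.schedule(k.now() + 5, toggle);
    };
    k.schedule(0, toggle);
    k.run();
    return seen;
}
```

The integration work in this project replaces such a toy loop with the real SystemC kernel and GHDL driving time alongside QEMU's own execution loop.<br />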
8. What will be expected of the student<br />
The student will need to perform a detailed study of the QEMU simulator, the SystemC<br />
language and simulation kernel implementation, as well as the GHDL simulator. The student<br />
must perform the necessary integration of QEMU, GHDL and SystemC. The implementation<br />
must be validated by simulating a sample ARM based embedded system with non-trivial<br />
peripherals described in VHDL and SystemC. C/C++ and VHDL skills will be required.<br />
Required outcomes: Extensions to QEMU to support SystemC / GHDL integration.<br />
9. Resources<br />
Any PC with a Linux OS can be used for this project. The designated work area for this project<br />
will be one of the Linux computers in the Computer Engineering Project Lab. The Department's<br />
simulation clusters are available if necessary. All required software is freely available on the<br />
Internet.<br />
1. Project number: A2<br />
2. Project title: Altera Nios-II support for QEMU<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
The Altera Nios-II soft-core processor is a 32-bit processor that is implemented (essentially)<br />
in VHDL. The SOPC Builder (System on a Programmable Chip Builder) that forms part of<br />
the Altera FPGA design suite allows this processor to be configured and synthesized for any<br />
suitable Altera FPGA. QEMU is an Open Source processor and peripheral simulator that<br />
supports a large number of CPU variants (including X86, ARM, SPARC, MIPS, PowerPC,<br />
etc.). Recently QEMU has been used by Google as an emulator for their Android software<br />
stack for mobile devices (next generation cellphones). The purpose of this project is to<br />
implement support for the Altera Nios-II soft-core processor in QEMU. The technical<br />
challenge lies in the automatic extraction of the Nios-II processor configuration from the files<br />
generated by the SOPC Builder, in particular aspects such as floating point support and<br />
custom instructions.<br />
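To make the emulation task concrete, the sketch below shows the fetch-decode-execute loop at the core of any instruction-set simulator such as QEMU. The three-opcode machine is invented for illustration; it does not reflect the Nios-II instruction encoding.<br />

```cpp
#include <cstdint>
#include <vector>

// Toy fetch-decode-execute loop illustrating what an instruction-set
// simulator does at its core. The opcodes and their fields are invented
// for illustration; they are NOT Nios-II encodings.
enum Op : uint8_t { LOADI = 0, ADD = 1, HALT = 2 };

struct Insn {          // one (already decoded) instruction
    Op op;
    uint8_t rd, ra, rb;
    int32_t imm;
};

struct ToyCpu {
    int32_t reg[8] = {0};
    uint32_t pc = 0;

    // Run until HALT; returns the number of instructions retired.
    int run(const std::vector<Insn>& prog) {
        int retired = 0;
        for (;;) {
            const Insn& i = prog[pc++];   // fetch
            ++retired;
            switch (i.op) {               // execute
                case LOADI: reg[i.rd] = i.imm; break;
                case ADD:   reg[i.rd] = reg[i.ra] + reg[i.rb]; break;
                case HALT:  return retired;
            }
        }
    }
};
```

QEMU itself goes much further (dynamic binary translation rather than per-instruction dispatch), but the configuration-dependent decode step is where the SOPC Builder extraction in this project plugs in.<br />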
8. What will be expected of the student<br />
The student will need to perform a detailed study of the QEMU simulator and the Altera<br />
Nios-II soft-core processor. The student will also need to study the intricacies of the Altera<br />
SOPC Builder. The student must subsequently add the necessary support to QEMU for the<br />
Nios-II processor. The implementation should be validated by executing sample programs<br />
compiled by the Nios-II GCC compiler included in the Altera Nios-II EDS. The sample<br />
programs must be of a sufficiently comprehensive nature. C programming skills will be<br />
required. Required outcomes: Extensions to QEMU to support the Altera Nios-II processor.<br />
9. Resources<br />
Any Linux based PC can be used for this project. All required software is freely available on<br />
the Internet. The designated work area for this project will be one of the Linux computers in the<br />
Computer Engineering Project Lab.<br />
1. Project number: A3<br />
2. Project title: Altera Nios-II support for LLVM<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: 65%<br />
7. Brief description of the project<br />
The Altera Nios-II soft-core processor is a 32-bit processor that is implemented (essentially)<br />
in VHDL. The SOPC Builder (System on a Programmable Chip Builder) that forms part of<br />
the Altera FPGA design suite allows this processor to be configured and synthesized for any<br />
suitable Altera FPGA. The Low Level Virtual Machine (LLVM) is both a virtual machine with<br />
RISC-like instructions and a compiler infrastructure. LLVM has been used to create high<br />
performance compilers for various general purpose and specialized / embedded processors<br />
(typically FPGA based). Currently LLVM can generate static code for the following general<br />
purpose processors: X86, X86-64, PowerPC 32/64, ARM, Thumb, IA-64, Alpha, SPARC,<br />
MIPS and Cell architectures. Apple is using LLVM for several of its products, including the<br />
iPhone. LLVM has also been used to create a C-to-Verilog compiler (Verilog is a Hardware<br />
Description Language similar to VHDL). There has also been research into specialized signal<br />
processing language compilation using LLVM, as well as support for GPUs. Recently work<br />
has begun to support the Xilinx Microblaze soft-core processor and Microchip<br />
microcontrollers. The purpose of the project is to implement support for the Altera Nios-II<br />
soft-core processor in LLVM. The technical challenge lies in the detailed understanding of the<br />
LLVM compiler system, and of compilers in general, that must be gained.<br />
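To illustrate the kind of translation a compiler backend performs, the sketch below lowers an expression tree into RISC-like three-address instructions. The textual output format is invented; an actual LLVM backend works through LLVM's own target-description and instruction-selection framework rather than hand-written walks like this.<br />

```cpp
#include <memory>
#include <string>
#include <vector>

// Toy code generator: lowers an expression tree into three-address
// RISC-like instructions, the essential job of a compiler backend.
struct Expr {
    char op;                         // '+', '*', or 0 for a leaf register
    std::string leaf;                // leaf register name when op == 0
    std::shared_ptr<Expr> lhs, rhs;
};

inline std::shared_ptr<Expr> reg(std::string name) {
    return std::make_shared<Expr>(Expr{0, std::move(name), nullptr, nullptr});
}
inline std::shared_ptr<Expr> node(char op, std::shared_ptr<Expr> l,
                                  std::shared_ptr<Expr> r) {
    return std::make_shared<Expr>(Expr{op, "", std::move(l), std::move(r)});
}

// Post-order walk: emit code for the children first, then an instruction
// for this node; returns the register holding the node's value.
inline std::string lower(const std::shared_ptr<Expr>& e, int& next_tmp,
                         std::vector<std::string>& out) {
    if (e->op == 0) return e->leaf;
    std::string a = lower(e->lhs, next_tmp, out);
    std::string b = lower(e->rhs, next_tmp, out);
    std::string t = "t" + std::to_string(next_tmp++);
    out.push_back(std::string(e->op == '+' ? "add " : "mul ")
                  + t + ", " + a + ", " + b);
    return t;
}
```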
8. What will be expected of the student<br />
The student will need to perform a detailed study of LLVM and the Altera Nios-II soft-core<br />
processor. The student will also need to study the intricacies of the Altera SOPC Builder. The<br />
student must subsequently add the necessary support to LLVM for the Nios-II processor. The<br />
implementation should be validated by compiling sample programs that are cross checked<br />
with those produced by the Nios-II GCC compiler included in the Altera Nios-II EDS. The<br />
sample programs must be of a sufficiently comprehensive nature. C programming skills will<br />
be required. Required outcomes: Extensions to LLVM to support the Altera Nios-II<br />
processor.<br />
9. Resources<br />
Any Linux based PC can be used for this project. All required software is freely available on<br />
the Internet. The designated work area for this project will be one of the Linux computers in the<br />
Computer Engineering Project Lab.<br />
1. Project number: A4<br />
2. Project title: PIC16F887 VHDL core for the Altera DE1<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: Unspecified<br />
7. Brief description of the project<br />
This project aims to synthesize a complete PIC16F887 microcontroller as a VHDL soft-core<br />
processor for the Altera DE1 Development and Education board. The implemented core will<br />
include all peripherals on the device including timers, I/O ports, memory and the register file.<br />
The implementation must allow the PICkit 2<br />
Development Programmer/Debugger (used in EMK310) to perform In-Circuit Programming /<br />
Debugging of the processor via the Expansion Headers of the DE1 board. The technical<br />
challenge lies in the complexity of the VHDL that will be required.<br />
8. What will be expected of the student<br />
The student will be required to study the datasheets and revise their understanding of the<br />
device. The student must then decompose the processor into a structural description of lower<br />
level blocks and synthesize these blocks behaviorally using Altera Quartus II. The complete<br />
microcontroller must then be assembled from the building blocks and synthesized. Care must<br />
be taken to parameterize the building blocks using generics so that the building blocks can be<br />
used to create other variants of the PIC16Fx family of processors. The student must define<br />
suitable testbenches for each building block as well as the system as a whole. These must be<br />
used to perform verification of each building block and the processor as a whole using Altera<br />
ModelSim. For the demonstration the student must implement at least one of the non-trivial<br />
practicals of EMK310 and compare the implementation used in EMK310 to that of the soft-core<br />
implementation. VHDL skills required. An interest in microprocessor architectures is<br />
recommended. Required outcomes: A full VHDL description (fully commented) of a<br />
PIC16F887 microcontroller, together with simulator files to show the processor running a<br />
program.<br />
9. Resources<br />
A PC with the full version of Altera Quartus II and Altera ModelSim installed is required. The<br />
designated work area for this project will be one of the computers in the Computer Engineering<br />
Project Lab. Due to the time required for the synthesis and simulation of the complete<br />
processor, the Department's simulation clusters are also available (with Linux versions of<br />
Quartus II and ModelSim installed).<br />
1. Project number: A5<br />
2. Project title: Multi-protocol analyzer based on the Altera DE1<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: Unspecified<br />
7. Brief description of the project<br />
This project aims to develop a multi-protocol analyzer based on the Altera DE1 Development<br />
and Education board. The system must provide standard logic analyzer functionality and<br />
implement protocol decoding for the following standards: I2C, I2S, SPI, Async Serial, Sync<br />
Serial, USB, 1-Wire, PS/2, and SMBus. The technical challenge of the project lies in the real-time<br />
capture and analysis of the large number of protocols. USB in particular will prove<br />
challenging.<br />
8. What will be expected of the student<br />
The student will be required to study the various protocols to be supported. A suitable<br />
interface board for the Expansion Headers of the Altera DE1 board must be designed to allow<br />
the various bus types to be probed. The necessary firmware / software (if a soft-core processor<br />
is used) must be designed and implemented. A PC based Graphical User Interface (GUI) for<br />
the system must be developed using Qt4. The PC software must be portable to both Linux and<br />
Windows. C/C++ and VHDL skills required. Required outcomes: The hardware interface<br />
board for the Altera DE1, firmware / software for the Altera DE1 board and Linux / Windows<br />
GUI software.<br />
9. Resources<br />
A PC with the full version of Altera Quartus II and Altera ModelSim installed will be<br />
required. The designated work area for this project will be one of the computers in the Computer<br />
Engineering Project Lab. Due to the time required for the synthesis and simulation of the<br />
complete design, the Department's simulation clusters are also available (with Linux<br />
versions of Quartus II and ModelSim installed). An Altera DE1 board will be made<br />
available. The necessary components for the hardware interface board must be sourced.<br />
1. Project number: A6<br />
2. Project title: Ultrasonic ranging sensor module<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: Unspecified<br />
7. Brief description of the project<br />
For small mobile robots, determining the distance to objects is a vital sensing function. This<br />
knowledge is typically used to perform obstacle avoidance and can also be used for more<br />
general environment mapping. One of the techniques that can be used for this function is<br />
ultrasonic echolocation. This project entails the research, design and implementation of a<br />
compact, power efficient and modular ultrasonic ranging sensor for use on small robots.<br />
Power efficiency and space constraints will be the primary challenges of this project.<br />
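The basic pulse-echo calculation behind ultrasonic ranging is straightforward: the measured echo delay covers the distance to the target and back, so range is half the round trip. A minimal sketch, using the standard linear approximation for the speed of sound in air (a real module should measure temperature and compensate):<br />

```cpp
#include <cmath>

// Nominal speed of sound in air as a function of temperature (deg C);
// linear approximation, valid near room temperature.
inline double speed_of_sound_mps(double temp_c) {
    return 331.3 + 0.606 * temp_c;
}

// Pulse-echo time of flight: the burst travels out and back, so the range
// is half the round-trip distance.
inline double range_from_echo_m(double round_trip_s, double temp_c = 20.0) {
    return speed_of_sound_mps(temp_c) * round_trip_s / 2.0;
}
```

At the specified 1 cm accuracy over 2 cm - 1 m, timing resolution and temperature compensation dominate the error budget.<br />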
8. What will be expected of the student<br />
There are a number of different ultrasonic methods which are used to measure the distance to<br />
an object. Each method has pros and cons depending on the range to an object, the type of<br />
object, etc. In this project, the candidate will be required to investigate various ultrasonic<br />
ranging methods to select the best method for sensing ranges of 2 cm - 1 m with an accuracy of<br />
1 cm or better. Low power consumption and compact design are critical. The module must<br />
include a small microcontroller and interface to an RS485 compatible communication bus. The<br />
microcontroller must perform suitable control and filtering of the sensing operation to<br />
maximize accuracy. The completed module must be subjected to detailed testing and analysis.<br />
The performance of the device with regards to aspects such as accuracy, distance, sensing<br />
latency, power consumption and efficiency must be characterized by means of formally<br />
designed procedures. The performance of the module must also be compared to similar<br />
commercial implementations. Required outcomes: modular ultrasonic ranging sensor, module<br />
performance quantification data.<br />
9. Resources<br />
The designated work area for this project will be one of the computers in the Computer<br />
Engineering Project Lab. The components necessary for the sensor module must be sourced.<br />
1. Project number: A7<br />
2. Project title: Ultrasonic imaging system<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: 65%<br />
7. Brief description of the project<br />
The use of ultrasound for imaging purposes was originally proposed in the late 1920s for the<br />
detection of flaws in metals. One of the dominant current uses is medical ultrasound scanners.<br />
Research has shown the viability of using similar techniques for short range general imaging<br />
in air. The purpose of this project is to develop an ultrasonic scanning system that will allow<br />
3-D maps of the environment to be generated. The technical challenge of this project lies in<br />
the complexity of precision sensing with ultrasonic waves.<br />
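Once a range and the transducer's scan angles are known, each measurement maps to a 3-D point for the environment map. A minimal sketch of that geometric step (the angle conventions used here — azimuth in the x-y plane, elevation from it — are one common choice, not prescribed by the project):<br />

```cpp
#include <cmath>

struct Point3 { double x, y, z; };

// Convert one scan measurement (range plus the pan/tilt angles of the
// transducer) into a 3-D Cartesian point.
inline Point3 scan_to_point(double range_m, double azimuth_rad,
                            double elevation_rad) {
    double r_xy = range_m * std::cos(elevation_rad);  // projection onto x-y
    return { r_xy * std::cos(azimuth_rad),
             r_xy * std::sin(azimuth_rad),
             range_m * std::sin(elevation_rad) };
}
```

The hard part of the project is upstream of this: obtaining clean, precise range measurements in air from the ultrasonic hardware.<br />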
8. What will be expected of the student<br />
The student will need to research ultrasonic imaging approaches, specifically those suitable<br />
for imaging in air. Based on the most feasible technique, the student must propose a design<br />
for a scanning system. The system should focus primarily on the ultrasonic transducers and<br />
imaging aspects. The processing itself should be done on a PC, so the necessary interfacing<br />
must also be developed. Required outcomes: modular ultrasonic imaging system,<br />
performance quantification data.<br />
9. Resources<br />
The designated work area for this project will be one of the computers in the Computer<br />
Engineering Project Lab. The necessary components for the imaging module must be sourced.<br />
1. Project number: A8<br />
2. Project title: Low cost Inertial Navigation System<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: Unspecified<br />
7. Brief description of the project<br />
Inertial navigation has been used in aviation <strong>and</strong> space exploration for decades. Inertial<br />
navigation systems (INS) have also been applied to robotic applications. INS systems are<br />
generally very expensive and large in size because of the high precision and reliability<br />
typically required in the intended applications. In contrast, for mobile robotic applications<br />
compact low cost implementations are required. Automotive applications have resulted in<br />
significant progress in the development of position and acceleration measurement devices,<br />
largely based on Micro-Electro-Mechanical Systems (MEMS) technology. This project entails<br />
the construction and characterization of a low cost INS system that can be used on small scale<br />
robots. The technical challenge of the project lies in the minimal budget and low power<br />
constraints imposed.<br />
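The kinematic chain underlying any INS can be sketched in a few lines: integrate angular rate to attitude, rotate body-frame acceleration into the world frame, and integrate twice for position. The planar (2-D) model below is a deliberately simplified illustration; bias estimation, gravity compensation and sensor fusion are the actual project work.<br />

```cpp
#include <cmath>

// Minimal planar dead-reckoning step: gyro rate -> heading, body-frame
// acceleration rotated into the world frame, then Euler integration for
// velocity and position. Illustrative only; no bias or noise handling.
struct InsState {
    double heading = 0.0;        // rad
    double vx = 0.0, vy = 0.0;   // m/s, world frame
    double x = 0.0, y = 0.0;     // m, world frame
};

inline void ins_step(InsState& s, double gyro_z_rad_s,
                     double ax_body, double ay_body, double dt) {
    s.heading += gyro_z_rad_s * dt;              // attitude update
    double c = std::cos(s.heading), sn = std::sin(s.heading);
    double ax = c * ax_body - sn * ay_body;      // body -> world rotation
    double ay = sn * ax_body + c * ay_body;
    s.vx += ax * dt;  s.vy += ay * dt;           // velocity update
    s.x += s.vx * dt; s.y += s.vy * dt;          // position update
}
```

Because errors integrate twice, low-cost MEMS bias and noise dominate accuracy, which is why the formal characterization procedures are central to this project.<br />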
8. What will be expected of the student<br />
The student will need to research available sensors and select suitable components based on<br />
formally defined specifications. A system based on the sensors must be designed and<br />
implemented. The completed module must be subjected to detailed testing and analysis.<br />
The performance of the device with regards to aspects such as accuracy, distance, sensing<br />
latency, power consumption and efficiency must be characterized by means of formally<br />
designed test procedures. Required outcomes: low cost INS module, performance<br />
quantification data.<br />
9. Resources<br />
The designated work area for this project will be one of the computers in the Computer<br />
Engineering Project Lab. The necessary components for the sensor module must be sourced.<br />
1. Project number: A9<br />
2. Project title: Optical mouse sensor based dead-reckoning module<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: Unspecified<br />
7. Brief description of the project<br />
Optical mice have progressively replaced older mechanical mice, primarily due to their<br />
increased sensitivity and reliability. So-called “laser mice” have switched to using infra-red<br />
laser diodes to further increase sensitivity. One of the problems with these approaches is that<br />
the mice exhibit reduced functionality on smooth uniform surfaces such as glass. This<br />
problem has in turn been addressed by a relatively new type of mouse that uses a technique called<br />
“dark field microscopy”, which allows the mouse to be used on almost any surface. The<br />
purpose of this project is to develop an accurate localization system for mobile robots. The<br />
aim is to construct a dead-reckoning system based on optical mouse sensors which track<br />
movement of the robot across a surface.<br />
8. What will be expected of the student<br />
The student will need to research the principles of dead-reckoning, as well as optical mice<br />
technology. Based on this research the student must specify <strong>and</strong> procure suitable sensors. The<br />
student must design and implement the necessary interface and processing subsystem for the<br />
dead-reckoning module. The completed module must be subjected to detailed testing and<br />
analysis. The performance of the device with regards to aspects such as accuracy, distance,<br />
sensing latency, power consumption and efficiency must be characterized by means of<br />
formally designed qualification test procedures. Required outcomes: optical mouse sensor<br />
based dead-reckoning module, performance quantification data.<br />
9. Resources<br />
The designated work area for this project will be one of the computers in the Computer<br />
Engineering Project Lab. The necessary components for the sensor module must be sourced.<br />
1. Project number: A10<br />
2. Project title: FPGA based encryption for robotic applications<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: 65%<br />
7. Brief description of the project<br />
Robotic systems are often linked to a base station by means of a wireless link. For certain<br />
applications, this wireless link may represent a security risk. Encryption of the data transfer<br />
across such a link is therefore desirable. Power and processing capacity constraints place limits<br />
on the type of encryption possible. The aim and technical challenge of this project is to<br />
identify and implement the optimum encryption algorithm that can be realized in an<br />
FPGA.<br />
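As one example of the kind of candidate the survey will consider, the Tiny Encryption Algorithm (TEA) is frequently cited for compact hardware implementations, since each round uses only shifts, adds and XORs. Below is a behavioural C++ reference of the standard algorithm; whether TEA, a variant such as XTEA, or AES is actually optimal is precisely what the project's comparison must establish.<br />

```cpp
#include <cstdint>

// Tiny Encryption Algorithm (TEA): 64-bit block, 128-bit key, 32 rounds.
// Each round is only shifts, adds and XORs, which maps to very little
// FPGA logic -- shown here as a behavioural software reference.
inline void tea_encrypt(uint32_t v[2], const uint32_t k[4]) {
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9u;           // key schedule constant
    for (int i = 0; i < 32; ++i) {
        sum += delta;
        v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
    v[0] = v0; v[1] = v1;
}

inline void tea_decrypt(uint32_t v[2], const uint32_t k[4]) {
    uint32_t v0 = v[0], v1 = v[1];
    const uint32_t delta = 0x9E3779B9u;
    uint32_t sum = delta * 32;                    // run the rounds backwards
    for (int i = 0; i < 32; ++i) {
        v1 -= ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
        v0 -= ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        sum -= delta;
    }
    v[0] = v0; v[1] = v1;
}
```

A VHDL implementation would unroll or pipeline these rounds; the throughput/area trade-off is one of the quantities to be characterized.<br />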
8. What will be expected of the student<br />
The student will need to perform extensive research on encryption algorithms, particularly<br />
those suitable for hardware implementation. Using a suitable FPGA development board, such<br />
as the Altera DE1 board, the student must implement the three most promising candidates.<br />
The three implementations must be subjected to detailed testing and analysis. The<br />
performance of the implementations with regards to aspects such as power consumption,<br />
resource utilization and maximum bandwidth must be characterized by means of formally<br />
designed test procedures. Required outcomes: VHDL implementation of encryption module,<br />
performance quantification data.<br />
9. Resources<br />
A PC with the full version of Altera Quartus II and Altera ModelSim installed will be<br />
required. The designated work area for this project will be one of the computers in the Computer<br />
Engineering Project Lab. Due to the time required for the synthesis and simulation of the<br />
complete design, the Department's simulation clusters are also available (with Linux<br />
versions of Quartus II and ModelSim installed). An Altera DE1 board will be made<br />
available.
1. Project number: A11<br />
2. Project title: FPGA based compression for robotic applications<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: 65%<br />
7. Brief description of the project<br />
Robotic systems are often linked to a base station by means of a wireless link. For certain<br />
applications a video stream needs to be transferred. In such cases, a compression algorithm<br />
implemented in an FPGA is desirable. Power and processing capacity constraints place limits<br />
on the type of compression possible. The aim and technical challenge of this project is to<br />
identify and prototype the optimum compression algorithm that can be implemented in an<br />
FPGA.<br />
8. What will be expected of the student<br />
The student will need to perform extensive research on compression algorithms, particularly<br />
those suitable for hardware implementation. Using a suitable FPGA development board, such<br />
as the Altera DE1 board, the student must implement the three most promising candidates.<br />
The three implementations must be subjected to detailed testing and analysis. The<br />
performance of the implementations with regards to aspects such as power consumption,<br />
resource utilization and maximum bandwidth must be characterized by means of formally<br />
designed qualification test procedures. Required outcomes: VHDL implementation of<br />
compression module, performance quantification data.<br />
9. Resources<br />
A PC with the full version of Altera Quartus II and Altera ModelSim installed will be<br />
required. The designated work area for this project will be one of the computers in the Computer<br />
Engineering Project Lab. Due to the time required for the synthesis and simulation of the<br />
complete design, the Department's simulation clusters are also available (with Linux<br />
versions of Quartus II and ModelSim installed). An Altera DE1 board will be made<br />
available.
1. Project number: A12<br />
2. Project title: Integration of autonomous vehicle<br />
3. Study leader: Mr. H Grobler<br />
4. Research group: Advanced Computing and Embedded Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: Computer Engineering or Electronic Engineering<br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
The US Government sponsored DARPA Grand Challenge (http://www.darpa.org) aims to<br />
stimulate the development of mobile intelligent autonomous systems. In 2008 the South<br />
African branch of National Instruments initiated a similar competition, albeit on a smaller<br />
scale. During 2008 and 2009 final year projects entailed the development of components of<br />
two robotic platforms to meet the requirements of the 2008 and 2009 competitions. This year<br />
a new competition will be held, with more open specifications. This project will refine the<br />
previously developed components and integrate them into an autonomous vehicle according<br />
to the rules of the new competition.<br />
8. What will be expected of the student<br />
The student will be required to perform a detailed analysis of the subsystems that have been<br />
previously developed. Similarly, the specification of the new competition must be analyzed.<br />
Based on this analysis the student must identify the subsystem refinements required <strong>and</strong><br />
additional modifications for integration. In particular, the navigation <strong>and</strong> path planning system<br />
must be ported from a Matlab based system to a suitable embedded processor. Required<br />
outcomes: A fully functional autonomous vehicle verified to achieve the goals of the<br />
competition.<br />
9. Resources<br />
To be determined as part of the project. The designated work area for this project is the CAEC<br />
lab.
1. Project number: A13<br />
2. Project title: Stereo vision system for robotic vehicles<br />
3. Study leader: Mr. H Grobler<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
The US Government sponsored DARPA Gr<strong>and</strong> Challenge (http://www.darpa.org) aims to<br />
stimulate the development of mobile intelligent autonomous systems. In 2008 the South<br />
African branch of National Instruments initiated a similar competition, albeit on a smaller<br />
scale. During 2008 <strong>and</strong> 2009 final year projects entailed the development of components of<br />
two robotic platforms to meet the requirements of the 2008 <strong>and</strong> 2009 competitions. This year<br />
a new competition will be held, with more open specifications. During 2009 a Matlab based<br />
stereo vision system was developed for the vehicle. This project will refine the previously<br />
developed implementation <strong>and</strong> create a low-power stereo vision processing system.<br />
8. What will be expected of the student<br />
The student will be required to perform a detailed analysis of the subsystems that have been<br />
previously developed. Based on this analysis the student must identify a suitable platform for<br />
the implementation of the stereo vision algorithms (for example FPGA, DSP, etc.). Required<br />
outcomes: A stereo vision module which produces a distance map as output.<br />
9. Resources<br />
To be determined as part of the project. The designated work area for this project is the CAEC<br />
lab.
1. Project number: T14<br />
2. Project title: MCQ assessment system<br />
3. Study leader: Mr. H Grobler<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Design<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong><br />
6.2 Required running average: Unspecified<br />
7. Brief description of the project<br />
Growth in student numbers has increased the workload of lecturers with respect to assessments.<br />
Multiple Choice Questions (MCQ) can be used to reduce the time spent on marking. The aim<br />
of this project is to develop a computer based MCQ assessment system that can be used in<br />
two modes of operation: a) online web-based assessments <strong>and</strong> b) offline paper based<br />
assessments. For the online assessments, the system must allow predefined MCQ assessments<br />
to be performed in a computer laboratory context. For the offline assessments, the system<br />
must generate the appropriate MCQ answer sheets. Once the answer sheets have been<br />
completed, the sheets are to be scanned to a multi-page Portable Document Format (PDF) using a<br />
digital multifunction copier/scanner/printer (such as the Ricoh Aficio MP 6500). The system<br />
must accept such a PDF via the Simple Mail Transfer Protocol (SMTP) <strong>and</strong> perform Optical<br />
Mark Recognition (OMR) to extract the answers. A SQL database system should be used for<br />
the storage of marks <strong>and</strong> other data. For both assessment approaches, the system must<br />
perform the necessary mark evaluation, statistical analysis <strong>and</strong> report generation. The system<br />
must support all required question weight assignment profiles <strong>and</strong> must allow the extracted<br />
answers to be exported in various formats (for example: raw <strong>and</strong> CSV). The primary<br />
challenge of this project lies in the development of a robust (zero error) OMR subsystem that<br />
can h<strong>and</strong>le both st<strong>and</strong>ard <strong>and</strong> generated OMR sheets. The secondary challenge is the<br />
integration of all the subsystems into a highly reliable system usable by non-technical users.<br />
8. What will be expected of the student<br />
The student must perform a literature study <strong>and</strong> research into existing implementations of<br />
OMR systems. In order to develop the specified system from first principles, the student will<br />
need to research the digital image processing <strong>and</strong> pattern recognition techniques potentially suitable for<br />
OMR. The student must create a prototype OMR implementation <strong>and</strong> use TIFF based scans<br />
to test the functionality. Subsequently the student must incorporate a PDF based page<br />
extraction mechanism in order to h<strong>and</strong>le multi-page PDF documents. The OMR <strong>and</strong> PDF<br />
subsystems must be integrated with an SMTP mail transfer agent (MTA) subsystem to create a<br />
st<strong>and</strong>-alone server daemon. The system must store received PDFs in a properly designed<br />
hierarchical filesystem structure. The document meta-data <strong>and</strong> processed results must be<br />
stored in a SQL database. A WWW based front-end must be created (in PHP or Ruby) for the<br />
system which allows non-technical users to operate the system.<br />
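The core OMR decision described above can be sketched as follows. This is an illustrative prototype only, not the project's specified design: it assumes the scanned sheet has already been binarised and aligned to a known grid of bubble cells, represented here as a nested list of 0/1 pixels, and it flags blank or multiply-marked questions for human review (the "zero error" requirement). All names, the grid layout and the threshold are assumptions for the example.

```python
def fill_ratio(image, top, left, size):
    """Fraction of dark (1) pixels in a size x size cell of a binary image."""
    dark = sum(image[top + r][left + c]
               for r in range(size) for c in range(size))
    return dark / (size * size)

def read_answers(image, rows, choices, size, threshold=0.5):
    """Return, per question row, the marked choice index, None, or 'multi'."""
    answers = []
    for q in range(rows):
        marked = [a for a in range(choices)
                  if fill_ratio(image, q * size, a * size, size) > threshold]
        if len(marked) == 1:
            answers.append(marked[0])
        elif len(marked) == 0:
            answers.append(None)          # blank: flag for human review
        else:
            answers.append('multi')       # multiple marks: flag as invalid
    return answers

# Tiny 2-question, 2-choice sheet with 2x2-pixel bubbles:
sheet = [
    [1, 1, 0, 0],   # question 0: choice 0 fully filled
    [1, 1, 0, 0],
    [0, 0, 1, 1],   # question 1: choice 1 fully filled
    [0, 0, 1, 1],
]
print(read_answers(sheet, rows=2, choices=2, size=2))  # [0, 1]
```

A real implementation would first locate registration marks and deskew the scan; the thresholding-per-cell decision, however, remains the heart of the subsystem.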
9. Resources<br />
To complete this project the student will need a Linux based computer. The dual-boot<br />
computers in the <strong>Computer</strong> <strong>Engineering</strong> Project Labs should be used. The compilers (GCC),<br />
libraries, database systems (PostgreSQL), WWW servers <strong>and</strong> script engines (PHP/Ruby)<br />
which come st<strong>and</strong>ard with Debian / Ubuntu are sufficient for this project. Sample PDF<br />
documents can be generated with minimal effort using the department's multifunctional<br />
copier/scanner/printer.
1. Project number: T15<br />
2. Project title: De-weathering / de-noising of video sequences<br />
3. Study leader: Mr. H Grobler<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 75%<br />
7. Brief description of the project<br />
Surveillance videos are often distorted by environmental effects such as rain <strong>and</strong> varying light<br />
conditions. Recently research has produced algorithms <strong>and</strong> techniques to compensate for<br />
specific types of (environmental) noise in images. These include approaches based on<br />
physical models <strong>and</strong> purely signal processing based methods. For this project the signal<br />
processing based methods must be researched <strong>and</strong> implemented to create a dynamic de-weathering<br />
/ de-noising video sequence pre-processor. Amongst others, Discrete Fourier<br />
Transform (DFT) <strong>and</strong> Discrete Wavelet Transform (DWT) based methods must be researched<br />
<strong>and</strong> implemented. Recently the Discrete Curvelet Transform (DCuT) has shown considerable<br />
promise with regards to de-weathering / de-noising <strong>and</strong> the DCuT must therefore also be<br />
researched <strong>and</strong> implemented.<br />
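The principle shared by the DWT (and, by extension, DCuT) methods above can be sketched in miniature: transform the signal, shrink small coefficients toward zero on the assumption that they represent noise, then transform back. A single-level 1-D Haar wavelet stands in here for the 2-D transforms applied to real video frames; the threshold value is illustrative, not a recommendation.

```python
import math

def haar_forward(x):
    """Single-level 1-D Haar transform: pairwise averages and differences."""
    s = math.sqrt(2.0)
    approx = [(x[2*i] + x[2*i+1]) / s for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / s, (a - d) / s])
    return x

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t (noise suppression)."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t=0.5):
    approx, detail = haar_forward(signal)
    return haar_inverse(approx, soft_threshold(detail, t))

noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]   # step signal plus jitter
clean = denoise(noisy)
print([round(v, 2) for v in clean])
```

The small pairwise differences (jitter) fall below the threshold and are removed, while the large step between the two halves of the signal survives; multi-level 2-D versions of this idea underlie the transform-domain de-noising literature the student must survey.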
8. What will be expected of the student<br />
The student is required to design a dynamic de-weathering / de-noising video sequence pre-processor.<br />
Strong Mathematical, Signal Processing <strong>and</strong> C++ programming skills will be<br />
required. The student will be expected to follow a formal engineering procedure<br />
(requirements analysis, specification, architecture, design, implementation, testing <strong>and</strong><br />
configuration management). Required outcomes: A fully functional system with associated<br />
technical <strong>and</strong> user documentation.<br />
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation. The Qt4<br />
library will be used for GUI aspects. The designated work area for this project will be one of the<br />
Linux computers in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab.
1. Project number: T16<br />
2. Project title: Particle flow analysis in micro-fluidic channels<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MSM)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
The Mechatronics <strong>and</strong> Micro-Manufacturing group at the CSIR is involved in research<br />
towards the application of Micro-fluidics in manufacturing <strong>and</strong> medicine. This is achieved by<br />
exploiting the unique physical properties of fluids <strong>and</strong> particles at the micro scale. Micro<br />
channels are cut within a polymer substrate <strong>and</strong> fluid <strong>and</strong> particles are allowed to flow<br />
through. As part of the research it is important to measure various parameters of the flow.<br />
Currently, high-speed videos of the channels are captured <strong>and</strong> manually analyzed offline. It is<br />
desirable to automate the analysis. Therefore the aim of the project is to construct a software<br />
system that will read the video data <strong>and</strong> use image processing <strong>and</strong> computer vision techniques<br />
to make various quantitative measurements about the fluid flow <strong>and</strong> the particles. Some<br />
examples of the kind of measurements are: Measure the flow rate of the fluid by identifying<br />
<strong>and</strong> tracking known particles that are injected into the fluid. Classify/segment amongst the<br />
different particles in the channel. Measure the velocity, trajectory, size, identity <strong>and</strong> path of<br />
all particles <strong>and</strong> annotate the video feed with this information as per user requirement.<br />
8. What will be expected of the student<br />
The desired output is a software implementation of all the necessary algorithms to perform the<br />
measurements <strong>and</strong> a user friendly interface that allows access to their functionality. The<br />
system must allow the user to view <strong>and</strong> extract the information about the particle <strong>and</strong>/or fluid<br />
flow as required. Detailed user requirements should be obtained from the Mechatronics <strong>and</strong><br />
Micro-Manufacturing research group.<br />
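One of the measurements named above, estimating flow rate by tracking particle centroids between consecutive frames, can be sketched with greedy nearest-neighbour matching. Real inputs would be centroids detected in the high-speed video; the coordinates, frame rate and pixel scale below are made up for the example, and the function names are illustrative.

```python
import math

def match_particles(prev, curr, max_dist):
    """Greedily match each previous-frame centroid to its nearest
    unclaimed current-frame centroid within max_dist pixels."""
    matches, used = [], set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best, best_d))
            used.add(best)
    return matches

def mean_speed(prev, curr, fps, px_per_um, max_dist=20.0):
    """Mean particle speed in micrometres per second."""
    matches = match_particles(prev, curr, max_dist)
    if not matches:
        return 0.0
    mean_px = sum(d for _, _, d in matches) / len(matches)
    return mean_px / px_per_um * fps

frame_a = [(10.0, 5.0), (40.0, 5.0)]
frame_b = [(14.0, 5.0), (44.0, 5.0)]      # both particles moved 4 px right
print(mean_speed(frame_a, frame_b, fps=1000, px_per_um=2.0))  # 2000.0
```

Production code would use a proper assignment algorithm (e.g. Hungarian matching) and a motion model to handle crossing particles, but the displacement-per-frame-time calculation is the same.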
9. Resources<br />
The designated work area for this project will be one of the computers in the <strong>Computer</strong><br />
<strong>Engineering</strong> Project Lab. The Open Source OpenCV image processing toolkit will be used as<br />
foundation <strong>and</strong> the Qt4 library used for GUI aspects. High-speed video data will be supplied<br />
by the Mechatronics <strong>and</strong> Micro-Manufacturing research group.
1. Project number: T17<br />
2. Project title: Road segmentation <strong>and</strong> structure analysis for an<br />
autonomous rover<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MIAS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
Autonomous unmanned ground vehicles (AUGVs, also known as smart cars) are autonomous<br />
robots that operate on ground surfaces. To safely navigate under normal traffic conditions, it<br />
is necessary for such vehicles to build a dense map of the road surface. Using a video feed<br />
provided by cameras on board the vehicle, the system should identify the road surface present<br />
in the video feed <strong>and</strong> augment its three-dimensional model of the road surface. The three-dimensional<br />
model will be used for navigating the vehicle. In addition, important properties<br />
of the road surface are to be identified, such as traffic lanes, the road shoulder, curbs, the<br />
presence of humps, potholes, etc. Currently only tar road surfaces are considered.<br />
8. What will be expected of the student<br />
The student is required to design <strong>and</strong> implement a software system that uses advanced image<br />
processing <strong>and</strong> computer vision techniques to create a three-dimensional model of the road<br />
surface across the entire length of road that the vehicle traverses, <strong>and</strong> to augment the model<br />
with important road surface information such as traffic lanes, curbs, humps, potholes, etc.<br />
Furthermore the three-dimensional model with augmented data is to be visualized as part of<br />
the software. Strong C++ programming skills will be required. The student will be expected<br />
to follow a formal engineering procedure (requirements analysis, specification, architecture,<br />
design, implementation, testing <strong>and</strong> configuration management). Required outcomes: A fully<br />
functional system (Open Source, ANSI C++) with associated technical <strong>and</strong> user<br />
documentation.<br />
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation. The Qt4<br />
library will be used for GUI aspects. Data will be supplied by the Mobile Intelligent<br />
Autonomous Systems research group in the CSIR. The designated work area for this project<br />
will be one of the Linux computers in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab. For specialised<br />
experiments the facilities at the <strong>Computer</strong> Vision Laboratory (CSIR) will be available.
1. Project number: T18<br />
2. Project title: Large scale object recognition<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MIAS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
With the phenomenal growth of the Internet over the last decade, it is estimated that there<br />
are billions of images available on the Internet. These images cover just about any<br />
conceivable topic. Recent studies indicate that by incorporating such vast amounts of<br />
information in an object recognition system, the results can be vastly improved. In this<br />
project, the student is required to develop a system that can query for <strong>and</strong> download a large<br />
number of labeled (context based) images from the web (using search engines such as Google or<br />
Yahoo, <strong>and</strong> sharing sites such as MySpace, Facebook or Flickr). These images are to cover<br />
the complete spectrum of perceivable objects (for such purposes, a tool such as WordNet can<br />
be used). Once such a database is established, computer vision <strong>and</strong> machine learning<br />
techniques are to be applied (for example using SIFT features) to associate the relevant parts<br />
in the images with category labels (content based) <strong>and</strong> to reject labels where no strong<br />
correspondence between images can be found. Using the improved database, a system is to<br />
be designed that uses a st<strong>and</strong>ard object recognition technique to classify all objects present in<br />
an image presented to it.<br />
8. What will be expected of the student<br />
The student is required to design a software system that extracts a vast number of images<br />
from the Internet <strong>and</strong> uses image processing <strong>and</strong> computer vision techniques to make<br />
associations between the images. The system should use these associations to recognize<br />
objects when presented an image. A user interface that displays the image, <strong>and</strong> highlights<br />
recognized objects in the image should be developed. Strong C++ programming skills will be<br />
required. The student will be expected to follow a formal engineering procedure<br />
(requirements analysis, specification, architecture, design, implementation, testing <strong>and</strong><br />
configuration management). Required outcomes: A fully functional system (Open Source,<br />
ANSI C++) with associated technical <strong>and</strong> user documentation.<br />
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation. The Qt4<br />
library will be used for GUI aspects. Image data is to be acquired as part of the project. The<br />
designated work area for this project will be one of the Linux computers in the <strong>Computer</strong><br />
<strong>Engineering</strong> Project Lab. For specialized experiments the facilities at the <strong>Computer</strong> Vision<br />
Laboratory (CSIR) will be available.
1. Project number: T19<br />
2. Project title: 3D modelling <strong>and</strong> chroma keying studio<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MIAS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
In this project, the student is required to build a workbench that can be used to create 3D<br />
models of small objects. 3D models are often used in computer graphics in applications such<br />
as movies <strong>and</strong> games. They are also useful in the domain of robotics, giving machines the ability<br />
to recognise objects <strong>and</strong> plan tasks based on their pose.<br />
8. What will be expected of the student<br />
In the first part of the project, a small workbench for 3D modeling needs to be set up. The<br />
workbench should allow an object to be placed in such a way that it is positioned against a<br />
uniform background. The system should allow images of the object to be taken from multiple<br />
viewpoints, by rotating the object on a small turntable <strong>and</strong> taking images from a stationary<br />
mounted camera. Adequate lighting should be provided as part of the system. In the second<br />
part of the project, algorithms for chroma keying (greenscreening) need to be developed.<br />
These algorithms should make it possible to segment an object from a uniform background<br />
<strong>and</strong> then to insert the segmented object on another background. In the third part of the project,<br />
algorithms need to be developed to extract feature points from objects across multiple views,<br />
<strong>and</strong> to use that information to construct 3D models of the objects. The 3D models need to be<br />
validated against ground truth (there is a Riegl Laser Scanner at the CSIR that could be used<br />
for this purpose). A GUI needs to be developed to visualise the 3D objects (supporting<br />
different modes, such as point-clouds, wire-frames <strong>and</strong> textured models) <strong>and</strong> to project them<br />
onto a different background.<br />
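The chroma keying step in the second part of the project can be sketched as a per-pixel colour-distance test: a pixel is classified as background when its colour lies within a tolerance of the known uniform backdrop colour, and foreground pixels are composited onto a new background. The RGB-tuple image representation and the tolerance value are assumptions for this illustration; a real implementation would typically work in a chroma-separated colour space and soften the matte edges.

```python
def chroma_key(image, backdrop, new_bg, tol=60):
    """Replace pixels close (Euclidean RGB distance) to the backdrop
    colour with the new background colour."""
    def close(px):
        return sum((a - b) ** 2 for a, b in zip(px, backdrop)) ** 0.5 < tol
    return [[new_bg if close(px) else px for px in row] for row in image]

GREEN = (0, 255, 0)    # uniform backdrop colour
RED   = (200, 30, 30)  # object pixel
BLUE  = (0, 0, 255)    # replacement background

frame = [[GREEN, RED],
         [GREEN, GREEN]]
out = chroma_key(frame, backdrop=GREEN, new_bg=BLUE)
print(out)
```

The same `close()` mask, applied without substitution, yields the binary segmentation needed for the feature-extraction and 3D-modelling stages.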
9. Resources<br />
A set of small objects will be provided to demonstrate the system, ranging from simple to<br />
complex geometric shapes. A web camera could be made available upon request. The CSIR’s<br />
Riegl laser scanner will be made available to acquire ground truth measurements. A turntable<br />
will be required <strong>and</strong> should be sourced as part of an old LP player or similar equipment.<br />
Additional funding will be available if necessary.
1. Project number: T20<br />
2. Project title: Real-time object recognition using a GPU<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MIAS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
The availability of relatively cheap Graphical Processing Units (GPUs), having the ability to<br />
parallelise execution of certain algorithms, has made teraflop computing accessible to a<br />
larger audience. The availability of such computational power has made many new<br />
applications possible. One such example is in the field of computer vision, where large<br />
amounts of visual data have to be processed. In this project, the objective is to develop a<br />
system that can recognise objects in real-time, by implementing common feature extraction<br />
<strong>and</strong> description algorithms on a GPU. Such algorithms include Scale Invariant Feature<br />
Transform (SIFT), Speeded Up Robust Features (SURF), Gradient Location <strong>and</strong> Orientation<br />
Histogram (GLOH), etc. The primary focus <strong>and</strong> technical challenge of this project is the<br />
development of a real-time GPU based object recognition system.<br />
8. What will be expected of the student<br />
In the first part of the project, algorithms for feature point extraction, feature description <strong>and</strong><br />
feature matching will be implemented on the GPU, using a framework such as CUDA. A<br />
variety of popular algorithms need to be implemented. In the second part of the project, a<br />
database of features of common objects needs to be created. For this purpose, popular image<br />
datasets such as PASCAL VOC, MSRC <strong>and</strong> Caltech should be used. A method needs to be<br />
developed to represent such a database on a GPU, such that it could easily be referenced <strong>and</strong><br />
match scores calculated. In the third part of the project, a GUI needs to be developed. The<br />
GUI should allow a user to select an image from disk, <strong>and</strong> then determine the objects present<br />
in the image by extracting features from it <strong>and</strong> matching them to the database. The image <strong>and</strong><br />
associated objects should be displayed in the GUI. The recognition capability of the system<br />
should be characterized. C++ programming skills will be required. Required outcomes:<br />
properly designed / implemented <strong>and</strong> fully functional real-time object recognition system,<br />
detailed experimental analysis of various recognition algorithms implemented.<br />
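The descriptor-matching stage of a SIFT/SURF-style recogniser can be sketched as follows: each query descriptor is matched to its nearest database descriptor, and the match is accepted only when the nearest neighbour is clearly closer than the second nearest (Lowe's ratio test). The short vectors below stand in for real 128-dimensional SIFT descriptors, and on a GPU the distance computations would run in parallel across descriptors; all names and values here are illustrative.

```python
import math

def ratio_match(query, database, ratio=0.8):
    """Return (query_idx, db_idx) pairs passing the nearest-neighbour
    ratio test: nearest distance < ratio * second-nearest distance."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((math.dist(q, d), di) for di, d in enumerate(database))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

db = [(0.0, 0.0), (10.0, 10.0), (20.0, 0.0)]
queries = [(0.5, 0.0),    # unambiguously closest to db[0]: accepted
           (5.0, 5.0)]    # equidistant from db[0] and db[1]: rejected
print(ratio_match(queries, db))  # [(0, 0)]
```

The ratio test is what makes the recogniser robust: ambiguous descriptors, which would otherwise produce spurious object votes, are simply discarded.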
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. A PC with NVidia GTX 295 will be available for this project. The<br />
required software (such as CUDA) can be downloaded from the Internet, but will also be<br />
made available on the department's FTP site. Image datasets such as PASCAL VOC, MSRC<br />
<strong>and</strong> Caltech will be made available.
1. Project number: T21<br />
2. Project title: Real-time video encoding/decoding using natural<br />
image statistics<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MIAS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
Video is becoming a very important resource in online media <strong>and</strong> Internet culture. On online<br />
sharing sites such as YouTube, it is estimated that there are hundreds of millions of videos,<br />
with hundreds of thous<strong>and</strong>s being added daily. Given the large amount of visual data <strong>and</strong><br />
limited b<strong>and</strong>width, efficient encoding <strong>and</strong> decoding of such videos is required.<br />
Classically, encoding / decoding algorithms use only the content of the video itself.<br />
However, due to the large amount of visual data that is currently available, it may be possible<br />
to construct a database containing natural image statistics that is shared <strong>and</strong> referenced<br />
between the encoder <strong>and</strong> decoder. By creating indices into the shared data, the video could be<br />
encoded <strong>and</strong> compressed efficiently using only such indices. The technical challenge of this<br />
project is to create a clever indexing structure that could be used to encode / decode video in<br />
real-time with minimal loss in visual appearance <strong>and</strong> maximum compression.<br />
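The shared-database idea above is, in essence, vector quantisation: both encoder and decoder hold the same codebook of common image patches, the encoder transmits only the index of the closest codebook patch for each block, and the decoder reconstructs an approximation of the frame from those indices. The tiny two-element "patches" below are illustrative stand-ins for real image patches, and the names are assumptions for the sketch.

```python
def nearest_index(patch, codebook):
    """Index of the codebook patch with minimum squared distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(patch, codebook[i])))

def encode(frame_patches, codebook):
    """Replace each patch with the index of its closest codebook entry."""
    return [nearest_index(p, codebook) for p in frame_patches]

def decode(indices, codebook):
    """Reconstruct approximate patches from the indices alone."""
    return [codebook[i] for i in indices]

codebook = [(0, 0), (128, 128), (255, 255)]       # shared by both ends
frame = [(10, 5), (250, 240), (120, 130)]
idx = encode(frame, codebook)
print(idx)                 # one small integer per patch crosses the channel
print(decode(idx, codebook))
```

The project's technical challenge, the "clever indexing structure", corresponds to replacing the linear scan in `nearest_index` with a search structure fast enough for real-time operation over a very large natural-image codebook.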
8. What will be expected of the student<br />
The student will be required to study the principles of image <strong>and</strong> video compression. In the<br />
first part of the project, a database of natural image statistics is to be constructed. Datasets<br />
available online, such as PASCAL, MSRC or Caltech could be used, or videos that are<br />
currently available on YouTube could be assembled. Statistics based on image patches,<br />
frequency components, etc. could be gathered from such data. Similar patterns could be<br />
clustered to reduce the size of the database. In the second part of the project, a distance<br />
measure needs to be developed to select a sample from the database that is closest in visual<br />
appearance to a target sample. This would require an analysis of how humans perceive visual<br />
information. In the third part of the project, an efficient encoder / decoder is to be developed,<br />
based on the database <strong>and</strong> distance measure. The encoder / decoder should be able to operate<br />
in real time. A GUI should be developed for viewing <strong>and</strong> managing the encoding/decoding of<br />
videos <strong>and</strong> images. Models need to be developed describing the “loss of visual information”<br />
in the encoded visual data. The student is required to design software to construct the<br />
database, distance measure <strong>and</strong> encoding / decoding scheme <strong>and</strong> a GUI to manage the<br />
process. C++ programming skills will be required. Required outcomes: properly designed /<br />
implemented <strong>and</strong> fully functional real-time video encoding/decoding system, detailed<br />
experimental analysis of system performance.<br />
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. The designated work area for this project will be one of the computers<br />
in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab. Datasets with visual information are to be acquired<br />
from the Internet. The datasets need to be representative of a large number of visual<br />
circumstances that can occur. A set of videos will be provided that are to be encoded/decoded.
1. Project number: T22<br />
2. Project title: Map-building of an office environment<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MIAS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
Accurate map-building is an important task for mobile robots. It enables robots to<br />
plan routes <strong>and</strong> move about in complex human environments, lessening the danger they pose to<br />
themselves or their environment. The objective is to build a map of an office environment, using only a<br />
stereo vision camera mounted on a small robot. The map should be geometric rather than<br />
topological in nature. The focus <strong>and</strong> technical challenge of this project is the development of<br />
a stereo vision system where accuracy is the primary consideration (at the cost of high memory<br />
usage <strong>and</strong> computational requirements).<br />
8. What will be expected of the student<br />
The student is required to develop algorithms to perform stereo vision to construct an accurate<br />
map of an office environment. In the first part of the project, algorithms need to be developed<br />
to rectify images, perform distortion correction to remove lens effects <strong>and</strong> perform feature<br />
matching across pairs of images (i.e. perform stereo vision). Using disparity information,<br />
distances to objects in the images should be calculated. In the second part of the project,<br />
information from pairs of images in an entire video sequence should be used to construct a<br />
map of the environment depicted in the video. In the third part of the project, a GUI should be<br />
developed to visualise the constructed map. The accuracy of the constructed map against<br />
ground truth should be calculated. The stereo datasets <strong>and</strong> ground truth must be captured<br />
according to formally designed procedures. C++ programming skills will be required.<br />
Required outcomes: properly designed / implemented <strong>and</strong> fully functional stereo vision<br />
system, detailed experimental analysis of system performance, high quality stereo vision<br />
datasets.<br />
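The disparity-to-distance step mentioned above follows from the standard rectified-stereo relation Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity in pixels. The sketch below applies it element-wise to a disparity map; the calibration values are made up for illustration, and a real system would take them from the camera calibration procedure.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (same unit as the baseline) from pixel disparity.
    Zero or negative disparity means the point is at infinity / invalid."""
    if disparity_px <= 0:
        return None
    return focal_px * baseline_m / disparity_px

def depth_map(disparities, focal_px, baseline_m):
    """Convert a 2-D disparity map to a 2-D depth map."""
    return [[depth_from_disparity(d, focal_px, baseline_m) for d in row]
            for row in disparities]

# Assumed calibration: 700-pixel focal length, 12 cm baseline.
disp = [[70.0, 35.0],
        [14.0,  0.0]]
dmap = depth_map(disp, focal_px=700.0, baseline_m=0.12)
print(dmap)
```

Note the inverse relationship: halving the disparity doubles the estimated depth, so disparity errors of a fixed size translate into rapidly growing depth errors for distant points. This is why map accuracy, the project's stated priority, hinges on sub-pixel feature matching.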
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. The designated work area for this project will be one of the computers<br />
in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab. A high resolution stereo vision camera (CSIR) will<br />
be used to capture videos of an office environment. The student will have to do the<br />
experimental setup to capture all relevant data, such as calibration information, etc.
1. Project number: T23<br />
2. Project title: Content based video/image retrieval system<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MIAS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
The availability of cheap digital cameras <strong>and</strong> the emergence <strong>and</strong> strong support of social<br />
networking websites have created a situation where billions of images <strong>and</strong> videos are<br />
available on the Internet, with millions being added daily. These images <strong>and</strong> videos cover<br />
just about any conceivable topic. Most current search engines do not index these images <strong>and</strong><br />
videos based on their content, but rather based on text that is found near these images / videos<br />
on web pages. Although such search engines generally perform well for simple queries such<br />
as “face”, “horse” or “boat”, they fail for more complex queries such as “red boat sailing into<br />
the sunset” or “lady in pink dress walking on the beach”. The aim of this project is to develop<br />
a search engine capable of indexing images / videos based on their content, rather than textual<br />
descriptions found on web pages. Analysis of image <strong>and</strong> video data is also vital in activities<br />
such as strategy determination, tactics development, disaster recovery or mission debriefing.<br />
The system should therefore also be able to search images <strong>and</strong> videos for a specific object.<br />
8. What will be expected of the student<br />
In the first part of the project an image <strong>and</strong> video web crawler should be developed. It should<br />
create an index with common image / video properties, such as URL, dimensions, etc. The<br />
crawler should be run to assemble a large collection of images <strong>and</strong> videos. The crawler may<br />
be structured to use additional information, such as the structure of Google Image Search<br />
queries, Facebook image naming conventions, etc. to help in the construction of such a<br />
dataset. In the second part of the project, algorithms need to be developed to analyse these<br />
images / videos based on their content. To limit the scope of the project, the analysis should<br />
be restricted to the assignment of class categories, <strong>and</strong> simple features such as the colour<br />
content. In the third part of the project, a web interface needs to be developed to illustrate the<br />
content-based search engine. The user should be able to enter a search term, upon which the<br />
user interface provides an overview of the images <strong>and</strong> videos in the database that match the<br />
search term based on content.<br />
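To make the “colour content” feature mentioned in the second part concrete, one simple approach is a coarse, normalised colour histogram per image, which images can then be compared by. The sketch below is illustrative only (the function names are hypothetical) and assumes 8-bit RGB pixels:

```cpp
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

// Coarse colour histogram: quantise each 8-bit channel into 4 levels,
// giving 4*4*4 = 64 bins; normalise by pixel count so that images of
// different sizes are comparable.
std::array<double, 64> colourHistogram(const std::vector<std::array<uint8_t, 3>>& pixels) {
    std::array<double, 64> hist{};
    for (const auto& p : pixels) {
        int r = p[0] / 64, g = p[1] / 64, b = p[2] / 64;  // each in 0..3
        hist[r * 16 + g * 4 + b] += 1.0;
    }
    if (!pixels.empty())
        for (double& h : hist) h /= pixels.size();
    return hist;
}

// L1 distance between two histograms: 0 means identical colour content.
double histogramDistance(const std::array<double, 64>& a, const std::array<double, 64>& b) {
    double d = 0.0;
    for (int i = 0; i < 64; ++i) d += std::fabs(a[i] - b[i]);
    return d;
}
```

A query such as “red boat” could then combine a class-category match with proximity of the query colour to the dominant histogram bins.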
9. Resources<br />
The designated work area for this project will be one of the computers in the <strong>Computer</strong><br />
<strong>Engineering</strong> Project Lab. The Open Source OpenCV image processing toolkit will be used as<br />
foundation. Web development packages such as WAMP or LAMP can be downloaded from<br />
the Internet (they are also available on the department's FTP server). Datasets with visual<br />
information are to be acquired from the Internet. The datasets need to be representative of a<br />
large number of visual circumstances that can occur.
1. Project number: T24<br />
2. Project title: High performance real-time stereo vision system<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MIAS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
The goal of computer vision is to infer information about the real world from images. One of<br />
the major applications in computer vision is three-dimensional reconstruction. This has been a<br />
widely researched area <strong>and</strong> a number of algorithms exist, such as stereopsis,<br />
structure from motion, volumetric graph cuts, etc. Most of these algorithms are computationally<br />
intensive <strong>and</strong> extremely slow. Recently, progress has been made in speeding up stereo vision<br />
algorithms. Stereo vision requires two images of the same scene, taken from viewpoints<br />
separated by a certain distance (the baseline). By calculating the positional difference between<br />
corresponding pixels in the two images, known as the disparity, one can recover depth. Using the<br />
disparity information, 3D reconstruction of the scene can be done. The primary focus <strong>and</strong><br />
technical challenge of this project is the development of a real-time stereo vision system<br />
based on disparity information. To maximise the frame rate at which the system operates, not<br />
only must improved algorithms be implemented, but techniques such as multi-core<br />
operation <strong>and</strong> GPU off-loading must also be explored.<br />
8. What will be expected of the student<br />
The student is required to design <strong>and</strong> implement a software system that uses image processing<br />
<strong>and</strong> computer vision techniques to produce a stereo vision system that can calculate the depth<br />
information <strong>and</strong> produce a visualisation of the scene in real-time. The student will therefore<br />
have to research the principles of stereo vision <strong>and</strong> in particular the methods relating to<br />
disparity. A reference implementation must be done <strong>and</strong> validated. Subsequently the student<br />
must study various performance profiling tools / techniques <strong>and</strong> use these to analyse the<br />
performance of the implementation. Based on the results, the student must investigate various<br />
techniques such as multi-core based parallel processing <strong>and</strong> GPU off-loading to maximise the<br />
frame rate achieved by the system. To achieve the latter, the student will be required to study<br />
multi-core programming approaches such as OpenMP <strong>and</strong> the Intel Threading Building<br />
Blocks (TBB), as well as GPU programming APIs such as CUDA. C++ programming skills<br />
will be required. Required outcomes: properly designed / implemented <strong>and</strong> fully functional<br />
real-time stereo vision system, detailed experimental analysis of various acceleration<br />
techniques implemented.<br />
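The multi-core direction mentioned above can be sketched with standard C++ threads even before moving to OpenMP or TBB: block-matching disparity computation is independent per scanline, so rows can be interleaved across hardware threads. In the sketch below, computeRow is a placeholder for the actual per-row matching and the interleaving scheme is an illustrative choice:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Split a row-wise disparity computation across hardware threads.
// computeRow stands in for the per-scanline matching (e.g. a SAD
// search); here it is any callable taking a row index. Rows are
// interleaved so that the threads receive similar workloads.
template <typename RowFn>
void parallelRows(std::size_t rows, RowFn computeRow) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t) {
        pool.emplace_back([=]() {
            for (std::size_t r = t; r < rows; r += n)
                computeRow(r);
        });
    }
    for (auto& th : pool) th.join();
}
```

Profiling should confirm whether row interleaving or contiguous row blocks give better cache behaviour before committing to either scheme.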
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. The designated work area for this project will be one of the computers<br />
in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab. For GPU related work, a PC with a NVidia GTX<br />
295 will be available. Stereo image datasets will be provided (such as the Middlebury<br />
Stereo Datasets) <strong>and</strong> additional datasets should be captured at the <strong>Computer</strong> Vision<br />
Laboratory (CSIR).
1. Project number: T25<br />
2. Project title: Visual odometry for an autonomous vehicle<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MIAS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
Navigating in an unknown environment is a key task for an autonomous vehicle. Cameras<br />
mounted on the vehicle offer low cost, high information content sensors that are eminently<br />
suitable for human environments. The camera captures images which can be used to<br />
determine the visual odometry of the vehicle. The main goal of visual odometry is to recover<br />
the camera motion <strong>and</strong> the 3D structure of the world concurrently by exploiting the projective<br />
geometry relating multiple views. Structure from Motion (SfM) is one of the algorithms used<br />
to determine the visual odometry. SfM refers to the process of finding the three-dimensional<br />
structure by analysing the motion of an object over time. Different features such as corners,<br />
edges, etc. are tracked in sequential images to find the correspondences between them. The<br />
feature trajectories over time are then used to reconstruct their 3D positions as well as the<br />
camera motion. The primary focus <strong>and</strong> technical challenge of this project is the development<br />
of a real-time visual odometry system. To maximise the frame rate at which the system<br />
operates, various optimisations must be explored.<br />
8. What will be expected of the student<br />
The student is required to design <strong>and</strong> implement a software system that uses Structure<br />
from Motion to accurately determine the visual odometry of an autonomous vehicle as well as<br />
produce a 3D visualisation of the path travelled. The student will therefore be required to<br />
study the topic Structure from Motion, as well as various techniques from the field of digital<br />
image processing. A reference implementation must be done <strong>and</strong> validated. Subsequently the<br />
student must study various performance profiling tools / techniques <strong>and</strong> use these to analyse<br />
the performance of the implementation. Based on the results, the student must investigate<br />
various algorithmic <strong>and</strong> implementation optimisations to maximise the frame rate achieved by<br />
the system. C++ programming skills will be required. Required outcomes: properly designed<br />
/ implemented <strong>and</strong> fully functional real-time visual odometry system, detailed experimental<br />
analysis of various optimisations implemented.<br />
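As a minimal illustration of the motion-estimation step (a real SfM pipeline recovers full 3D rotation and translation), the inter-frame camera translation can be approximated from tracked feature correspondences as the least-squares, i.e. mean, displacement. The names and the pure-translation assumption below are for illustration only:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Point2 { double x, y; };

// Least-squares estimate of a pure 2D translation between two frames,
// given matched feature positions: the mean displacement vector.
// A full visual odometry pipeline estimates rotation and scale as
// well, and rejects outlier matches (e.g. with RANSAC).
Point2 estimateTranslation(const std::vector<Point2>& prev,
                           const std::vector<Point2>& curr) {
    Point2 t{0.0, 0.0};
    std::size_t n = std::min(prev.size(), curr.size());
    for (std::size_t i = 0; i < n; ++i) {
        t.x += curr[i].x - prev[i].x;
        t.y += curr[i].y - prev[i].y;
    }
    if (n > 0) { t.x /= n; t.y /= n; }
    return t;
}
```

Accumulating these per-frame estimates over a sequence gives a first, drift-prone trajectory against which the full SfM implementation can be compared.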
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. The designated work area for this project will be one of the computers<br />
in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab. The student may use facilities available at the<br />
<strong>Computer</strong> Vision Laboratory (CSIR) to capture suitable data.
1. Project number: T26<br />
2. Project title: 2D-3D pose estimation<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (MIAS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
Pose estimation of an object refers to the object’s position <strong>and</strong> orientation relative to some coordinate<br />
system. Determining the pose of an object is vital to many computer <strong>and</strong> robot vision<br />
tasks such as object grasping, manipulation <strong>and</strong> recognition or self-localization of mobile<br />
robots. The 2D-3D pose estimation problem requires the fitting of 2D sensor data (an image<br />
of an object) with a 3D object model. The aim is to estimate a rigid motion (containing both<br />
3D rotation <strong>and</strong> 3D translation) which minimises an error measure (which needs to be<br />
defined) between the image <strong>and</strong> object data. The primary focus <strong>and</strong> technical challenge of this<br />
project is the development of an accurate pose estimation system.<br />
8. What will be expected of the student<br />
The student is required to design <strong>and</strong> implement a software system that addresses the 2D-3D<br />
pose estimation problem <strong>and</strong> minimises the error measure. The student will therefore be<br />
required to study the topics of pose estimation, structure from motion <strong>and</strong> model based vision.<br />
A reference implementation must be done <strong>and</strong> validated. A detailed analysis of the<br />
performance of the system must be done. The results must be compared to published results.<br />
Potential improvements must be identified, implemented <strong>and</strong> analysed. C++ programming<br />
skills will be required. Required outcomes: properly designed / implemented <strong>and</strong> fully<br />
functional pose estimation system, detailed experimental analysis of the performance of the<br />
system as improvements are integrated.<br />
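A common choice for the error measure mentioned above is the mean squared reprojection error between projected 3D model points and the observed 2D image points. The sketch below assumes a simple pinhole model with the principal point at the origin; all names are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct P3 { double x, y, z; };
struct P2 { double x, y; };

// Pinhole projection of a 3D model point (in camera coordinates)
// with focal length f; principal point at the origin for simplicity.
P2 project(const P3& p, double f) { return {f * p.x / p.z, f * p.y / p.z}; }

// Mean squared reprojection error between projected model points and
// observed image points: the quantity a 2D-3D pose estimator
// minimises over the rigid motion applied to the model.
double reprojectionError(const std::vector<P3>& model,
                         const std::vector<P2>& observed, double f) {
    double e = 0.0;
    std::size_t n = std::min(model.size(), observed.size());
    for (std::size_t i = 0; i < n; ++i) {
        P2 q = project(model[i], f);
        double dx = q.x - observed[i].x, dy = q.y - observed[i].y;
        e += dx * dx + dy * dy;
    }
    return n ? e / n : 0.0;
}
```

The pose estimation system then searches for the rotation and translation of the model that drives this error to a minimum.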
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. The designated work area for this project will be one of the Linux<br />
computers in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab. The student may use facilities available<br />
at the <strong>Computer</strong> Vision Laboratory (CSIR) to capture suitable data.
1. Project number: T27<br />
2. Project title: GPU algorithmic implementation<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (OSS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 60%<br />
7. Brief description of the project<br />
Many algorithms have been ported to run on Graphics Processing Units (GPUs) to take<br />
advantage of their parallel processing performance benefits. The aim of this project is to create a<br />
GPU accelerated suite of image processing algorithms. It is expected that the student will<br />
implement the algorithms listed below both in CPU code <strong>and</strong> GPU code <strong>and</strong> then compare the<br />
processing speeds obtained on at least three different CPUs <strong>and</strong> four different GPUs. Note<br />
that two types of GPU programs exist: vertex shaders <strong>and</strong> fragment shaders. Both should be<br />
implemented unless sound justification is provided. The suite should<br />
include at least the following algorithms implemented from first principles:<br />
• Local Histogram: Many algorithms require knowledge of the distribution of<br />
intensities around a given pixel position.<br />
• Intensity Statistics: The average <strong>and</strong> st<strong>and</strong>ard deviation of the pixels in a specified<br />
neighbourhood are frequently required by high level algorithms.<br />
• FFT: Translating a picture to the frequency domain allows many types of processing<br />
(such as convolution filtering) to be performed more rapidly.<br />
• RGB-HLS: Converting from the traditional colour triplet to Hue-Light-Saturation<br />
representation is useful for many segmentation applications.<br />
• HLS-RGB: Similar to the above, being able to convert back to the traditional colour<br />
triplet is required.<br />
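As an indication of scope, the RGB-HLS conversion listed above can be implemented from first principles in a few lines of C++ (channels assumed normalised to [0,1]; a GPU version would apply the same arithmetic per fragment):

```cpp
#include <algorithm>
#include <cmath>

struct HLS { double h, l, s; };  // h in degrees [0,360), l and s in [0,1]

// RGB (each channel in [0,1]) to Hue-Lightness-Saturation,
// implemented from first principles as the project brief requires.
HLS rgbToHls(double r, double g, double b) {
    double mx = std::max({r, g, b}), mn = std::min({r, g, b});
    double c = mx - mn;                       // chroma
    double l = (mx + mn) / 2.0;               // lightness
    double s = (c == 0.0) ? 0.0 : c / (1.0 - std::fabs(2.0 * l - 1.0));
    double h = 0.0;
    if (c != 0.0) {
        if (mx == r)      h = std::fmod((g - b) / c + 6.0, 6.0);
        else if (mx == g) h = (b - r) / c + 2.0;
        else              h = (r - g) / c + 4.0;
        h *= 60.0;                            // sextant to degrees
    }
    return {h, l, s};
}
```

The inverse (HLS-RGB) follows the same piecewise structure, reconstructing chroma from lightness and saturation.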
8. What will be expected of the student<br />
The student will be required to study the various algorithms to be implemented. A C/C++<br />
implementation of each algorithm must be developed <strong>and</strong> thoroughly tested. Using a<br />
framework such as CUDA, the student must create GPU implementations of each algorithm.<br />
Using a formal experimental design, the performance of each implementation on various<br />
CPUs <strong>and</strong> GPUs must be characterized <strong>and</strong> analysed. C/C++ programming skills will be<br />
required. Required outcomes: GPU implementation of various image processing algorithms<br />
<strong>and</strong> reference implementation in C/C++.<br />
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. A PC with NVidia GTX 295 will be available for this project. The<br />
required software (such as CUDA) can be downloaded from the Internet, but will also be<br />
made available on the department's FTP site.
1. Project number: T28<br />
2. Project title: Heat shimmer modeling, characterization <strong>and</strong><br />
correction<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (OSS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 65%<br />
7. Brief description of the project<br />
In images obtained from a camera some distance away from observed objects, warm days can<br />
induce a mirage-like effect that distorts the images. Similar effects can be observed both<br />
over l<strong>and</strong> <strong>and</strong> over open water, for distances as short as a few hundred metres. One<br />
technique that has been used to produce clear images is super-resolution imaging. Super-resolution<br />
(SR) imaging entails the combination of images with slight offsets to produce<br />
images of higher resolution. The aim of this project is to create an image processing module<br />
that uses SR techniques to correct for heat shimmer effects.<br />
8. What will be expected of the student<br />
The student will be expected to study general image processing <strong>and</strong> specifically SR<br />
techniques. The student will be required to determine what differences, if any, exist between<br />
the distortion over l<strong>and</strong> <strong>and</strong> water. Using SR techniques, an algorithm which creates a more<br />
focused image in the presence of such noise must be developed. The performance of the<br />
implementation must be characterised on suitable datasets. Required outcomes:<br />
Implementation of a heat shimmer correction module.<br />
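The simplest multi-frame combination underlying SR-based shimmer correction is a per-pixel temporal mean over registered frames; full super-resolution additionally registers the frames at sub-pixel accuracy and reconstructs on a finer grid. A sketch, with frames represented as flat intensity arrays purely for illustration:

```cpp
#include <cstddef>
#include <vector>

// Per-pixel temporal mean over a stack of aligned frames: the
// simplest form of multi-frame combination, which suppresses the
// zero-mean displacement noise induced by heat shimmer. Full
// super-resolution would also reconstruct on a finer pixel grid.
std::vector<double> temporalMean(const std::vector<std::vector<double>>& frames) {
    if (frames.empty()) return {};
    std::vector<double> out(frames[0].size(), 0.0);
    for (const auto& f : frames)
        for (std::size_t i = 0; i < out.size() && i < f.size(); ++i)
            out[i] += f[i];
    for (double& v : out) v /= frames.size();
    return out;
}
```

Characterising how many frames are needed before the shimmer averages out, over land versus over water, would feed directly into the required experimental analysis.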
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. The designated work area for this project will be one of the computers<br />
in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab. Video datasets will be supplied.
1. Project number: T29<br />
2. Project title: Background subtraction in a foliage environment<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (OSS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 65%<br />
7. Brief description of the project<br />
The problem of 24-hour surveillance in an uncontrolled environment is pervasive.<br />
Due to the r<strong>and</strong>om movement of foliage in images captured in savanna environments,<br />
extracting the foreground is a very difficult problem. It is required that the student determine<br />
which of the current segmentation algorithms would be the most effective in a savanna<br />
environment where the dynamic environment consists primarily of low foliage moving due to<br />
the wind.<br />
8. What will be expected of the student<br />
The student will be expected to study general image processing <strong>and</strong> background subtraction<br />
techniques. As background subtraction is required in many digital image processing applications, a<br />
substantial amount of research exists. The student will be required to systematically explore<br />
the various techniques to determine which of the techniques produces best results for the<br />
problem of background subtraction in the presence of foliage. The student must implement<br />
the selected method as a comprehensive OpenCV extension. The performance of the<br />
implementation must be characterised on suitable datasets. Required outcomes: OpenCV<br />
implementation of background subtraction module.<br />
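One baseline the systematic comparison could start from is a running-average background model, whose learning rate controls how quickly wind-blown foliage is absorbed into the background. The sketch below operates on flat grey-scale frames with illustrative parameter values; the methods it would be compared against (e.g. mixture-of-Gaussians models) are considerably more robust to periodic foliage motion:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Running-average background model: per pixel, B <- (1-a)*B + a*I,
// with a pixel flagged as foreground when |I - B| exceeds a
// threshold. The learning rate a controls how quickly moving
// foliage is absorbed into the background estimate.
class BackgroundModel {
public:
    BackgroundModel(std::size_t pixels, double alpha, double threshold)
        : bg_(pixels, 0.0), alpha_(alpha), thr_(threshold), first_(true) {}

    // Returns a foreground mask (true = foreground) and updates the model.
    std::vector<bool> apply(const std::vector<double>& frame) {
        std::vector<bool> fg(bg_.size(), false);
        for (std::size_t i = 0; i < bg_.size() && i < frame.size(); ++i) {
            if (first_) {
                bg_[i] = frame[i];                // first frame initialises the model
            } else {
                fg[i] = std::fabs(frame[i] - bg_[i]) > thr_;
                bg_[i] = (1.0 - alpha_) * bg_[i] + alpha_ * frame[i];
            }
        }
        first_ = false;
        return fg;
    }

private:
    std::vector<double> bg_;
    double alpha_, thr_;
    bool first_;
};
```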
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. The designated work area for this project will be one of the computers<br />
in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab. Video datasets will be supplied <strong>and</strong> additional<br />
datasets can be captured.
1. Project number: T30<br />
2. Project title: Identification of stable regions in video sequences<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (OSS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 65%<br />
7. Brief description of the project<br />
It is just as important to know which parts of an image do not change as to know which parts<br />
do. Ideally these two sets are mutually exclusive yet constitute the entire image. In practice it<br />
might be more feasible to add a third category where the image might be moving. The stable<br />
parts of the image are useful both for image compression purposes as well as for stabilization<br />
of the input image. It is proposed that a student researches, develops/refines <strong>and</strong> tests/verifies<br />
a suitable algorithm to quickly determine which parts of a video sequence are likely to be<br />
stable for the next few frames. Once this has been done the motion of the camera over time,<br />
relative to the scene being viewed, is to be determined, not necessarily by making use of the<br />
identified stable regions.<br />
8. What will be expected of the student<br />
The student will be expected to study general image processing <strong>and</strong> scene analysis. The<br />
identification of stable regions in video sequences has been researched <strong>and</strong> techniques such as<br />
Efficient Maximally Stable Extremal Region (MSER) tracking have been developed. The<br />
student must study this <strong>and</strong> other techniques <strong>and</strong> determine which of the techniques produces<br />
best results. The student must implement the selected method as a comprehensive OpenCV<br />
extension. The performance of the implementation must be characterised on suitable<br />
datasets. Required outcomes: OpenCV implementation of stable region tracking module.<br />
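A simple starting point (well short of MSER tracking) is per-pixel frame differencing with two thresholds, yielding the three categories suggested in the brief: stable, possibly moving (uncertain) and moving. The threshold values and names below are illustrative and would be tuned on real sequences:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

enum class Region { Stable, Uncertain, Moving };

// Classify each pixel by its absolute inter-frame difference, using
// two thresholds to produce the three categories suggested in the
// brief: stable, possibly moving (uncertain) and moving.
std::vector<Region> classify(const std::vector<double>& prev,
                             const std::vector<double>& curr,
                             double low = 2.0, double high = 10.0) {
    std::vector<Region> out(curr.size(), Region::Stable);
    for (std::size_t i = 0; i < curr.size() && i < prev.size(); ++i) {
        double d = std::fabs(curr[i] - prev[i]);
        out[i] = d < low ? Region::Stable
                         : (d < high ? Region::Uncertain : Region::Moving);
    }
    return out;
}
```

Aggregating the per-pixel labels into connected regions would then give the stable regions whose persistence over the next few frames is to be predicted.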
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. The designated work area for this project will be one of the computers<br />
in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab. Video datasets will be supplied.
1. Project number: T31<br />
2. Project title: Distributed object identification <strong>and</strong> tracking<br />
3. Study leader: Mr. H Grobler, in collaboration with CSIR (OSS)<br />
4. <strong>Research</strong> group: Intelligent Systems<br />
5. Focus of project: Investigative<br />
6. Eligibility<br />
6.1 Intended degree programme: <strong>Computer</strong> <strong>Engineering</strong> or <strong>Electronic</strong> <strong>Engineering</strong><br />
6.2 Required running average: 65%<br />
7. Brief description of the project<br />
It is unlikely that a single camera would be able to cover the entire required field of view with<br />
sufficient resolution to detect all possible targets. Thus it is probable that multiple cameras<br />
will be used in most surveillance applications. It is required to be able to combine the outputs<br />
of these cameras into a distributed surveillance system. Objects in these multiple feeds need<br />
to be identified <strong>and</strong> tracked across the various feeds. The technical challenge of this project<br />
will be to represent the identified objects in a non-pixel based reference system that is<br />
synchronised across multiple video streams.<br />
8. What will be expected of the student<br />
The student will be expected to study general image processing <strong>and</strong> scene analysis. Object<br />
recognition <strong>and</strong> tracking have been widely researched. The student must study this research <strong>and</strong><br />
identify techniques that can be adapted to operation on multiple video streams. The student<br />
must implement the selected method as a comprehensive OpenCV extension. The<br />
performance of the implementation must be characterised on suitable datasets. Required<br />
outcomes: OpenCV implementation of distributed object identification <strong>and</strong> tracking module.<br />
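One way to obtain the non-pixel reference system described above is to map each camera's pixel coordinates onto a shared ground plane through a per-camera homography obtained by calibration; tracks from all feeds then live in one coordinate frame. A minimal sketch (in practice the homography entries come from calibration, not from the illustrative values used in testing):

```cpp
#include <array>

struct World { double x, y; };

// Map a pixel position into a shared ground-plane coordinate system
// via a 3x3 homography H (one such matrix per calibrated camera).
// With every camera mapped through its own H, object tracks from
// multiple feeds share one non-pixel reference frame.
World pixelToWorld(const std::array<std::array<double, 3>, 3>& H,
                   double u, double v) {
    double x = H[0][0] * u + H[0][1] * v + H[0][2];
    double y = H[1][0] * u + H[1][1] * v + H[1][2];
    double w = H[2][0] * u + H[2][1] * v + H[2][2];
    return {x / w, y / w};  // homogeneous normalisation
}
```

Associating detections across feeds then reduces to matching nearby ground-plane positions at synchronised timestamps, rather than comparing pixels.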
9. Resources<br />
The Open Source OpenCV image processing toolkit will be used as foundation <strong>and</strong> the Qt4<br />
library for GUI aspects. The designated work area for this project will be one of the computers<br />
in the <strong>Computer</strong> <strong>Engineering</strong> Project Lab. Video datasets will need to be captured using, for<br />
example, web cameras.