Remote Monitoring Debriefing System (RMDS) - Military Embedded ...
Technology: Imaging – Seeing is believing

Remote Monitoring Debriefing System (RMDS) conceived and developed by the Israel Aircraft Industry (IAI)

By Yehuda Singer, PhD

The Remote Monitoring Debriefing System (RMDS) is intended for the first solo flights of a pilot trainee. It enables the trainer to monitor the flight in real time and to play back the flight later to provide detailed feedback and instructions. The system performs: 1) flight planning; 2) real-time flight monitoring; and 3) post-flight processing and comparison of the intended flight plan to the flight actually performed. Yehuda focuses on real-time flight monitoring and shows the system analysis process that led IAI and Beyond2000 to choose a dual-core DSP architecture based on Analog Devices' BF561.

A pilot trainee performing his or her first solo flights could commit errors that lead to serious accidents. A simple solution is to put a video camera in the cockpit and transmit real-time video to the trainer monitoring the flight from the ground. However, transmitting digital video at a rate of 15 frames per second requires a bandwidth of 160 Mbps, and the cost of such a channel is prohibitive. The question is how to avoid transmitting video and still monitor the state of the aircraft in flight. The solution is to perform image processing on the input video and transmit only numerical results to the ground. Transmitting numerical results requires a bandwidth of only 9,600 bits per second, which is easily and economically carried over an RF radio modem (see Figure 1).

Figure 2 shows the system operation during flight. The aircraft is equipped with cameras connected to the cockpit video processor, a radio modem, and a GPS. The cockpit video processor performs image processing to analyze the state of the meters. The numerical results of the image processing and the GPS data are sent via the radio modem to the Ground Station (GS),
which restores the view of the panel and the 3D map as closely as possible to the real view seen in the aircraft. The platform of the GS is based on FSX, the Microsoft Flight Simulator.

To achieve a ground picture that is as close as possible to the real video, we need to perform image processing at a rate of 15 frames per second on each camera.

System analysis

At the system analysis stage of the project, the computing requirements are:

1. CPU load of no more than 30 percent
2. Memory and bus bandwidth utilization of no more than 40 percent

The assumptions for estimating the usage of these resources are:

1. The image processing is performed on a set of Regions Of Interest (ROIs)
2. Each ROI is 100 x 100 pixels = 10^4 pixels
3. There are 20 ROIs
4. 15 frames per second
5. A memory access time of 10 ns, corresponding to a miss in the cache memory
6. A CPU faster than the memory is delayed by the memory whenever it accesses it

Therefore, to estimate the CPU load, we have to count the number of memory accesses in our algorithm. Since the algorithm does not access the memory sequentially, we take the larger (cache-miss) value of the memory access time. Table 1 shows the CPU load, which is much higher than desired: more than 54 percent, compared with the required value of 30 percent. Table 2 shows the required memory bandwidth; it is a little more than 40 percent.

Table 1:

Function         Memory accesses       CPU load in 1 second
Threshold        2 x 15 x 20 x 10^4    0.06
Contrast         2 x 15 x 20 x 10^4    0.06
Vibration        10 x 15 x 20 x 10^4   0.30
Identification   4 x 15 x 20 x 10^4    0.12
Total            18 x 15 x 20 x 10^4   0.54

In the next section we shall introduce other considerations in choosing the computation platform for our project.

Choosing a platform

Software development in such a project is on the critical path; hence, the requirement was to find an off-the-shelf evaluation board with two processors and start the software development without waiting for hardware development.
At the end of 2004, we chose the Analog Devices BF561 dual-core as the platform for our project. By choosing the BF561 (see Figure 3), we removed the bottleneck of CPU time. Memory bandwidth is supported by:

1. A separate cache for instructions
2. A separate cache for data
3. A fast DMA to capture streaming video in and out

(In addition, the BF561 has standard I/O resources such as RS-232, SPI, and parallel I/O.)

Table 2:

                                       MBps
Video in (2 x 15 frames, 525 x 858)    25.76
Video out (1)                          12.88
Internal memory transfer                2.86
Total                                  41.5

(1) Video out is needed for debugging purposes.

Figure 3: BF561 block diagram - two cores, each with L1 instruction and data memories and an MMU; 128 KB of L2 SRAM; an IMDMA controller and two DMA controllers; and peripherals including UART/IrDA, SPI, SPORT0/SPORT1, PPI0/PPI1, GPIO, timers, IRQ control/watchdog timers, JTAG test emulation, a voltage regulator, boot ROM, and an external port with flash/SDRAM control.
Analog Devices' evaluation board, the ADSP-BF561 EZ-KIT Lite, (2) enabled us to start software development immediately. I/O not included on the evaluation board was implemented on an extension board connected to the evaluation board via backward connectors.

The software tools associated with the BF561 permit one PC to serve as a development platform controlling the two DSPs (see Figure 4). Of course, the software development process for a multiprocessor becomes simpler when the development system is controlled by one PC. In addition, software development of a real-time image processing application calls for supporting utilities in the Integrated Development Environment (IDE). It is necessary to display the input image as sampled by the video decoder in the DSP. Figure 5 shows an image captured by the video decoder and stored in the DSP's memory. The image is displayed by the image viewer, which is part of the IDE. This feature helped in integrating progressive scan video cameras. Progressive video is essential in image processing applications in a vibrating environment. The camera produces a TV standard of 525 lines and 858 columns; the image produced by the camera is 640 x 492 pixels per frame. The image viewer helped in finding the input video's real size, which is 525 x 858.

Software design to enhance performance

The two cores interact via a shared memory. The shared memory is an external DRAM controlled by the external port flash/SDRAM controller (see again Figure 3). The utilization of the two cores depends on the functional decomposition of the project. In an ideal situation, when the computation task can be decomposed into independent subtasks, the processors are fully utilized. On the other hand, if the functional decomposition yields interdependent subtasks, much time is spent on synchronizing the processors in accessing the data in the shared memory.
The design goal is to minimize these interdependencies; hence, the processors become loosely coupled.

Figure 5 shows the functional decomposition of our system. Each DSP has its own instruction cache and data cache. One DSP samples the video from the cameras and performs image processing only, while the second DSP interfaces with the external world:

1. To the GS via the RF modem
2. To the GPS
3. To the contrast control of the cameras

The DSP performing the image processing hands the numerical results and the contrast values to the second DSP via the shared memory. The second DSP does not acknowledge the acceptance of these data, reducing our synchronization costs. This functional allocation of tasks to the two DSPs guarantees that they are loosely coupled; thus, their computational power is maximized.

Figure 5: Camera (analog video in) → DSP 1 → DSP 2 → external world; DSP 1 and DSP 2 exchange 20-30 bytes of numerical data and a 20-30-byte message for contrast control.

Video processing

The streaming analog video is captured by the video decoder, which converts it to digital form and transfers it to the external memory through one of the DMA channels. When a frame is completed:

• An interrupt is generated to the DSP, indicating that a new frame has been received and stored in memory.
• The DMA switches automatically to capture the next frame into a new memory buffer without any intervention by the DSP.

The DSP thus processes frames concurrently while new frames are being captured by the video decoder. To enhance performance, the portions of the images that correspond to the ROIs are transferred from the DSP's external memory to its internal memory.

To enable real-time debugging, we produce an image with video markers on a TV monitor that shows the last frame processed. However, our input video is progressive scan, while a TV monitor supports interlaced video.

Figure 6 lists the line ordering of the two types of video. A progressive scan video frame is a contiguous sequence of 525 lines, starting at line 0 and ending at line 524.
An interlaced video frame is composed of two subframes: one subframe is the sequence of all even lines, starting at line 0 and ending at line 524; the other is the sequence of all odd lines, starting at line 1 and ending at line 523. We again used the fast DMA of the BF561 to convert the progressive video to interlaced video. In addition, we added special markers to the output video to show the results for debugging and recording purposes.

(2) We used the evaluation board as part of the first prototypes delivered to the first customers and for the experimental flights.
Figure 6:

Progressive Video    Interlaced Video
Line 0               Line 0
Line 1               Line 2
Line 2               Line 4
...                  ...
Line 262             Line 524
Line 263             Line 1
Line 264             Line 3
...                  ...
Line 524             Line 523

Audio handling

To enable communication between trainee and trainer outside the normal avionics communication channel, we added voice handling via the radio modem. The radio modem works at a baud rate of 9,600 bits per second, of which 3,200 bits per second are allocated to voice. The solution is to perform compression and decompression in the DSP that interfaces with the external world. The trainer has a microphone and an earphone connected to the GS. The GS compresses the trainer's voice and sends it via the RF modem to the BF561; the BF561 decompresses it and drives the audio decoder, which is connected to the trainee's earphones. The trainee speaks to the trainer through his or her microphone; the BF561 compresses the voice and sends it via the RF modem to the GS. In the GS, the voice is restored and routed to the trainer's earphones. The convention in avionics systems is half-duplex communication. Voice handling involves interacting with the audio codec, the Serial Port (SPORT), and the DMA: the digital interface of the audio codec is the SPORT, and audio streaming is supported by the DMA.

Project status

Our version of the project is now working. The BF561-based system performs image processing at a rate of 14.7 frames per second; the numerical results are transferred via the RF modem to the GS, which restores the panel and the 3D and 2D maps. The computing load of the BF561 is around 80 percent. We are now working to optimize the code and to enhance internal memory usage by transferring only the regions of interest of the frame to internal memory. Our goal is to reduce the CPU load to 40-50 percent.

Dr. Yehuda Singer received his Master of Science from the Weizmann Institute and his PhD from Bar-Ilan University. He has more than 28 years of experience in embedded systems, computer architectures, and FPGAs.
Since 1995, he has acted as the CTO of Beyond2000 Ltd., which is an outsourcing company. He can be reached at Yehuda.singer@be2k.co.il.

Beyond2000 Ltd.
972-8-9265333
www.be2k.co.il

© 2008 OpenSystems Publishing. Not Licensed for distribution. Visit opensystems-publishing.com/reprints for copyright permissions.