TITRE Adaptive Packet Video Streaming Over IP Networks - LaBRI
In a packet video application, a packet is either received correctly or lost entirely, so the boundaries of lost information coincide exactly with packet boundaries. To combat the effect of packet loss, a specific packet payload format must be designed for each video standard. The video server is in the best position to apply a good packetization: for example, if the network path-MTU is known, the video encoder can produce video packets that are both MTU-sized and independently decodable. Many codecs, such as H.263+, MPEG-4, and H.264, support the creation of different forms of independently decodable video packets.
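As a minimal sketch of this idea, the following code groups independently decodable slices into packets no larger than the path MTU; the function name, slice sizes, and MTU value are illustrative assumptions, not part of any codec's specification.

```python
# Hypothetical sketch: an MTU-aware packetizer that groups independently
# decodable video slices into packets no larger than the path MTU.
# A real encoder would also constrain each slice to fit within the MTU.

def packetize(slices, mtu):
    """Group encoded slices into packets of at most `mtu` bytes.

    Each slice is assumed to be independently decodable, so every
    packet boundary is also a valid resynchronization point after loss.
    """
    packets, current, size = [], [], 0
    for s in slices:
        # Flush the current packet if adding this slice would exceed the MTU.
        if size + len(s) > mtu and current:
            packets.append(b"".join(current))
            current, size = [], 0
        current.append(s)
        size += len(s)
    if current:
        packets.append(b"".join(current))
    return packets

# Example: three slices, 1500-byte path MTU.
slices = [b"\x00" * 900, b"\x00" * 500, b"\x00" * 700]
print([len(p) for p in packetize(slices, 1500)])  # → [1400, 700]
```

Because no packet spans a slice boundary, losing one packet loses whole slices but never corrupts the slices carried in other packets.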
Since the reliable transmission mechanism offered by TCP incurs considerable retransmission overhead, RTP [110] does not rely on TCP. Instead, applications run on top of UDP, and coping with lost packets is left to the applications. Following the application-level framing principle, RTP functionality is usually integrated into the application rather than implemented as a separate protocol entity. RTP provides basic packet format definitions to support real-time communication but does not define control mechanisms or algorithms. The packet formats carry the information required for audio and video data transfer, such as the sequence of incoming video data packets. Continuous media such as uncompressed PCM audio can be synchronized using sequence numbers, while non-continuous data such as MPEG can be resynchronized using the timestamp fields. Many RTP payload formats have been proposed for video codecs: H.261 [111], MPEG-1/MPEG-2 Video [112], H.263 [113], MPEG-4 Audio/Visual streams [114], and the in-progress RTP payload for JVT/H.264 video [28].
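The sequence number and timestamp fields mentioned above live in the 12-byte fixed RTP header defined in RFC 3550. The following sketch packs and parses that fixed header (assuming no padding, extension, or CSRC entries); the field values in the example are arbitrary.

```python
# Sketch of the 12-byte fixed RTP header (RFC 3550), showing the
# sequence number (ordering / loss detection) and timestamp
# (media synchronization) fields used by a receiver.
import struct

def build_rtp_header(seq, timestamp, ssrc, payload_type, marker=False):
    """Pack a fixed RTP header: version 2, no padding/extension/CSRC."""
    byte0 = 2 << 6                                   # V=2, P=0, X=0, CC=0
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

def parse_rtp_header(data):
    """Unpack the fixed header and return the fields a receiver uses."""
    byte0, byte1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {"version": byte0 >> 6, "marker": byte1 >> 7,
            "payload_type": byte1 & 0x7F, "seq": seq,
            "timestamp": ts, "ssrc": ssrc}

hdr = build_rtp_header(seq=7, timestamp=90000, ssrc=0x1234, payload_type=96)
print(parse_rtp_header(hdr)["seq"])  # → 7
```

A receiver detects a loss when the sequence number jumps, and uses the timestamp to place the payload correctly on the media timeline regardless of arrival order.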
In MPEG-2, at the start of a new slice, information called a slice header is placed within the bitstream. The slice header provides information which allows the decoding process to be restarted when errors have occurred. In MPEG-4, a periodic synchronization marker is inserted to allow the decoder to resynchronize after synchronization loss.
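A decoder exploits these markers by scanning forward for the next one after an error. The sketch below searches for the byte-aligned MPEG start-code prefix (0x00 0x00 0x01) that precedes slice headers; real MPEG-4 resynchronization markers are bit-aligned variable-length codes, so this byte-aligned version is a simplification.

```python
# Hedged sketch: a decoder-side scan for the next byte-aligned MPEG
# start-code prefix (0x00 0x00 0x01). After an error, decoding can
# resume at the slice header that follows this prefix. Actual MPEG-4
# resync markers are bit-aligned; byte alignment is a simplification.

def next_start_code(bitstream, pos):
    """Return the index of the next 0x000001 prefix at or after `pos`,
    or -1 if none remains in the buffer."""
    return bitstream.find(b"\x00\x00\x01", pos)

# Two garbage bytes, then a start code followed by slice data.
corrupt = b"\xff\xfe" + b"\x00\x00\x01\x01" + b"\xaa\xbb"
print(next_start_code(corrupt, 0))  # → 2
```

Everything between the error position and the recovered marker is discarded; the periodic markers bound how much data can be lost to a single error.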
In the data partitioning technique, the video bitstream is organized and prioritized. Encoded video data is divided into high-priority data, such as headers, motion vectors, low-frequency DCT coefficients, and block addresses, and low-priority data, such as higher-frequency DCT coefficients. This organization follows from the idea that bits which closely follow a synchronization marker are more likely to be decoded accurately than those further away. This approach is referred to as data partitioning in MPEG-4. Thus, if two channels are available, the higher-priority data is transmitted on the channel with better error performance and the less critical data on the channel with poorer error performance. The degradation due to channel errors is minimized since the critical parts of the bitstream are better protected. Note, however, that a decoder not designed for data-partitioned bitstreams cannot decode the data from either channel.
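The splitting step can be sketched as follows; the element-tagging scheme and category names are illustrative assumptions rather than MPEG-4 syntax, but the routing of critical elements to the high-priority partition mirrors the technique described above.

```python
# Hypothetical sketch of data partitioning: syntax elements are routed
# into a high-priority partition (headers, motion vectors, low-frequency
# DCT coefficients, block addresses) and a low-priority partition
# (high-frequency DCT coefficients), so the critical partition can be
# sent over the channel with better error performance.
# Element kinds below are illustrative labels, not MPEG-4 syntax names.

HIGH_PRIORITY = {"header", "motion_vector", "dc_coeff", "block_address"}

def partition(elements):
    """Split (kind, payload) pairs into (high, low) priority lists."""
    high = [e for e in elements if e[0] in HIGH_PRIORITY]
    low = [e for e in elements if e[0] not in HIGH_PRIORITY]
    return high, low

stream = [("header", b"..."), ("motion_vector", b".."),
          ("dc_coeff", b"."), ("ac_coeff_hf", b"....")]
high, low = partition(stream)
print(len(high), len(low))  # → 3 1
```

If the low-priority channel fails, the decoder can still reconstruct a coarse picture from headers, motion vectors, and low-frequency coefficients alone.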
3.2.2.5 Error Concealment<br />
To conceal the fact that an error has occurred in the decoded video, the receiver must apply error concealment techniques that estimate the lost information or missing pixels. Since video compression removes redundant data, estimating lost information is not an easy task. A trivial technique to conceal packet loss is to replace the corresponding block with a green or black square; however, this approach can be highly disturbing to the human eye. More efficient techniques are considered, such as (1) interpolation, (2) freeze frame, and (3) motion estimation and compensation. Interpolation is used to smoothly extrapolate the missing block from the