Error-Resilient Live Video Multicast using Low-Rate Visual Quality Feedback

David Varodayan and Wai-tian Tan
Hewlett-Packard Laboratories
1501 Page Mill Rd., Palo Alto, California 94304
{varodayan, wai-tian.tan}@hp.com

ABSTRACT

Effective adaptive streaming systems need informative feedback that supports selection of appropriate actions. Packet-level timing and reception statistics are already widely reported in feedback. In this paper, we introduce a method to produce low bit-rate visual quality feedback and evaluate its effectiveness in controlling errors in live video multicast. The visual quality feedback is a digest of picture content, and allows localized comparison in time and space on a continuous basis. This conveniently allows detection and localization of significant errors that may have originated from earlier irrecoverable losses, a task that is typically challenging with packet-level feedback only. Our visual quality feedback has low bit overhead, at about 1% for high-definition video encoded at typical rates. For live video multicast with 10 clients, our experimental results show that the added ability to detect and correct large drift errors significantly reduces the resulting visual quality fluctuations.

Categories and Subject Descriptors
I.4.2 [Image Processing and Computer Vision]: Compression (Coding); C.2.4 [Computer-Communication Networks]: Distributed Systems

General Terms
Design, Performance

Keywords
Error-resilient video, video quality monitoring, video conferencing

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. MMSys'11, February 23–25, 2011, San Jose, California, USA. Copyright 2011 ACM 978-1-4503-0517-4/11/02 ...$10.00.

1. INTRODUCTION

Controlling the effects of packet losses in real-time video applications like video conferencing is a challenging task. Central to the challenge is the low-latency requirement, which limits the effectiveness of general data resilience methods such as retransmissions and forward error correction (FEC). Specifically, retransmission can add significant delay if round-trip delay is large, and FEC needs added latency for interleaving to be effective against burst losses. Consequently, a practical streaming system often needs to operate under the condition that parts of the transmitted video are irrecoverably lost. To this end, the general solution is to employ live encoders that reactively adapt to losses based on receiver feedback. The effectiveness of such methods depends fundamentally not only on the merits of available encoding adaptations, but also on the usefulness of feedback information.

Packet-level timing and reception statistics are already widely reported in feedback. Standard protocols such as RTCP and its extensions [4], in particular, provide average loss rates and individual packet reception statistics that can support adaptive use of FEC and retransmissions. Nevertheless, when irrecoverable losses become inevitable, packet-level statistics are often less useful. This is because they offer little direct guidance in determining where corrective measures are needed without resorting to full emulation of decoder error tracking operations.

A survey of feedback-based error resilience techniques can be found in [6]. In the simplest technique, the encoder codes the entire current frame as an intra-frame whenever a decoder signals that some prior frame has suffered a loss. Intra-coding in this way worsens the rate-distortion performance excessively if losses are frequent. This can happen in a large multicast, where aggregate loss can become large even if the loss for individual clients remains small. Intra-frame coding is supported by packet-level statistics. Reference picture selection (RPS) is a more efficient technique [5, 13] whereby error propagation is stopped by selectively performing predictive encoding using only pictures known to be reconstructed correctly. As the number of decoders grows, the aggregate loss becomes more frequent. The reference pictures selected at the encoder become distant from the current frame to be encoded and, thus, rate-distortion performance suffers significantly. RPS is supported by packet-level statistics.

Both intra-coding and RPS reject an entire picture as corrupt even if only a portion is impacted by loss. Such an assumption is reasonable for low bit-rate video, where one compressed picture is often carried in a single transport packet. For high-definition video, where one compressed picture is often carried in multiple packets, such a conservative approach is inherently inefficient. Instead, it is desirable to selectively correct only the regions impacted by loss. One such method that relies only on packet-level feedback is error


tracking, which requires the encoder to duplicate the operation of the decoders whenever a loss is experienced [3, 12]. In this way, the encoder can correct (e.g., by intra-coding) precisely the parts of the video that would continue to be distorted. This method has two disadvantages. Firstly, the encoder must know the error concealment techniques used by the decoders and, secondly, the encoder's computational burden scales linearly with the number of decoders.

In this paper, we propose a low-complexity error tracking method that relies on low-rate visual quality feedback from the decoders instead of burdening the encoder to duplicate each of the decoders. Specifically, the encoder performs light-weight tracking of severe errors at the decoders using their respective visual quality feedback to take corrective action, such as intra-coding or RPS, for the affected regions only. Compared to traditional intra-frame coding and RPS, the proposed method offers two advantages. Since errors are tracked, mild or imperceptible errors do not need to be corrected unless they propagate into severe errors. Moreover, corrective action taken at the encoder can be targeted to the regions where the errors lie. Both of these advantages translate to less degradation in overall rate-distortion performance.

Our error resilience technique is based on low-rate visual quality feedback, an active area of research for a variety of applications. The term reduced-reference video quality monitoring encompasses techniques for estimating video quality metrics. One such reduced-reference method, proposed in [14], extracts spatiotemporal features from edge-enhanced versions of the video. Another technique, specified in the ITU-T J.240 standard, uses a block-wise pseudorandom projection to create a low-rate signal [1, 7]. In both cases, features or projection coefficients from several frames of the received video are compared with the corresponding coefficients of the original video to obtain a single estimate of the video quality.
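The reduced-reference idea above can be illustrated with a short sketch. This is a hypothetical toy projection (a random ±1 basis with a shared seed), not the actual J.240 or [14] algorithms; the point is only that sender and receiver can compare a handful of coefficients instead of exchanging pixels.

```python
import numpy as np

def reduced_reference_digest(frames, num_coeffs=64, seed=42):
    """Project a group of frames onto a shared pseudorandom +/-1 basis.
    Sender and receiver use the same seed, so the small coefficient
    vectors are directly comparable without transmitting any pixels."""
    rng = np.random.default_rng(seed)
    x = np.concatenate([f.ravel() for f in frames]).astype(float)
    basis = rng.choice([-1.0, 1.0], size=(num_coeffs, x.size))
    return basis @ x / x.size

def quality_estimate(sent_digest, received_digest):
    """A single quality number from digest differences (larger = worse)."""
    return float(np.mean((sent_digest - received_digest) ** 2))
```

Identical videos give an estimate of zero and distortion raises it, but the single number says nothing about where in the picture the damage lies, which is exactly the limitation discussed next.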
A single metric can be used to perform adaptive encoder rate control as in [10], but it does not offer the localization of errors required by our system.

A projection capable of localization is explored in [8]. The authors propose that a video thumbnail be created (using a block-wise mean operation) to assist in location-specific retransmission and error concealment. Another work exploring the use of thumbnails for the dual purpose of error localization and concealment is given in [15], where robustly compressed thumbnails are combined with past full-resolution pictures for concealment purposes. In the current paper, by concentrating on error localization alone, we achieve an overhead of only 1% compared to about 10% in [15]. The work in [8] also suggests binning the quantized projection coefficients to further reduce bit-rate; in effect, quantization is performed using non-contiguous quantization regions. Related to compression by binning is Slepian-Wolf coding [11] of J.240 coefficients for video PSNR estimation [2, 9].

In Section 2, we argue using illustrative examples that our proposed error resilience technique based on low-rate visual quality feedback improves the performance of live video multicast compared to several existing techniques. Section 3 concerns the design of the visual quality feedback; we discuss design considerations and develop the video projection. In Section 4, we incorporate the visual quality feedback technique into a live multicast system and show comparative experimental results.

Figure 1: Spatiotemporal error propagation. Depending on content, uncorrected error in frame 3 can migrate and expand over time. Intra-coding of a later frame is guaranteed to stop error propagation but is expensive. A light-weight method of tracking error propagation is desirable.

2. ILLUSTRATIVE EXAMPLES

Error resilience is a challenging problem because video is coded in a motion-compensated predictive manner.
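The propagation behavior of Fig. 1 can be mimicked with a tiny per-block error map pushed through motion-compensated prediction. This is an illustrative sketch under simplifying assumptions (block granularity, one motion vector per block), not the codec's actual bookkeeping:

```python
import numpy as np

def propagate_error_map(error_map, motion_vectors):
    """One frame step of error propagation: a block is in error if the
    block it predicts from (at its motion-vector offset) is in error."""
    h, w = error_map.shape
    out = np.zeros_like(error_map)
    for by in range(h):
        for bx in range(w):
            dy, dx = motion_vectors[by][bx]  # predictor offset, in blocks
            sy = min(max(by + dy, 0), h - 1)  # clip to picture bounds
            sx = min(max(bx + dx, 0), w - 1)
            out[by, bx] = error_map[sy, sx]
    return out
```

Starting from a loss confined to the bottom slices, a few steps with upward motion spread the error well outside the originally lost region, which is why intra-coding only the lost slices can fail.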
In the typical mode of operation, a block of pixels is encoded as a residual with respect to a block in a previously reconstructed picture at some motion vector offset. This means that, when a slice of video is lost, errors can propagate spatiotemporally as illustrated in Fig. 1. A packet loss in the third frame causes error confined to the slices that were lost. But blocks in these slices serve as predictors for motion-offset blocks in the fourth frame. Likewise, blocks in error in the fourth frame serve as predictors for the fifth frame.

Figure 2: In error resilience by intra-frame coding, the probability of coding an intra-frame does not scale well as the number of decoders grows. (Axes: probability of coding an intra-frame vs. number of clients; curves for 1, 2, 4, 8 and 16 slices per frame.)

The intra-frame coding technique, discussed in the previous section, guarantees that error propagation stops, but becomes costly in terms of rate-distortion performance if used too frequently. Assuming that all decoders suffer independent and identically distributed slice losses with probability p, the probability of coding an intra-frame is 1 − (1 − p)^(kn), where k is the number of slices per frame and n is the number of decoders. Fig. 2 plots this probability when the slice loss probability p = 0.01. Since intra-coding is expensive, this technique does not scale well as the number of decoders grows. Even a probability of coding an intra-frame of 0.1 is excessive for applications that demand tens of consecutive frames to be predictively coded.

Intra-slice coding is a less costly but more opportunistic variant of the above technique. Instead of coding the entire frame as intra upon the receipt of a NACK, only the slices


corresponding to the NACK are intra-coded. This method often works well but fails spectacularly when motion propagates the error beyond the spatial region of the lost slices. Fig. 3 shows an example, for which the encoder reaction time is 4 frame intervals. The lower slices of frame 120 are lost and error-concealed by frame copy. The slice boundary, intersecting the words "Februari 2002," is apparent. The lost slices are intra-coded 4 frames later in frame 124, but by this time the words "Februari 2002" have moved out of the intra-coded region. Therefore, the error continues to propagate to frame 138 and beyond.

Fig. 4 shows the corresponding frames when we use the proposed error resilience technique based on visual quality feedback. Frame 120 is identical to its counterpart in Fig. 3 because the loss and error concealment are the same. After the 4-frame reaction time, the encoder intra-codes the same slices as in the intra-slice coding approach, because those are the slices which displayed errors in frame 120. For this reason, frame 124 is also identical to its counterpart. But the encoder continues to use the visual quality feedback to track and correct the propagating errors. Thus, frame 138 is displayed at much better quality than its counterpart.

3. VISUAL QUALITY FEEDBACK

We now delve into the design of the quality feedback signal sent back to the live video encoder. With the target application of multi-party video conferencing in mind, we first enumerate several design constraints and opportunities. Then we step through the process of designing and tuning the video projection.

3.1 Video Projection Design Considerations

The video projection is a dimensionality-reducing operation applied to the reconstructed received video at a decoder. The decoder feeds back the projection symbols to the encoder as a quality feedback signal. Meanwhile, the encoder applies the same video projection to the local reconstructed video to create a set of local projection symbols.
The encoder compares the two sets of projection symbols, marks some portions of the video as severely degraded, and takes corrective encoding action at those locations. We now outline the main design considerations for this video projection.

3.1.1 Low Bit-Rate

The quality feedback signal must have a bit-rate that is insignificant compared to that of the primary video stream. In a video conferencing system, video travels both into and out of each terminal. Therefore, the quality feedback for a given stream can piggyback on the packets of the reciprocal stream. We target a bit-rate for the quality feedback of approximately 1% of the primary video stream. We will design a video projection that is transmitted at around 20 kbps for HD video at resolution 720×1280 and frame rate 30 Hz. This level is below 1.35% of the primary video bit-rate as long as the video is encoded above 1.5 Mbps.

3.1.2 Localization

The quality feedback signal must provide sufficient information for the encoder to localize quality degradations within received video frames. In this way, the encoder can take corrective action (such as intra-coding) only in the regions where necessary. We will produce independent quality feedback for each 64×64 block of pixels.

Figure 3: Example of failure of the intra-slice coding variant of intra-frame coding. Frame 120 (a, loss) suffers from slice losses in the bottom portion of the frame and is concealed via frame copy. After an encoder reaction time of 4 frames, the lost region is intra-coded in frame 124 (b, recovery). But imperfections in error recovery propagate to future frames such as frame 138 (c), even when there is no further loss.

At this block size, the bit-rate target corresponds to 3 bits per block.
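As a concrete illustration of this budget, consider a block-mean projection quantized to 3 bits per 64×64 block. This is a hypothetical simplification (the paper's actual feedback combines mean and difference projections); the rate arithmetic below uses the 704×1280, 25 Hz source of Section 4:

```python
import numpy as np

BLOCK = 64          # block size from the paper
BITS_PER_BLOCK = 3  # per-block feedback budget

def block_feedback_symbols(frame, step=8.0):
    """Quantize each 64x64 block mean into a 3-bit symbol: coarse
    quantization followed by a modulo (binning) keeps only 3 bits."""
    h, w = frame.shape
    means = frame.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK).mean(axis=(1, 3))
    return np.floor(means / step).astype(int) % (1 << BITS_PER_BLOCK)

# Rate check: 11 x 20 = 220 blocks per frame, 3 bits each, 25 frames/s.
blocks = (704 // BLOCK) * (1280 // BLOCK)   # 220 blocks
rate_bps = blocks * BITS_PER_BLOCK * 25     # 16500 bps = 16.5 kbps
```

The encoder flags a block when its local symbol differs from the fed-back one; the modulo binning (as suggested in [8]) means a matching symbol is necessary but not sufficient evidence of fidelity, traded for fewer feedback bits.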
Any smaller square block size would allow less than 1 bit per block.

3.1.3 Perceptibility

The quality feedback signal must enable the encoder to distinguish between mild (close to imperceptible) degradations and severe degradations in the received video. The encoder can, thus, take corrective action (and incur additional primary bit-rate) where it is needed most. This requirement


Figure 9: Bayer-like patterns of projection types. The patterns cycle every 4 frames.

Figure 8: Detection performance ROC curves for the vertical difference projection applied to "jeff", "shields" and "mobcal." Each curve is traced from left to right by varying the quantization step size from 2^4 to 2^−2 by factors of 2. (Axes: false detection ratio vs. undetection ratio; curves for unprocessed, spatially processed, and spatially and temporally processed feedback.)

... over windows of 4 frames similarly takes advantage of the cycling patterns.

Fig. 10 shows ROC scatter plots for the combined projection. Each point is generated by choosing a pair of quantization step sizes drawn from 2^5 to 2^−1 for the mean projection and from 2^4 to 2^−2 for the other blocks. The performance of the combined projection, after spatial and temporal processing, exceeds that of the other projections for all sequences. Consider the following pair of quantization step sizes: 2^3 for the mean projection and 2^−2 for the horizontal and vertical difference projections. At this setting, the false detection ratios are 10%, 9.5% and 8.3% for "jeff", "shields" and "mobcal", respectively, and the corresponding undetection ratios are 0.2%, 0.4% and 0.5%. Thus, the combined projection with quantization step sizes 2^3 and 2^−2 provides for very effective low-rate visual quality feedback.

4. LIVE VIDEO MULTICAST SYSTEM

In this section, we present evaluation results of our proposed visual quality feedback in a live streaming multicast setting, where one live encoded video stream is distributed to multiple clients with independent loss patterns. We seek
We seekto compare the video quality achievable using visual qualityfeedback and schemes that only exploit packet level lossstatistics such as sequence number of lost packets.Our focus is to evaluate the advantage of visual qualityfeedback in a practical and reasonable setting rather thandeveloping a state-of-the-art multicast system. As such, wedo not attempt to employ and optimally combine multipleerror recovery tools such as retransmissions, error correctingcodes, resilientsourcecodingandpeer-to-peererrorrecoverystructures. Instead, we will focus our attention on a singleeffective error recovery measure: intra-coding.4.1 System DescriptionWe perform the following three experiments using thesetup in Fig. 11:• intra-frame: the feedback from each client contains sequencenumbers of its lost packets . The adaptationagent signals the encoder to intra-code all slices of the


next frame if any client experiences a loss since the last intra-coded frame.

• intra-slice: same feedback as intra-frame. The adaptation agent locates the slices corresponding to the sequence numbers of lost packets for all clients, and signals all slices with losses to be intra-coded in the next frame.

• visual: the feedback contains the visual quality feedback described in Section 3. The adaptation agent performs the same projection on the loss-free video (not shown) to determine which slices contain errors for each client. All slices with errors for at least one client are intra-coded in the next frame.

Figure 10: Detection performance ROC scatter plots for the combined projection applied to "jeff", "shields" and "mobcal." The points represent quantization step sizes drawn from 2^5 to 2^−1 for the mean blocks and from 2^4 to 2^−2 for the other blocks.

Figure 11: Experimental setup for live video multicast. Depending on the experiment, feedback can contain sequence numbers of lost packets, or visual feedback to determine which slices, if any, need to be intra-coded by the live encoder in the next frame.

For live video encoding, we employ the H.264 encoder from Intel Integrated Performance Primitives v6.1, which we have modified to support signalling of intra-slice coding.
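The adaptation agent's decision rule for the three experiments can be sketched as follows. This is a simplification of the experiments above; the function and argument names are ours, not from the actual implementation:

```python
def slices_to_intra_code(policy, per_client_flags, num_slices=11):
    """per_client_flags: one set of slice indices per client. These are
    lost slices for intra-frame/intra-slice, and mismatched-projection
    slices for visual. Returns slice indices to intra-code next frame."""
    flagged = set().union(*per_client_flags) if per_client_flags else set()
    if policy == "intra-frame":
        # Any loss at any client triggers a full intra frame.
        return set(range(num_slices)) if flagged else set()
    # intra-slice and visual intra-code only the union of flagged slices;
    # they differ in how the flags are produced, not in this rule.
    return flagged
```

The practical difference is upstream of this rule: under visual, the flags keep tracking drift in later frames, whereas packet-sequence flags exist only at the frame where the loss occurred.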
For the video source, we employ the 250-frame Mobile Calendar sequence at 720×1280 and 25 frames per second, which we cropped to a size of 704×1280 so that both dimensions are a multiple of 64, our chosen block size in Section 3. The source is repeatedly transmitted in a loop to support longer experiments. Each frame is coded using 11 slices, each with size 64×1280. In all experiments, the encoder is configured to produce a constant bit-rate of 1.7 Mbps, and each coded slice is broken into one or more packets with maximum size 1400 bytes. As discussed in Section 3, the feedback overhead of visual is 3 bits per 64×64 block, or 16.5 kbps, which is about 1% of the video bit-rate.

Packet losses are simulated using trace files that are generated independently for the 10 clients. The loss model is random loss with a low packet loss rate of 0.05%. For fair comparison among the experiments, the same set of trace files is used for all experiments.

The simulated delay in our experiment is 10 ms for all clients, which is less than half the inter-frame time of 40 ms for video at 25 Hz. This means a loss in frame N can be reported to the adaptation agent in time to effect corrective measures at frame N + 1. In a general multicast system, the clients may have widely varying delays. Nevertheless, a more uniform and low delay is of practical interest in many settings such as school and corporate campuses.
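The loss traces, and the intra-frame probabilities quoted in Section 4.2, can be reproduced with a few lines. This is a sketch of the simulation setup under the stated i.i.d. loss model; the trace representation is ours:

```python
import random

PACKET_LOSS_RATE = 0.0005   # 0.05%, as in the experiments
PACKETS_PER_FRAME = 11      # 11 slices, commonly one packet each
NUM_CLIENTS = 10

def make_loss_trace(num_packets, seed):
    """Independent i.i.d. loss trace for one client (True = packet lost)."""
    rng = random.Random(seed)
    return [rng.random() < PACKET_LOSS_RATE for _ in range(num_packets)]

traces = [make_loss_trace(100_000, seed=c) for c in range(NUM_CLIENTS)]

# Probability that some packet of a frame is lost at some client, which is
# what forces an intra frame under the intra-frame policy:
p_one_client = 1 - (1 - PACKET_LOSS_RATE) ** PACKETS_PER_FRAME  # about 0.5%
p_any_client = 1 - (1 - p_one_client) ** NUM_CLIENTS            # about 5%
```

These figures match the "about 0.5%" per client and "about 5%" overall intra-coding probabilities discussed in the results.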


4.2 Experimental Results

We examine the video quality of the schemes intra-frame, intra-slice, and visual in terms of average Peak Signal-to-Noise Ratio (PSNR)¹. We first show the per-frame PSNR averaged over all clients for the various schemes in Fig. 12. The apparent 10-second periodic structure is a direct result of transmitting a 10-second clip repeatedly in a loop. We see that intra-frame achieves the lowest average PSNR, at 31.25 dB, with large quality fluctuation over time despite the low packet loss rate of 0.05%. This is due to the aggressive bit usage of an intra-coded frame when only one packet is lost. Specifically, with 11 slices commonly mapped to 11 packets, the probability of coding an intra-frame due to losses from one client becomes about 0.5%. With 10 clients, the overall probability of intra-coding a frame increases to about 5%. The resulting frequent usage of intra-coding causes average PSNR to be low. Furthermore, these intra-coded frames are created in response to random losses, and may occasionally cluster in time, causing the need to produce very low quality video in order to maintain average bit-rate, resulting in high volatility in quality over time.

In contrast, by trading off the guarantee in error recovery for lower bit cost, intra-slice avoids the two problems of low and volatile quality. It achieves an average PSNR of 33.33 dB. Similar average performance is obtained by visual, achieving a PSNR of 33.24 dB. Nevertheless, while the average PSNR reported in Fig. 12 is a good way to summarize the experience of all clients, the averaged result is not indicative of the actual experience of any one client.

The individual PSNR traces for all 10 clients using the visual and intra-slice schemes are shown in Fig. 13. Notice that the traces for visual and intra-slice are similar with a few notable exceptions. For client 1, intra-slice suffers from a severe 7-second drop in PSNR of about 10 dB relative to visual.
Clients 5 and 7, when using intra-slice, also experience long bouts of error propagation of about 5 and 3 seconds in duration, respectively. In contrast, the noticeable PSNR drops encountered by using visual are limited to individual frames. These brief errors can be visually masked by momentarily freezing the previous frame. Under intra-slice, there is error indication only at the frame where losses occur. There is no provision for error tracking to determine subsequent error propagation. The support for continuous error checking is one key advantage of using visual quality feedback over simple loss indicators like sequence numbers of lost packets.

The current implementation of visual performs intra-coding at a slice level. The slices are of size 64×1280, each containing 20 64×64 blocks. As discussed in Section 3, the 1% bit overhead allows for error detection on a granularity of a 64×64 block. As a result, greater improvement over intra-slice may be possible by adopting intra-coding on a per-64×64-block rather than per-64×1280-slice basis. This is a subject of future investigation.

¹ Computed as 10 × log10(255×255/MSE), where MSE is the pixel-wise mean square error. Generally a PSNR of about 35 dB is considered excellent, 32 dB is considered good, and below 30 dB is considered poor.

5. CONCLUSIONS

In this paper, we argue that providing useful feedback is an important task for feedback-adaptive streaming applications. While typical feedback schemes exploit only packet-level statistics, we show that it is possible to construct visual feedback that allows error tracking and localization with a low overhead of about 1%. In such a way, selective correction of the necessary portions can be realized without the complex task of emulating decoders. We also demonstrate in a live video multicast setting how such visual feedback can be exploited to allow more intelligent use of a specific adaptation method, namely intra-coding of slices.
With visual quality feedback, we achieve over 2 dB gain compared to intra-frame coding, and significantly less quality fluctuation than an intra-slice scheme.

There are several aspects of the current work that can be further improved. The encoder should match the region where intra-coding is applied to the detection granularity of 64×64 rather than one slice. Slepian-Wolf coding of feedback should provide more efficient compression, which would allow lower overhead or more reliable feedback. A broader study should investigate the impact of conflicting uses of the network (such as other applications and video streams) on the proposed error resilience system.

6. REFERENCES

[1] ITU-T Recommendation J.240: Framework for remote monitoring of transmitted picture signal-to-noise ratio using spread-spectrum and orthogonal transform, Jun. 2004.
[2] K. Chono, Y.-C. Lin, D. P. Varodayan, and B. Girod. Reduced-reference image quality estimation using distributed source coding. In Proc. IEEE Internat. Conf. Multimedia and Expo, Hannover, Germany, Jun. 2008.
[3] N. Faerber, E. Steinbach, and B. Girod. Robust H.263 compatible video transmission over wireless channels. In Proc. Picture Coding Symp., Melbourne, Australia, 1996.
[4] T. Friedman, R. Caceres, and A. Clark. RTP control protocol extended reports (RTCP XR). RFC 3611, Nov. 2003.
[5] S. Fukunaga, T. Nakai, and H. Inoue. Error resilient video coding by dynamic replacing of reference pictures. In Proc. IEEE Global Commun. Conf., London, United Kingdom, 1996.
[6] B. Girod and N. Faerber. Feedback-based error control for mobile video transmission. Proc. IEEE, 87(10):1707–1723, Oct. 1999.
[7] R. Kawada, O. Sugimoto, A. Koike, M. Wada, and S. Matsumoto. Highly precise estimation scheme for remote video PSNR using spread spectrum and extraction of orthogonal transform coefficients. Electronics Commun. Japan (Part I), 89(6):51–62, Jun. 2006.
[8] Z. Li, Y.-C. Lin, D. P. Varodayan, P. Baccichet, and B. Girod.
Distortion-aware retransmission and concealment of video packets using a Wyner-Ziv-coded thumbnail. In Proc. IEEE Internat. Workshop Multimedia Signal Process., Cairns, Australia, Oct. 2008.


[9] Y.-C. Lin, D. P. Varodayan, and B. Girod. Video quality monitoring for mobile multicast peers using distributed source coding. In Proc. Internat. Mobile Multimedia Commun. Conf., London, United Kingdom, Sept. 2009.
[10] X. Lu, S. Tao, M. El Zarki, and R. Guerin. Quality-based adaptive video over the Internet. In Proc. Commun. Networks Distrib. Syst. Conf., Orlando, Florida, 2003.
[11] D. Slepian and J. K. Wolf. Noiseless coding of correlated information sources. IEEE Trans. Inform. Theory, 19(4):471–480, Jul. 1973.
[12] E. Steinbach, N. Faerber, and B. Girod. Standard compatible extension of H.263 for robust video transmission in mobile environments. IEEE Trans. Circuits Syst. Video Technol., 7(12):872–881, Dec. 1997.
[13] Y. Tomita, T. Kimura, and T. Ichikawa. Error resilient modified inter-frame coding system for limited reference picture memories. In Proc. Picture Coding Symp., Berlin, Germany, 1997.
[14] S. Wolf and M. Pinson. Spatial-temporal distortion metrics for in-service quality monitoring and any digital video system. In Proc. SPIE Symp. Voice Video Data Commun., Boston, Massachusetts, 1999.
[15] C. Yeo, W. Tan, and D. Mukherjee. Receiver error concealment using acknowledge preview (RECAP) - an approach to resilient video streaming. In Proc. IEEE Internat. Conf. Acoustics, Speech and Signal Process., Taipei, Taiwan, 2009.


Figure 12: Per-frame PSNR, averaged over all clients. The traces for visual, intra-slice and intra-frame are shown as solid black, dashed red and dotted black curves, respectively.

Figure 13: PSNR traces for all 10 clients. The traces for visual and intra-slice are shown as solid black and dashed red curves, respectively.
